Automating Initial Guess Generation for High Fidelity Trajectory Optimization Tools
NASA Technical Reports Server (NTRS)
Villa, Benjamin; Lantoine, Gregory; Sims, Jon; Whiffen, Gregory
2013-01-01
Many academic studies in spaceflight dynamics rely on simplified dynamical models, such as restricted three-body models or averaged forms of the equations of motion of an orbiter. In practice, the end result of these preliminary orbit studies needs to be transformed into more realistic models, in particular to generate good initial guesses for high-fidelity trajectory optimization tools like Mystic. This paper reviews and extends some of the approaches used in the literature to perform such a task, and explores the inherent trade-offs of such a transformation with a view toward automating it for the case of ballistic arcs. Sample test cases in libration point regimes and small-body orbiter transfers are presented.
Porsa, Sina; Lin, Yi-Chung; Pandy, Marcus G
2016-08-01
The aim of this study was to compare the computational performances of two direct methods for solving large-scale, nonlinear, optimal control problems in human movement. Direct shooting and direct collocation were implemented on an 8-segment, 48-muscle model of the body (24 muscles on each side) to compute the optimal control solution for maximum-height jumping. Both algorithms were executed on a freely-available musculoskeletal modeling platform called OpenSim. Direct collocation converged to essentially the same optimal solution up to 249 times faster than direct shooting when the same initial guess was assumed (3.4 h of CPU time for direct collocation vs. 35.3 days for direct shooting). The model predictions were in good agreement with the time histories of joint angles, ground reaction forces and muscle activation patterns measured for subjects jumping to their maximum achievable heights. Both methods converged to essentially the same solution when started from the same initial guess, but computation time was sensitive to the initial guess assumed. Direct collocation demonstrates exceptional computational performance and is well suited to performing predictive simulations of movement using large-scale musculoskeletal models.
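The direct collocation transcription compared above can be illustrated on a much smaller problem. The sketch below is a minimal example (not OpenSim's implementation): trapezoidal collocation on a double integrator, where states and controls at all mesh points become optimization variables and the dynamics are enforced through "defect" equality constraints, so the solver sees the whole trajectory at once rather than integrating it as in direct shooting.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: drive a double integrator (position p, velocity v,
# control u) from rest at p=0 to rest at p=1 in T=1 s while minimizing
# the integrated squared control effort.
N = 11                      # mesh points
T = 1.0
h = T / (N - 1)

def unpack(z):
    return z[0:N], z[N:2*N], z[2*N:3*N]    # p, v, u

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum(u**2)

def defects(z):
    p, v, u = unpack(z)
    # trapezoidal collocation: x(k+1) - x(k) - h/2*(f(k)+f(k+1)) = 0
    dp = p[1:] - p[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    bc = [p[0], v[0], p[-1] - 1.0, v[-1]]  # boundary conditions
    return np.concatenate([dp, dv, bc])

# initial guess: straight-line position ramp, zero velocity and control
z0 = np.concatenate([np.linspace(0, 1, N), np.zeros(N), np.zeros(N)])
res = minimize(objective, z0, constraints={'type': 'eq', 'fun': defects},
               method='SLSQP')
p, v, u = unpack(res.x)
```

Because every defect is an explicit constraint, the sparsity and locality of the problem are exposed to the NLP solver, which is the root of collocation's speed advantage noted in the abstract.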
NASA Astrophysics Data System (ADS)
Zhang, Yunlu; Yan, Lei; Liou, Frank
2018-05-01
The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a significant impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel accuracy in cases with small or large translation, deformation, or rotation.
Stochastic approach to data analysis in fluorescence correlation spectroscopy.
Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo
2006-09-21
Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts arise. For known fit models and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, a procedure is needed that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach that analyzes FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fit by searching for solutions through global sampling. It is flexible and, at the same time, computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented, and the robustness of PGSL for parameter estimation on noisy experimental data is verified. We further extend the scope of PGSL through a hybrid analysis wherein the output of PGSL is fed as the initial guess to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
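The hybrid strategy described above, a global stochastic search whose best sample seeds a local least-squares refinement, can be sketched as follows. The one-component FCS model, the parameter box, and the plain random sampling are illustrative assumptions standing in for the actual PGSL algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy one-component FCS autocorrelation model: G(tau) = (1/N) / (1 + tau/tau_D)
def fcs_model(tau, n, tau_d):
    return 1.0 / n / (1.0 + tau / tau_d)

rng = np.random.default_rng(0)
tau = np.logspace(-6, 0, 80)
g_obs = fcs_model(tau, 2.5, 1e-3) + rng.normal(0, 0.002, tau.size)

# Stage 1: global random sampling over a wide box -- no initial guess needed.
best, best_sse = None, np.inf
for _ in range(2000):
    n_try = rng.uniform(0.1, 20.0)
    td_try = 10 ** rng.uniform(-6, 0)          # sample tau_D in log space
    sse = np.sum((g_obs - fcs_model(tau, n_try, td_try))**2)
    if sse < best_sse:
        best, best_sse = (n_try, td_try), sse

# Stage 2: local Levenberg-Marquardt refinement seeded by the global winner.
popt, _ = curve_fit(fcs_model, tau, g_obs, p0=best)
```

The global stage is robust but coarse; the local stage is precise but needs the seed, which is exactly the division of labor the abstract reports for the PGSL-plus-ML combination.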
Finite-difference solution of the compressible stability eigenvalue problem
NASA Technical Reports Server (NTRS)
Malik, M. R.
1982-01-01
A compressible stability analysis computer code is developed. The code uses a matrix finite difference method for local eigenvalue solution when a good guess for the eigenvalue is available and is significantly more computationally efficient than the commonly used initial value approach. The local eigenvalue search procedure also results in eigenfunctions and, at little extra work, group velocities. A globally convergent eigenvalue procedure is also developed which may be used when no guess for the eigenvalue is available. The global problem is formulated in such a way that no unstable spurious modes appear so that the method is suitable for use in a black box stability code. Sample stability calculations are presented for the boundary layer profiles of a Laminar Flow Control (LFC) swept wing.
Determination of an Optimal Control Strategy for a Generic Surface Vehicle
2014-06-18
paragraphs uses the numerical procedure in MATLAB’s BVP (bvp4c) algorithm using the continuation method. The goal is to find a solution to the set of...solution. Solving the BVP problem using bvp4c requires an initial guess for the solution. Note that the algorithm is very sensitive to the particular...form of the initial guess. The quality of the initial guess is paramount in convergence speed of the BVP algorithm and often determines if the
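The continuation idea referenced in this excerpt, solving a sequence of easier BVPs and recycling each converged solution as the next initial guess, can be illustrated with SciPy's solve_bvp (a bvp4c-like collocation solver) on the Bratu problem, which is an assumed stand-in for the surface-vehicle equations:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Bratu problem: y'' + lam*exp(y) = 0, y(0) = y(1) = 0.  Near the
# critical lam a zero initial guess can fail, but stepping lam upward
# and warm-starting from the previous solution keeps the solver on the
# lower branch throughout.
def make_rhs(lam):
    def rhs(x, y):
        return np.vstack([y[1], -lam * np.exp(y[0])])
    return rhs

def bc(ya, yb):
    return np.array([ya[0], yb[0]])

x = np.linspace(0.0, 1.0, 41)
y = np.zeros((2, x.size))           # crude guess, adequate only when lam is small
for lam in [0.5, 1.0, 2.0, 3.0]:    # continuation steps
    sol = solve_bvp(make_rhs(lam), bc, x, y)
    x, y = sol.x, sol.y             # recycle as the next initial guess
```

Each intermediate solution lies close to the next one, so the solver's sensitivity to the initial guess, noted above for bvp4c, is never stressed.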
The Double Star Orbit Initial Value Problem
NASA Astrophysics Data System (ADS)
Hensley, Hagan
2018-04-01
Many precise algorithms exist to find a best-fit orbital solution for a double star system given a good enough initial value. Desmos is an online graphing calculator tool with extensive capabilities to support animations and defining functions. It can provide a useful visual means of analyzing double star data to arrive at a best guess approximation of the orbital solution. This is a necessary requirement before using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.
Effective calculation of power system low-voltage solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overbye, T.J.; Klump, R.P.
1996-02-01
This paper develops a method for reliably determining the set of low-voltage solutions which are closest to the operable power flow solution. These solutions are often used in conjunction with techniques such as energy methods and the voltage instability proximity index (VIPI) for assessing system voltage stability. This paper presents an algorithm which provides good initial guesses for these solutions. The results are demonstrated on a small system and on larger systems with up to 2,000 buses.
Integration of social information by human groups
Granovskiy, Boris; Gold, Jason M.; Sumpter, David; Goldstone, Robert L.
2015-01-01
We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy but with information about the decisions made by peers in their group. The “wisdom of crowds” hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0 and 100% (e.g., ‘What percentage of Americans are left-handed?’). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move towards the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. 
This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased. PMID:26189568
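The two-stage cognitive model described above, first decide whether to take the peer guesses into account, then move toward the group mean by a step proportional to the distance, can be sketched deterministically as follows; the step size and the heed flags are illustrative assumptions, not fitted values.

```python
# Each individual either keeps their guess or moves a fraction alpha of
# the way toward the group mean, per the model in the abstract.
def update_guesses(guesses, heed, alpha=0.4):
    mean = sum(guesses) / len(guesses)
    return [g + alpha * (mean - g) if h else g
            for g, h in zip(guesses, heed)]

# An outlying guess (e.g. 50% for '% of Americans who are left-handed')
# is pulled toward the rest of the group, away from 50%.
group = [8.0, 10.0, 12.0, 50.0]
after = update_guesses(group, heed=[False, False, False, True])
```

With the group mean at 20, the outlier moves from 50 to 38 while the other guesses are unchanged, showing how outlying opinions are drawn in and group polarization away from 50% arises.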
Huff, Mark J; Yates, Tyler J; Balota, David A
2018-05-03
Recently, we have shown that two types of initial testing (recall of a list or guessing of critical items, repeated over 12 study/test cycles) improved final recognition of related and unrelated word lists relative to restudy. These benefits were eliminated, however, when test instructions were manipulated within subjects and presented after the study of each list, procedures designed to minimise expectancy of a specific type of upcoming test [Huff, Balota, & Hutchison, 2016. The costs and benefits of testing and guessing on recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 1559-1572. doi: 10.1037/xlm0000269], suggesting that testing and guessing effects may be influenced by encoding strategies specific to the type of upcoming task. We follow up on these experiments by examining test-expectancy processes in guessing and testing. Testing and guessing benefits over restudy were not found when test instructions were presented either after (Experiment 1) or before (Experiment 2) a single study/task cycle was completed, nor were benefits found when instructions were presented before study/task cycles and the task was repeated three times (Experiment 3). Testing and guessing benefits emerged only when instructions were presented before a study/task cycle and the task was repeated six times (Experiments 4A and 4B). These experiments demonstrate that initial testing and guessing can produce memory benefits in recognition, but only following substantial task repetitions, which likely promote task-expectancy processes.
Partitioned-Interval Quantum Optical Communications Receiver
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.
2013-01-01
The proposed quantum receiver in this innovation partitions each binary signal interval into two unequal segments: a short "pre-measurement" segment in the beginning of the symbol interval used to make an initial guess with better probability than 50/50 guessing, and a much longer segment used to make the high-sensitivity signal detection via field-cancellation and photon-counting detection. It was found that by assigning as little as 10% of the total signal energy to the pre-measurement segment, the initial 50/50 guess can be improved to about 70/30, using the best available measurements such as classical coherent or "optimized Kennedy" detection.
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
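The first stage above, interpolating known data to an initial guess grid "by a simple distance weighting procedure", is commonly implemented as inverse-distance weighting. A minimal sketch, where the weight exponent is an assumption:

```python
# Inverse-distance weighting of irregularly spaced stations onto an
# arbitrary point, yielding the initial guess field value there.
def idw(stations, values, x, y, power=2.0):
    num = den = 0.0
    for (sx, sy), v in zip(stations, values):
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v                     # exact hit: keep the known datum
        w = 1.0 / d2 ** (power / 2.0)    # weight falls off with distance
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
values = [10.0, 20.0, 30.0]
guess = idw(stations, values, 0.5, 0.5)  # equidistant: plain average
```

The resulting field is then what the parabolic leapfrog correction iteratively adjusts toward the known data.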
Objective analysis of pseudostress over the Indian Ocean using a direct-minimization approach
NASA Technical Reports Server (NTRS)
Legler, David M.; Navon, I. M.; O'Brien, James J.
1989-01-01
A technique not previously used in the objective analysis of meteorological data is used here to produce monthly average surface pseudostress data over the Indian Ocean. An initial guess field is derived, and a cost functional is constructed with five terms: approximation to the initial guess, approximation to climatology, a smoothness term, and two kinematic terms. The functional is minimized using a conjugate-gradient technique, and the weight for the climatology term controls the overall balance of influence between the climatology and the initial guess. Results from various weight combinations are presented for January and July 1984. Quantitative and qualitative comparisons to the subjective analysis are made to find which weight combination provides the best results. The weight on the approximation to climatology is found to balance the influence of the original field and the climatology.
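A one-dimensional toy version of such a variational blend can be sketched as follows: the analyzed field minimizes a cost pulling it toward the initial guess and toward climatology, plus a smoothness penalty. The weights and stand-in fields are illustrative assumptions, and the two kinematic terms of the actual functional are omitted.

```python
import numpy as np
from scipy.optimize import minimize

n = 50
x = np.linspace(0.0, 1.0, n)
u_guess = np.sin(2 * np.pi * x)     # stand-in initial guess field
u_clim = np.zeros(n)                # stand-in climatology
w_g, w_c, w_s = 1.0, 0.2, 0.05      # guess / climatology / smoothness weights

def cost(u):
    smooth = np.sum(np.diff(u, 2) ** 2)      # discrete curvature penalty
    return (w_g * np.sum((u - u_guess) ** 2)
            + w_c * np.sum((u - u_clim) ** 2)
            + w_s * smooth)

res = minimize(cost, u_clim, method='CG')    # conjugate-gradient minimization
u = res.x                                    # blended analysis field
```

Raising w_c pulls the analysis toward climatology, lowering it favors the initial guess, which is exactly the balance the abstract reports tuning.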
NASA Astrophysics Data System (ADS)
Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung
2017-03-01
Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for a CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) under variations in key imaging parameters, quality metrics were evaluated using a LUNGMAN phantom that included a ground-glass opacity (GGO) tumor. Reconstructed images were acquired from a total of 41 projection images over an angular range of +/-20°. We evaluated the contrast-to-noise ratio (CNR) and the artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter, and the initial guess on image quality. We found that a proper value of the ART relaxation parameter could improve image quality from the same projections. In this study, the proper values of the relaxation parameter for zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of the ASF were obtained in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, the BP initial guess for the ART method could provide better image quality than the ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range of the CDT system.
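The roles of the relaxation parameter and the initial guess in ART can be seen on a tiny toy linear system standing in for the projection equations (the CDT geometry is, of course, far larger):

```python
import numpy as np

# ART (Kaczmarz) iteration: sweep the rows of A x = b, projecting the
# current estimate toward each row's hyperplane, scaled by the
# relaxation parameter.
def art(A, b, x0, relax=0.6, iters=100):
    x = x0.astype(float).copy()
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true                      # consistent "measured projections"

x_zi = art(A, b, np.zeros(2))       # zero-image (ZI) start
x_bp = art(A, b, A.T @ b / 10.0)    # crude back-projection-style start
```

On a consistent system both starts converge; in the underdetermined, limited-angle CDT setting the initial guess instead biases which of many compatible images is reached, which is why the BP start outperformed ZI above.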
NASA Astrophysics Data System (ADS)
Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.
2017-12-01
Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm, is proposed. Starting with an initial guess for the zonation field (including both the zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces evolve through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases; this means that the inversion result depends on the initial guess field, and the minimization process might fail if it encounters a local minimum. To find the global minimum, a genetic algorithm (GA) is utilized to explore the parameters that define the initial guess fields, and the minimal total residual corresponding to each initial guess field is taken as the fitness function value in the GA. Because of the expensive evaluation of the fitness function, a parallel GA is adopted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at Los Alamos National Laboratory. The results show that this approach is capable of effectively identifying arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones.
Refinements of Stout’s Procedure for Assessing Latent Trait Unidimensionality
1992-08-01
in the presence of guessing when coupled with many high-discriminating items. A revision of DIMTEST is proposed to overcome this limitation. Also, an...used for factor analysis. When guessing is present in the responses to items, however, linear factor analysis of tetrachoric correlations can produce...significance when d=1 and maintaining good power when d=2, even when the correlation between the abilities is as high as .7. The present study provides a
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we are engaged in research and development of high-performance CMO applications and perform experimental studies. We have proposed the third-generation density-functional calculation method for the SCF, which is more advanced than the FILE and direct methods. Our method is based on Cholesky decomposition for the two-electron integral calculation and the modified grid-free method for the pure-XC term evaluation. With the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can be obtained by simple linear-algebraic procedures in the SCF loop. Therefore, good parallel performance in solving the SCF problem can be expected by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of a large molecule, one must not only overcome the expensive computational cost but also supply a good initial guess for safe SCF convergence. In order to prepare a precise initial guess for a macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. A QCLO has the characteristics of both localized and canonical orbitals in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.
2006-03-01
Optical proximity correction (OPC) is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model is needed to predict the edge position (contour) of patterns on the wafer after lithographic processing. Generally, segmentation of edges is performed prior to the correction: pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of iterations required. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool able to capture and represent complex input/output relationships, and it can accurately predict the behavior of a system via a learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the number of iterations required; consequently, cycle time can be shortened effectively.
The optimization of the radial basis function network for this system was performed with a genetic algorithm, an artificially intelligent optimization method with a high probability of obtaining the global optimum. In preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shaped layout.
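A minimal radial basis function network of the kind described, Gaussian units with linear output weights fit by least squares, can be sketched as below; the one-dimensional mapping is purely illustrative of learning segment characteristics to edge shift.

```python
import numpy as np

# Design matrix of Gaussian RBF units centered at fixed points.
def rbf_design(X, centers, width):
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

X = np.linspace(-1, 1, 40)          # stand-in "segment characteristic"
y = np.sin(np.pi * X)               # stand-in "edge shift" target
centers = np.linspace(-1, 1, 9)

# Fit the linear output weights in closed form by least squares.
Phi = rbf_design(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = rbf_design(X, centers, 0.3) @ w   # network prediction
```

Because the output layer is linear, training reduces to one least-squares solve, which is why RBF networks are attractive as fast function approximators for seeding an iterative OPC loop.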
Fuel Optimal, Finite Thrust Guidance Methods to Circumnavigate with Lighting Constraints
NASA Astrophysics Data System (ADS)
Prince, E. R.; Carr, R. W.; Cobb, R. G.
This paper details improvements made to the authors' most recent work on finding fuel-optimal, finite-thrust guidance to inject an inspector satellite into a prescribed natural motion circumnavigation (NMC) orbit about a resident space object (RSO) in geosynchronous orbit (GEO). Better initial guess methodologies are developed for the low-fidelity-model nonlinear programming problem (NLP) solver, including Clohessy-Wiltshire (CW) targeting, a modified particle swarm optimization (PSO), and MATLAB's genetic algorithm (GA). These solutions may then be fed as initial guesses into a different NLP solver, IPOPT. Celestial lighting constraints are taken into account in addition to the sunlight constraint, ensuring that the resulting NMC also adheres to Moon and Earth lighting constraints. The guidance is initially calculated for a fixed final time, and solutions are then also calculated for fixed final times before and after the original one, allowing mission planners to choose the lowest-cost solution in the resulting range that satisfies all constraints. The developed algorithms provide computationally fast and highly reliable methods for determining fuel-optimal guidance for NMC injections while adhering to multiple lighting constraints.
NASA Astrophysics Data System (ADS)
Lilichenko, Mark; Kelley, Anne Myers
2001-04-01
A novel approach is presented for finding the vibrational frequencies, Franck-Condon factors, and vibronic linewidths that best reproduce typical, poorly resolved electronic absorption (or fluorescence) spectra of molecules in condensed phases. While calculation of the theoretical spectrum from the molecular parameters is straightforward within the harmonic oscillator approximation for the vibrations, "inversion" of an experimental spectrum to deduce these parameters is not. Standard nonlinear least-squares fitting methods such as Levenberg-Marquardt are highly susceptible to becoming trapped in local minima in the error function unless very good initial guesses for the molecular parameters are made. Here we employ a genetic algorithm to force a broad search through parameter space and couple it with the Levenberg-Marquardt method to speed convergence to each local minimum. In addition, a neural network trained on a large set of synthetic spectra is used to provide an initial guess for the fitting parameters and to narrow the range searched by the genetic algorithm. The combined algorithm provides excellent fits to a variety of single-mode absorption spectra with experimentally negligible errors in the parameters. It converges more rapidly than the genetic algorithm alone and more reliably than the Levenberg-Marquardt method alone, and is robust in the presence of spectral noise. Extensions to multimode systems, and/or to include other spectroscopic data such as resonance Raman intensities, are straightforward.
Optimal thrust level for orbit insertion
NASA Astrophysics Data System (ADS)
Cerf, Max
2017-07-01
The minimum-fuel orbital transfer is analyzed in the case of a launcher upper stage using a constantly thrusting engine. The thrust level is assumed to be constant and its value is optimized together with the thrust direction. A closed-loop solution for the thrust direction is derived from the extremal analysis for a planar orbital transfer. The optimal control problem reduces to two unknowns, namely the thrust level and the final time. Guessing and propagating the costates is no longer necessary and the optimal trajectory is easily found from a rough initialization. On the other hand the initial costates are assessed analytically from the initial conditions and they can be used as initial guess for transfers at different thrust levels. The method is exemplified on a launcher upper stage targeting a geostationary transfer orbit.
Development of Scatterometer-Derived Surface Pressures
NASA Astrophysics Data System (ADS)
Hilburn, K. A.; Bourassa, M. A.; O'Brien, J. J.
2001-12-01
SeaWinds scatterometer-derived wind fields can be used to estimate surface pressure fields. The method to be used has been developed and tested with Seasat-A and NSCAT wind measurements. The method involves blending two dynamically consistent values of vorticity. Geostrophic relative vorticity is calculated from an initial guess surface pressure field (the AVN analysis in this case). Relative vorticity is calculated from SeaWinds winds, adjusted to a geostrophic value, and then blended with the initial guess. An objective method is applied that minimizes the differences between the initial guess field and the scatterometer field, subject to regularization. The long-term goal of this project is to derive research-quality pressure fields from the SeaWinds winds for the Southern Ocean, from the Antarctic ice sheet to 30 deg S. The intermediate goal of this report involves the generation of pressure fields over the northern hemisphere for testing purposes. Specifically, two issues need to be addressed. First, the most appropriate initial guess field will be determined: the pure AVN analysis or the previously assimilated pressure field. The independent comparison data used in answering this question will involve data near land, ship data, and ice data that were not included in the AVN analysis. Second, the smallest number of pressure observations required to anchor the assimilated field will be determined. This study will use Neumann (derivative) boundary conditions on the region of interest. Such boundary conditions determine the solution only to within a constant that must be fixed by a number of anchoring points. The smallness of the number of anchoring points will demonstrate the viability of the general use of the scatterometer as a barometer over the oceans.
Herbranson, Walter T.; Schroeder, Julia
2011-01-01
The “Monty Hall Dilemma” (MHD) is a well known probability puzzle in which a player tries to guess which of three doors conceals a desirable prize. After an initial choice is made, one of the remaining doors is opened, revealing no prize. The player is then given the option of staying with their initial guess or switching to the other unopened door. Most people opt to stay with their initial guess, despite the fact that switching doubles the probability of winning. A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy. Replication of the procedure with human participants showed that humans failed to adopt optimal strategies, even with extensive training. PMID:20175592
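The payoff structure of the standard MHD is easy to verify by simulation; the sketch below confirms that switching wins about twice as often as staying.

```python
import random

# One round of the Monty Hall dilemma.  The host deterministically
# opens a door that is neither the player's pick nor the prize; a
# switching player then takes the remaining unopened door.
def play(switch, rng):
    prize = rng.randrange(3)
    choice = rng.randrange(3)
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

rng = random.Random(42)
n = 20000
wins_stay = sum(play(False, rng) for _ in range(n)) / n    # about 1/3
wins_switch = sum(play(True, rng) for _ in range(n)) / n   # about 2/3
```

Switching wins exactly when the initial guess was wrong, which happens with probability 2/3, the asymmetry the pigeons, but not the humans, learned to exploit.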
Direct Multiple Shooting Optimization with Variable Problem Parameters
NASA Technical Reports Server (NTRS)
Whitley, Ryan J.; Ocampo, Cesar A.
2009-01-01
Taking advantage of a novel formulation of the orbital transfer optimization problem and advanced nonlinear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. The method treats the fixed, known gravitational constants as optimization variables in order to reduce the need for an accurate initial guess. Complex periodic orbits are targeted with very simple guesses, and the ability to find optimal transfers in spite of these poor guesses is successfully demonstrated. Impulsive transfers are considered for orbits both in the two-body frame and in the circular restricted three-body problem (CRTBP). The results demonstrate the potential of this new approach to increase robustness for all types of orbit transfer problems.
Robust iterative method for nonlinear Helmholtz equation
NASA Astrophysics Data System (ADS)
Yuan, Lijun; Lu, Ya Yan
2017-08-01
A new iterative method is developed for solving the two-dimensional nonlinear Helmholtz equation which governs polarized light in media with the optical Kerr nonlinearity. In the strongly nonlinear regime, the nonlinear Helmholtz equation could have multiple solutions related to phenomena such as optical bistability and symmetry breaking. The new method exhibits a much more robust convergence behavior than existing iterative methods, such as frozen-nonlinearity iteration, Newton's method and damped Newton's method, and it can be used to find solutions when good initial guesses are unavailable. Numerical results are presented for the scattering of light by a nonlinear circular cylinder based on the exact nonlocal boundary condition and a pseudospectral method in the polar coordinate system.
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three-dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation with quadratic finite element (FE) interpolation of the numerical solutions on two levels of grids (the current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to conveniently obtain the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H³-regular problem, and an anisotropic problem, show that the proposed method is much more efficient than the classical V-cycle and W-cycle multigrid methods. Finally, we explain why our method is highly efficient for solving these elliptic problems.
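The extrapolation idea can be sketched in one dimension: solve the same Poisson problem on a coarse and a fine grid, then combine the two second-order solutions with Richardson extrapolation to get a higher-order field, usable either as output or as an initial guess on the next grid. The tridiagonal model problem below is an illustrative stand-in for the paper's 3D finite element setting.

```python
import math

def solve_poisson_1d(n, f):
    """Second-order FD solve of -u'' = f on (0,1), u(0)=u(1)=0,
    with n interior points (Thomas algorithm; off-diagonals are -1)."""
    h = 1.0 / (n + 1)
    b = [2.0] * n                              # main diagonal
    d = [h * h * f((i + 1) * h) for i in range(n)]
    for i in range(1, n):                      # forward elimination
        m = -1.0 / b[i - 1]
        b[i] -= m * -1.0                       # b[i] = 2 - 1/b[i-1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        u[i] = (d[i] + u[i + 1]) / b[i]
    return u

def richardson_on_coarse_nodes(u_coarse, u_fine):
    """Combine solutions on grids h and h/2: (4*u_{h/2} - u_h)/3 cancels
    the O(h^2) error term at the shared (coarse) nodes."""
    return [(4.0 * u_fine[2 * i + 1] - u_coarse[i]) / 3.0
            for i in range(len(u_coarse))]
```

On a smooth model problem, the extrapolated values are far closer to the exact solution than either grid's raw solution, which is why they make such a good initial guess for the finer-grid solve.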
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been designed and applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves excellent results, visualizing fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first is an automated control point detection algorithm for multi-sensor images; the method exploits retinal vasculature and bifurcation features, identifying an initial good guess of the control points with an Adaptive Exploratory Algorithm. The second is a heuristic optimization fusion algorithm: to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level, a refined parameter set is obtained at the end of each loop, and an optimal fused image is generated when the iteration terminates. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, assess disease progression, and pinpoint surgical tools. The algorithm can be easily extended to 3D eye, brain, or whole-body image registration and fusion in humans or animals.
Search of exploration opportunity for near earth objects based on analytical gradients
NASA Astrophysics Data System (ADS)
Ren, Y.; Cui, P. Y.; Luan, E. J.
2008-01-01
The problem of searching for exploration opportunities for near Earth objects is investigated. For rendezvous missions, the analytical gradients of the performance index with respect to the free parameters are derived by combining the calculus of variations with the theory of the state-transition matrix. Some initial guesses are then generated randomly in the search space, and the performance index is optimized from these initial guesses under the guidance of the analytical gradients. This method not only retains the global-search character of the traditional method, but also avoids the blindness of the traditional exploration opportunity search; hence, the computing speed can be increased greatly. Furthermore, the search precision can be controlled effectively.
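The random-start-plus-gradient strategy can be sketched generically: scatter initial guesses over the search space, refine each with gradient information, and keep the best. The test function and step sizes below are illustrative stand-ins, not the paper's performance index.

```python
import random

def gradient_descent(df, x0, lr=0.01, steps=500):
    """Gradient-guided local refinement from one initial guess."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

def multistart_minimize(f, df, lo, hi, n_starts=30, seed=0):
    """Random initial guesses keep the global-search character; each guess
    is refined with gradients rather than by blind search."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        x = gradient_descent(df, rng.uniform(lo, hi))
        if best is None or f(x) < f(best):
            best = x
    return best
```

On a tilted double-well objective, local searches started in the wrong basin end near the shallow minimum, but the multistart sweep reliably returns the global one.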
1981-05-01
PROFESSIONAL PAPER 306 / May 1981. What Good Are Warfare Models? Thomas E. Anger. Center for Naval Analyses. ... at least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies. It is not surprising, however, that
NASA Astrophysics Data System (ADS)
Arias, E.; Florez, E.; Pérez-Torres, J. F.
2017-06-01
A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
van Meel, Catharina S; Oosterlaan, Jaap; Heslenfeld, Dirk J; Sergeant, Joseph A
2005-01-01
Neuroimaging studies on ADHD suggest abnormalities in brain regions associated with decision-making and reward processing such as the anterior cingulate cortex (ACC) and orbitofrontal cortex. Recently, event-related potential (ERP) studies demonstrated that the ACC is involved in processing feedback signals during guessing and gambling. The resulting negative deflection, the 'feedback-related negativity' (FRN), has been interpreted as reflecting an error in reward prediction. In the present study, ERPs elicited by positive and negative feedback were recorded in children with ADHD and normal controls during guessing. 'Correct' and 'incorrect' guesses resulted in monetary gains and losses, respectively. The FRN amplitude to losses was more pronounced in the ADHD group than in normal controls. Positive and negative feedback differentially affected long-latency components in the ERP waveforms of normal controls, but not of ADHD children. These later deflections might be related to further emotional or strategic processing. The present findings suggest an enhanced sensitivity to unfavourable outcomes in children with ADHD, probably due to abnormalities in mesolimbic reward circuits. In addition, further processing, such as affective evaluation and the assessment of future consequences of the feedback signal, seems to be altered in ADHD. These results may further our understanding of the neural basis of decision-making deficits in ADHD.
A comparison of methods for estimating the weight of preterm infants.
Elser, A S; Vessey, J A
1995-09-01
Four methods of predicting a preterm infant's weight (upper mid-arm circumference, gestational age, tape measure nomogram, and guessing) were investigated to see which was the most accurate. The weights of 37 preterm neonates were initially guessed by an experienced clinician, then estimated by the other three approaches applied in a random order, and then confirmed through actual weighing. The correlations between the four estimated weights and the actual weights were .96, .84, .97, and .98, respectively. The tape measure nomogram method was the best overall approach for clinical use.
An Exploration Of Fuel Optimal Two-impulse Transfers To Cyclers in the Earth-Moon System
NASA Astrophysics Data System (ADS)
Hosseinisianaki, Saghar
2011-12-01
This research explores optimal two-impulse transfers between a low Earth orbit and cycler orbits in the Earth-Moon circular restricted three-body framework, emphasizing the optimization strategy. Cyclers are periodic orbits that encounter both the Earth and the Moon periodically; a spacecraft on such a trajectory is under the influence of both the Earth's and the Moon's gravitational fields. Cyclers have gained recent interest as baseline orbits for several Earth-Moon mission concepts, notably in relation to human exploration. In this thesis I show that a direct optimization starting from the classic Lambert initial guess may not be adequate for these problems, and I propose a three-step optimization solver to improve the domain of convergence toward an optimal solution. The first step consists of finding feasible trajectories with a given transfer time: I employ Lambert's problem to provide an initial guess and then optimize the error in arrival position; this includes an analysis of the reliability of Lambert's solution as an initial guess. Once a feasible trajectory is found, the velocity impulse is only a function of the transfer time and the phases of the departure and arrival points. The second step consists of optimizing the impulse over the transfer time, which yields the minimum-impulse transfer for fixed end points. Finally, the third step maps the optimal solutions as the end points are varied.
Orbital Maneuvers for Spacecrafts Travelling to/from the Lagrangian Points
NASA Astrophysics Data System (ADS)
Bertachini, A.
The well-known Lagrangian points that appear in the planar restricted three-body problem (Szebehely, 1967) are very important for astronautical applications. They are five equilibrium points of the equations of motion, which means that a particle located at one of those points with zero velocity will remain there indefinitely. The collinear points (L1, L2 and L3) are always unstable, and the triangular points (L4 and L5) are stable in the case studied here (the Sun-Earth system). They are all very good locations for a space station, since they require only a small amount of ΔV (and fuel) for station-keeping control. The triangular points are especially good for this purpose, since they are stable equilibrium points. In this paper, the planar restricted three-body problem is regularized (using Lemaître regularization) and combined with numerical integration and gradient methods to solve the two-point boundary value problem (Lambert's three-body problem). This combination is applied to the search for families of transfer orbits between the Lagrangian points and the Earth, in the Sun-Earth system, with the minimum possible cost of the control used. So, the final goal of this paper is to find the magnitude and direction of the two impulses to be applied to the spacecraft to complete the transfer: the first when leaving/arriving at the Lagrangian point and the second when arriving at/leaving the Earth. This paper is a continuation of two previous papers that studied transfers in the Earth-Moon system: Broucke (1979), which studied transfer orbits between the Lagrangian points and the Moon, and Prado (1996), which studied transfer orbits between the Lagrangian points and the Earth.
So, the equations of motion are ẍ − 2ẏ = ∂Ω/∂x and ÿ + 2ẋ = ∂Ω/∂y, where Ω is the pseudo-potential given by Ω = (x² + y²)/2 + (1 − μ)/r₁ + μ/r₂. To solve the TPBVP in the regularized variables the following steps are used: i) guess an initial velocity Vi, so that, together with the prescribed initial position ri, the complete initial state is known; ii) guess a final regularized time τf and integrate the regularized equations of motion from τ₀ = 0 until τf; iii) compare the final position rf obtained from the numerical integration with the prescribed final position, and the final real time with the specified time of flight. If they agree (difference less than a specified allowed error), the solution is found and the process stops. If not, the guessed initial velocity Vi and the guessed final regularized time are incremented and the process returns to step i). The method used to find the increments in the guessed variables is the standard gradient method, as described in Press et al. (1989); the routines available in that reference are also used in this research with minor modifications. Once this algorithm is implemented, Lambert's three-body problem between the Earth and the Lagrangian points is solved for several values of the time of flight. Since the regularized system is used, there is no need to place the final position of M3 in a parking orbit around the primary (to avoid the singularity). Then, for comparison with previous papers (Broucke, 1979 and Prado, 1996), the centre of the primary is used as the final position for M3. The results are organized in plots of the energy and the initial flight path angle (the control to be used) in the rotating frame against the time of flight. The angle is defined so that zero lies along the "x" axis (pointing in the positive direction) and increases in the counterclockwise sense.
This problem, like Lambert's original version, has two solutions for a given transfer time: one in the counterclockwise direction and one in the clockwise direction in the inertial frame. In this paper, emphasis is given to finding the families with the smallest possible energy (and velocity), although many other families exist. References: Broucke, R. (1979), Travelling Between the Lagrange Points and the Moon, Journal of Guidance and Control, Vol. 2; Prado, A.F.B.A. (1996), Travelling Between the Lagrangian Points and the Earth, Acta Astronautica, Vol. 39, No. 7; Press, W. H., B. P. Flannery, S. A. Teukolsky and W. T. Vetterling (1989), Numerical Recipes, Cambridge University Press; Szebehely, V. (1967), Theory of Orbits, Academic Press, New York.
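The guess-integrate-correct loop of steps i)-iii) can be sketched without the regularization: the toy solver below integrates the planar CRTBP in Cartesian coordinates with RK4 and corrects the guessed initial velocity with a finite-difference Newton step, a simple stand-in for the gradient method of Press et al. The mass ratio (Earth-Moon rather than Sun-Earth) and the test states are illustrative assumptions.

```python
import math

MU = 0.01215  # Earth-Moon mass ratio (illustrative value)

def accel(x, y, vx, vy):
    """Planar CRTBP accelerations in the rotating frame."""
    r1 = math.hypot(x + MU, y)           # distance to the larger primary
    r2 = math.hypot(x - 1.0 + MU, y)     # distance to the smaller primary
    ax = 2.0*vy + x - (1.0 - MU)*(x + MU)/r1**3 - MU*(x - 1.0 + MU)/r2**3
    ay = -2.0*vx + y - (1.0 - MU)*y/r1**3 - MU*y/r2**3
    return ax, ay

def propagate(r0, v0, tof, n=1000):
    """Fixed-step RK4 propagation; returns the final position."""
    s = (r0[0], r0[1], v0[0], v0[1])
    h = tof / n
    def f(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y, vx, vy)
        return (vx, vy, ax, ay)
    for _ in range(n):
        k1 = f(s)
        k2 = f(tuple(s[i] + 0.5*h*k1[i] for i in range(4)))
        k3 = f(tuple(s[i] + 0.5*h*k2[i] for i in range(4)))
        k4 = f(tuple(s[i] + h*k3[i] for i in range(4)))
        s = tuple(s[i] + h/6.0*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
                  for i in range(4))
    return (s[0], s[1])

def shoot(r0, rf, tof, v_guess, tol=1e-10, eps=1e-7):
    """Correct the guessed initial velocity until the arc hits rf at tof."""
    vx, vy = v_guess
    for _ in range(50):
        fx, fy = propagate(r0, (vx, vy), tof)
        ex, ey = fx - rf[0], fy - rf[1]
        if math.hypot(ex, ey) < tol:
            break
        # finite-difference 2x2 Jacobian d(final position)/d(initial velocity)
        gx, gy = propagate(r0, (vx + eps, vy), tof)
        hx, hy = propagate(r0, (vx, vy + eps), tof)
        j11, j21 = (gx - fx)/eps, (gy - fy)/eps
        j12, j22 = (hx - fx)/eps, (hy - fy)/eps
        det = j11*j22 - j12*j21
        vx -= ( j22*ex - j12*ey)/det
        vy -= (-j21*ex + j11*ey)/det
    return vx, vy
```

Manufacturing a target by forward propagation and starting the corrector from a perturbed velocity recovers the original initial velocity, illustrating how the increments in the guessed variables drive the boundary-value residual to zero.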
The Good Language Learner: Another Look.
ERIC Educational Resources Information Center
Reiss, Mary-Ann
1985-01-01
A study of the learning techniques and strategies of successful language learners revealed these strategies: monitoring, which often involves silent speaking; attending to form and meaning; guessing; practicing; motivation to communicate; and mnemonics. It also revealed a high tolerance for ambiguity in successful learners. (MSE)
NASA Technical Reports Server (NTRS)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
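A miniature version of the approach: generate (reflectance, LAI) pairs from a forward canopy model, then train a small network by backpropagation to map reflectance back to LAI. The one-line Beer-Lambert-style forward model and the network size are assumptions for illustration, not the multiple-scattering model of the paper.

```python
import math, random

def toy_reflectance(lai):
    """Toy Beer-Lambert-style forward canopy model (illustrative only)."""
    return math.exp(-0.5 * lai)

def train_inverse_net(samples, hidden=8, lr=0.02, epochs=500, seed=0):
    """Backpropagation training of a 1-hidden-layer net that inverts the
    forward model (input: reflectance, output: LAI).
    Returns (initial MSE, final MSE)."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b2 = 0.0

    def predict(x):
        hvals = [math.tanh(w1[k] * x + b1[k]) for k in range(hidden)]
        return sum(w2[k] * hvals[k] for k in range(hidden)) + b2, hvals

    def mse():
        return sum((predict(x)[0] - t) ** 2 for x, t in samples) / len(samples)

    mse0 = mse()
    for _ in range(epochs):
        for x, t in samples:
            y, hvals = predict(x)
            e = y - t
            for k in range(hidden):
                gh = e * w2[k] * (1.0 - hvals[k] * hvals[k])  # hidden gradient
                w2[k] -= lr * e * hvals[k]
                w1[k] -= lr * gh * x
                b1[k] -= lr * gh
            b2 -= lr * e
    return mse0, mse()
```

Unlike the minimization schemes the abstract criticizes, the trained network needs no per-retrieval initial guess: all the fitting cost is paid once, at training time.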
A geometric initial guess for localized electronic orbitals in modular biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, P. G.; Fattebert, J. L.; Lau, E. Y.
Recent first-principles molecular dynamics algorithms using localized electronic orbitals have achieved O(N) complexity and controlled accuracy in simulating systems with finite band gaps. However, accurately determining the centers of these localized orbitals during simulation setup may require O(N³) operations, which is computationally infeasible for many biological systems. We present an O(N) approach for approximating orbital centers in proteins, DNA, and RNA which uses non-localized solutions for a set of fixed-size subproblems to create a set of geometric maps applicable to larger systems. This scalable approach, used as an initial guess in the O(N) first-principles molecular dynamics code MGmol, facilitates first-principles simulations in biological systems of sizes which were previously impossible.
On optimal soft-decision demodulation. [in digital communication system
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1976-01-01
A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
ERIC Educational Resources Information Center
Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.
2010-01-01
In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
NASA Astrophysics Data System (ADS)
Hejri, Mohammad; Mokhtari, Hossein; Azizian, Mohammad Reza; Söder, Lennart
2016-04-01
Parameter extraction of the five-parameter single-diode model of solar cells and modules from experimental data is a challenging problem. These parameters are evaluated from a set of nonlinear equations that cannot be solved analytically; on the other hand, a numerical solution of such equations needs a suitable initial guess to converge. This paper presents a new set of approximate analytical solutions for the parameters of the five-parameter single-diode model of photovoltaic (PV) cells and modules. The proposed solutions provide a good initial point which guarantees convergence of the numerical analysis. The proposed technique needs only a few data points from the PV current-voltage characteristic, i.e. the open-circuit voltage Voc, the short-circuit current Isc, and the maximum-power-point current and voltage Im and Vm, making it a fast and low-cost parameter-determination technique. The accuracy of the presented theoretical I-V curves is verified by experimental data.
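The role of the initial guess is visible even in the forward direction: the single-diode equation is implicit in the current and is itself solved by Newton iteration. The sketch below uses assumed, illustrative cell parameters (not values from the paper) and starts the iteration from Isc, a cheap analytical guess in the same spirit as the paper's approximate solutions.

```python
import math

# Assumed illustrative single-diode parameters (not from the paper):
# photocurrent, saturation current, series/shunt resistance, ideality, thermal voltage
IPH, I0, RS, RSH, NF, VT = 5.0, 1e-9, 0.1, 200.0, 1.3, 0.0259

def diode_current(v, i_guess, tol=1e-9, max_iter=100):
    """Newton solve of the implicit single-diode equation
       I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    A poor i_guess can stall the iteration; Isc is a reliable cheap guess."""
    i = i_guess
    for _ in range(max_iter):
        e = math.exp((v + i * RS) / (NF * VT))
        f = IPH - I0 * (e - 1.0) - (v + i * RS) / RSH - i
        if abs(f) < tol:
            break
        df = -I0 * e * RS / (NF * VT) - RS / RSH - 1.0
        i -= f / df
    return i
```

Starting from i_guess = IPH (the short-circuit current), the iteration converges monotonically in a handful of steps.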
Mesoscale temperature and moisture fields from satellite infrared soundings
NASA Technical Reports Server (NTRS)
Hillger, D. W.; Vonderhaar, T. H.
1976-01-01
The combined use of radiosonde and satellite infrared soundings can provide mesoscale temperature and moisture fields at the time of satellite coverage. Radiance data from the vertical temperature profile radiometer (VTPR) on NOAA polar-orbiting satellites can be used along with a radiosonde sounding as an initial guess in an iterative retrieval algorithm. The mesoscale temperature and moisture fields at local 9-10 a.m., produced by retrieving temperature profiles at each VTPR scan spot (every 70 km), can be used for analysis or as a forecasting tool for subsequent weather events during the day. The better horizontal resolution of satellite soundings can thus be coupled with the radiosonde temperature and moisture profile, which serves both as a best initial-guess profile and as a means of mitigating the limited vertical resolution of satellite soundings.
Identifying arbitrary parameter zonation using multiple level set functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan
In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on the two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying the interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.
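A one-dimensional toy version conveys the inverse problem: with two materials and a single interface, the steady head profile is piecewise linear, and the interface position can be identified by minimizing the head residuals. The brute-force scan below stands in for the paper's gradient-driven level set evolution; the permeabilities and observation points are illustrative assumptions.

```python
def head(x, a, k1, k2):
    """Steady 1-D Darcy head with h(0)=1, h(1)=0 and an interface at x=a
    between permeability k1 (left) and k2 (right)."""
    q = 1.0 / (a / k1 + (1.0 - a) / k2)   # constant Darcy flux
    if x <= a:
        return 1.0 - q * x / k1
    return q * (1.0 - x) / k2

def identify_interface(obs, k1, k2):
    """Pick the interface position that minimizes the sum of squared head
    residuals over a fine scan (a stand-in for interface propagation)."""
    def sse(a):
        return sum((head(x, a, k1, k2) - h) ** 2 for x, h in obs)
    return min((a / 1000.0 for a in range(50, 951)), key=sse)
```

With observations generated from a known interface, the residual-minimizing scan recovers the interface location, mirroring how the level set iterations drive the head residuals to zero.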
The Pogo Principle: We Have Met the Enemy, and Guess Who It Is?
ERIC Educational Resources Information Center
Blaine, Robert
1994-01-01
Despite recent criticisms, U.S. society is getting a good value for its education dollar. High schools are beset by college influences on the curriculum; special education requirements; overemphasis on student activities; unreasonable international comparisons; the influences of TV, teenage employment, and pathological behaviors; and the…
Dynamic and static initialization of a mesoscale model using VAS satellite data. M.S. Thesis
NASA Technical Reports Server (NTRS)
Beauchamp, James G.
1985-01-01
Various combinations of temperature and moisture data from the VISSR Atmospheric Sounder (VAS), conventional radiosonde data, and National Meteorological Center (NMC) global analysis were used in a successive-correction type of objective-analysis procedure to produce analyses for 1200 GMT. The NMC global analyses served as the first-guess field for all of the objective analysis procedures. The first-guess field was enhanced by radiosonde data alone, VAS data alone, both radiosonde and VAS data, or by neither data source. In addition, two objective analyses were used in a dynamic initialization: one included only radiosonde data and the other used both radiosonde and VAS data. The dependence of 12-hour forecast skill on data type and the methods by which the data were used in the analysis/initialization were then investigated. This was done by comparison of forecast and observed fields of sea-level pressure, temperature, wind, moisture, and accumulated precipitation. The use of VAS data in the initial conditions had a slight positive impact upon forecast temperature and moisture but a negative impact upon forecast wind. This was true for both the static and dynamic initialization experiments. Precipitation forecasts from all of the model simulations were nearly the same.
Mixture Rasch model for guessing group identification
NASA Astrophysics Data System (ADS)
Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling
2013-04-01
Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for the guessing effect in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all subjects but may vary across items. This is not realistic, because subjects can guess worse or better than the pseudo-guessing level. Derivations from the three-parameter logistic IRT model improve the situation by incorporating ability into guessing; however, they do not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items. The guessing subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method to identify the guessing group modeled by the hybrid model.
The cause of outliers in electromagnetic pulse (EMP) locations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.
2014-10-02
We present methods to calculate the location of EMP pulses when observed by 5 or more satellites. Simulations show that, even with a good initial guess and fitting a location to all of the data, there are sometimes outlier results whose locations are much worse than most cases. By comparing simulations using different ionospheric transfer functions (ITFs), it appears that the outliers are caused by not including the additional path length due to refraction, rather than by not including higher-order terms in the Appleton-Hartree equation. We suggest ways that the outliers can be corrected. These correction methods require one to use an electron density profile along the line of sight from the event to the satellite, rather than using the total electron content (TEC) to characterize the ionosphere.
Sugisaki, Kenji; Yamamoto, Satoru; Nakazawa, Shigeaki; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji
2016-08-18
Quantum computers are capable of efficiently performing full configuration interaction (FCI) calculations of atoms and molecules by using the quantum phase estimation (QPE) algorithm. Because the success probability of the QPE depends on the overlap between the approximate and exact wave functions, efficient methods to prepare initial guess wave functions accurate enough to have sufficiently large overlap with the exact ones are highly desired. Here, we propose a quantum algorithm to construct a wave function consisting of one configuration state function, which is suitable as the initial guess wave function in QPE-based FCI calculations of open-shell molecules, based on the addition theorem of angular momentum. The proposed quantum algorithm enables us to prepare a wave function consisting of an exponential number of Slater determinants with only a polynomial number of quantum operations.
An Investigation of the Impact of Guessing on Coefficient α and Reliability
2014-01-01
Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels actually vary across items with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also focused on examining interaction effects between guessing and other variables entered in the simulation to be more realistic. The simulation of the more realistic conditions and calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
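The three-parameter logistic (3PL) model underlying studies like this one can be sketched directly; the following is a minimal illustration, with hypothetical parameter values chosen only to show the guessing floor, not values taken from the study:

```python
import math

def p_3pl(theta, a, b, c):
    # Three-parameter logistic (3PL) IRT model: the guessing parameter c
    # sets a lower asymptote on the probability of a correct response.
    # P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination a = 1.2, difficulty b = 0.0, and
# guessing c = 0.25, as for a four-option multiple-choice item.
low_ability = p_3pl(-3.0, a=1.2, b=0.0, c=0.25)   # stays above 0.25
high_ability = p_3pl(3.0, a=1.2, b=0.0, c=0.25)
```

Even an examinee far below the item's difficulty answers correctly at least 25% of the time, which is the mechanism by which guessing distorts classical item statistics and reliability.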
Rapid space trajectory generation using a Fourier series shape-based approach
NASA Astrophysics Data System (ADS)
Taheri, Ehsan
With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides a larger cumulative momentum change in comparison with conventional chemical propulsion, whereas the latter results in almost ballistic trajectories with a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. Therefore, the goal of this thesis is to address the above problem by developing a methodology to simplify and facilitate the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution not only provides mission designers with a better understanding of the problem and solution, but also serves as a good initial guess for high-fidelity optimal control solvers and increases their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, a robust technique is sought for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, with which the number of design variables is reduced significantly. Emphasis is placed on simplifying the equations of motion to the extent possible and on avoiding approximation of the controls. These features contribute to speeding up the solution-finding procedure.
Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, and in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
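The shape-based idea of representing a trajectory coordinate by a truncated Fourier series can be sketched as follows; the coordinate, period, and coefficient values here are purely illustrative, not the thesis's actual formulation:

```python
import math

def fourier_shape(t, T, a0, a_coeffs, b_coeffs):
    # Truncated Fourier series
    #   r(t) = a0 + sum_n [ a_n cos(2*pi*n*t/T) + b_n sin(2*pi*n*t/T) ]
    # A handful of coefficients stands in for a full time history,
    # which is what shrinks the number of design variables.
    r = a0
    for n, (a, b) in enumerate(zip(a_coeffs, b_coeffs), start=1):
        w = 2.0 * math.pi * n * t / T
        r += a * math.cos(w) + b * math.sin(w)
    return r

T = 100.0  # hypothetical transfer duration
r_start = fourier_shape(0.0, T, a0=1.0, a_coeffs=[0.1], b_coeffs=[0.0])
r_mid = fourier_shape(T / 2.0, T, a0=1.0, a_coeffs=[0.1], b_coeffs=[0.0])
```

An optimizer would then search over the few coefficients (a0, a_n, b_n) rather than over a discretized state history.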
Restricted Closed Shell Hartree Fock Roothaan Matrix Method Applied to Helium Atom Using Mathematica
ERIC Educational Resources Information Center
Acosta, César R.; Tapia, J. Alejandro; Cab, César
2014-01-01
Slater type orbitals were used to construct the overlap and the Hamiltonian core matrices; we also found the values of the bi-electron repulsion integrals. The Hartree Fock Roothaan approximation process starts with setting an initial guess value for the elements of the density matrix; with these matrices we constructed the initial Fock matrix.…
Nowroozi, Amin; Shahlaei, Mohsen
2017-02-01
In this study, a computational pipeline was devised to overcome homology modeling (HM) bottlenecks. The coupling of HM with molecular dynamics (MD) simulation is useful in that it tackles the sampling deficiency of dynamics simulations by providing good-quality initial guesses for the native structure. Indeed, HM also relaxes the severe requirement of force fields to explore the huge conformational space of protein structures. In this study, the interaction between the human bombesin receptor subtype-3 and MK-5046 was investigated by integrating HM, molecular docking, and MD simulations. To improve conformational sampling in typical MD simulations of GPCRs, as in other biomolecules, multiple trajectories with different initial conditions can be employed rather than a single long trajectory. Multiple MD simulations of human bombesin receptor subtype-3 with different initial atomic velocities are applied to sample conformations in the vicinity of the structure generated by HM. The backbone atom conformational space distribution of replicates is analyzed employing principal components analysis. As a result, the averages of structural and dynamic properties over the twenty-one trajectories differ significantly from those obtained from individual trajectories.
The Exploration of the Relationship between Guessing and Latent Ability in IRT Models
ERIC Educational Resources Information Center
Gao, Song
2011-01-01
This study explored the relationship between successful guessing and latent ability in IRT models. A new IRT model was developed with a guessing function integrating probability of guessing an item correctly with the examinee's ability and the item parameters. The conventional 3PL IRT model was compared with the new 2PL-Guessing model on…
Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions
NASA Astrophysics Data System (ADS)
Moore, Peter K.
2007-06-01
In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
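The continuation idea — solving a sequence of problems in the small parameter ɛ, warm-starting each solve with the previous solution — can be sketched on a toy scalar root-finding problem. The equation and ɛ schedule below are illustrative stand-ins for the paper's PDE, not its actual formulation:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # Plain Newton iteration; it diverges easily from a poor initial guess.
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton did not converge")

def solve_for_eps(eps, x0):
    # Root of tanh(u / eps) = 0.5. The solution layer sharpens as
    # eps -> 0, so a cold start far from the root lands on the flat
    # tails where f' is nearly zero and Newton blows up.
    f = lambda u: math.tanh(u / eps) - 0.5
    fp = lambda u: (1.0 - math.tanh(u / eps) ** 2) / eps
    return newton(f, fp, x0)

# Continuation: march eps downward, reusing each solution as the next guess.
u = solve_for_eps(1.0, x0=0.5)
for eps in [0.5, 0.25, 0.12, 0.06, 0.03, 0.015, 0.01]:
    u = solve_for_eps(eps, x0=u)
```

Each intermediate solution sits close enough to the next root that Newton converges quadratically, whereas solving the eps = 0.01 problem directly from a generic guess fails.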
NASA Astrophysics Data System (ADS)
Banegas, Frederic; Michelucci, Dominique; Roelens, Marc; Jaeger, Marc
1999-05-01
We present a robust method for automatically constructing an ellipsoidal skeleton (e-skeleton) from a set of 3D points taken from NMR or TDM images. To ensure steadiness and accuracy, all points of the objects are taken into account, including the inner ones, which differs from existing techniques. This skeleton will be essentially useful for object characterization, for comparisons between various measurements, and as a basis for deformable models. It also provides a good initial guess for surface reconstruction algorithms. At the end of the entire process, we obtain an analytical description of the chosen entity, semantically zoomable (local features only or reconstructed surfaces), with any level of detail (LOD) by discretization step control in voxel or polygon format. This capability allows us to handle objects at interactive frame rates once the e-skeleton is computed. Each e-skeleton is stored as a multiscale CSG implicit tree.
ERIC Educational Resources Information Center
Wu, C. K.; Wu, K. S.
Since the 17th century, Chinese lexicography has been dominated by a character classification system divided into 214 radical groups. The proposed initial three-stroke system would eliminate the need to select (or guess) the proper radical and count strokes. The aim of the system is to facilitate the use of dictionaries and provide the student…
Guessing versus Choosing an Upcoming Task
Kleinsorge, Thomas; Scheil, Juliane
2016-01-01
We compared the effects of guessing vs. choosing an upcoming task. In a task-switching paradigm with four tasks, two groups of participants were asked to either guess or choose which task will be presented next under otherwise identical conditions. The upcoming task corresponded to participants’ guesses or choices in 75% of the trials. However, only participants in the Choosing condition were correctly informed about this, whereas participants in the Guessing condition were told that tasks were determined at random. In the Guessing condition, we replicated previous findings of a pronounced reduction of switch costs in case of incorrect guesses. This switch cost reduction was considerably less pronounced with denied choices in the Choosing condition. We suggest that in the Choosing condition, the signaling of prediction errors associated with denied choices is attenuated because a certain proportion of denied choices is consistent with the overall representation of the situation as conveyed by task instructions. In the Guessing condition, in contrast, the mismatch of guessed and actual task is resolved solely on the level of individual trials by strengthening the representation of the actual task. PMID:27047423
Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm.
He, Xiaoqi; Zheng, Zizhao; Hu, Chao
2015-01-01
The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, there are still some important problems to be solved, among which one important problem is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and this depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as a magnetic sensor array, we propose nonlinear optimization algorithms using a random complex algorithm, applied to the optimization calculation for the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and the anti-noise ability of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, in terms of the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoise" capacity, with a larger range of initial guess values.
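The forward model underlying such magnetic localization can be sketched as a point-dipole field evaluated at a sensor array, together with the sum-of-squares objective that a random complex (or any other nonlinear) search would minimize. The grid geometry and magnet parameters below are hypothetical, not the paper's hardware:

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # mu0 / (4*pi) in SI units

def dipole_field(sensors, magnet_pos, moment):
    # Point-dipole flux density at each sensor position:
    #   B = (mu0 / 4*pi) * (3 (m . r_hat) r_hat - m) / |r|^3
    r = sensors - magnet_pos
    dist = np.linalg.norm(r, axis=1, keepdims=True)
    r_hat = r / dist
    m_dot = (r_hat @ moment)[:, None]
    return MU0_OVER_4PI * (3.0 * m_dot * r_hat - moment) / dist**3

def objective(params, sensors, measured):
    # Sum of squared field residuals over the array: the nonlinear
    # function of position (3 parameters) and moment that the
    # optimization algorithm minimizes.
    diff = dipole_field(sensors, params[:3], params[3:]) - measured
    return float(np.sum(diff**2))

# Hypothetical 3x3 sensor grid on the z = 0 plane, magnet 8 cm above it.
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 3), np.linspace(-0.1, 0.1, 3))
sensors = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(9)])
true_params = np.array([0.02, -0.01, 0.08, 0.0, 0.0, 1.0])
measured = dipole_field(sensors, true_params[:3], true_params[3:])
```

The objective vanishes only at the true position and moment, so the localization quality reduces to how reliably the search finds that global minimum from a given initial guess.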
Low-Thrust Trajectory Optimization with Simplified SQP Algorithm
NASA Technical Reports Server (NTRS)
Parrish, Nathan L.; Scheeres, Daniel J.
2017-01-01
The problem of low-thrust trajectory optimization in highly perturbed dynamics is a stressing case for many optimization tools. Highly nonlinear dynamics and continuous thrust are each, separately, non-trivial problems in the field of optimal control, and when combined, the problem is even more difficult. This paper describes a fast, robust method to design a trajectory in the CRTBP (circular restricted three body problem), beginning with no or very little knowledge of the system. The approach is inspired by the SQP (sequential quadratic programming) algorithm, in which a general nonlinear programming problem is solved via a sequence of quadratic problems. A few key simplifications make the algorithm presented fast and robust to the initial guess: a quadratic cost function, neglecting the line search step when the solution is known to be far away, judicious use of end-point constraints, and mesh refinement on multiple shooting with fixed-step integration. In comparison to the traditional approach of plugging the problem into a “black-box” NLP solver, the methods shown converge even when given no knowledge of the solution at all. It was found that the only piece of information that the user needs to provide is a rough guess for the time of flight, as the transfer time guess will dictate which set of local solutions the algorithm could converge on. This robustness to initial guess is a compelling feature, as three-body orbit transfers are challenging to design with intuition alone. Of course, if a high-quality initial guess is available, the methods shown are still valid. We have shown that endpoints can be efficiently constrained to lie on 3-body repeating orbits, and that time of flight can be optimized as well. When optimizing the endpoints, we must make a trade between converging quickly on sub-optimal endpoints or converging more slowly on endpoints that are arbitrarily close to optimal.
It is easy for the mission design engineer to adjust this trade based on the problem at hand. The biggest limitation of the algorithm at this point is that multi-revolution transfers (greater than 2 revolutions) do not work nearly as well. This restriction arises because the relationship between node 1 and node N becomes increasingly nonlinear as the angular distance grows. Transfers with more than about 1.5 complete revolutions generally require the line search to improve convergence. Future work includes: comparison of this algorithm with other established tools; improvements to how multiple-revolution transfers are handled; parallelization of the Jacobian computation; increased efficiency for the line search; and optimization of many more trajectories between a variety of 3-body orbits.
A Secured Authentication Protocol for SIP Using Elliptic Curves Cryptography
NASA Astrophysics Data System (ADS)
Chen, Tien-Ho; Yeh, Hsiu-Lien; Liu, Pin-Chuan; Hsiang, Han-Chen; Shih, Wei-Kuan
Session initiation protocol (SIP) is a technology regularly performed in Internet Telephony, and Hyper Text Transport Protocol (HTTP) as digest authentication is one of the major methods for SIP authentication mechanism. In 2005, Yang et al. pointed out that HTTP could not resist server spoofing attack and off-line guessing attack and proposed a secret authentication with Diffie-Hellman concept. In 2009, Tsai proposed a nonce based authentication protocol for SIP. In this paper, we demonstrate that their protocol could not resist the password guessing attack and insider attack. Furthermore, we propose an ECC-based authentication mechanism to solve their issues and present security analysis of our protocol to show that ours is suitable for applications with higher security requirement.
Your challenge is to correctly identify the item and its location from the picture. Clue: It’s somewhere at the NCI campus at Frederick or Fort Detrick. Win a framed photograph of the Poster Puzzler and have your photo featured on the Poster website by e-mailing your guess, along with your name, e-mail address, and daytime phone number, to poster@mail.nih.gov. All entries must be received by Friday, February 12, 2016, and the winner will be drawn from all correct answers received by that date. Good luck and good hunting!
Parallel Guessing: A Strategy for High-Speed Computation
1984-09-19
for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful...from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or
2016-03-01
[Table-of-contents fragment: NLP improvement; multiple-burn planar LEO to GEO transfer; PSO initial guess generation.]
Optimizing velocities and transports for complex coastal regions and archipelagos
NASA Astrophysics Data System (ADS)
Haley, Patrick J.; Agarwal, Arpit; Lermusiaux, Pierre F. J.
2015-05-01
We derive and apply a methodology for the initialization of velocity and transport fields in complex multiply-connected regions with multiscale dynamics. The result is initial fields that are consistent with observations, complex geometry and dynamics, and that can simulate the evolution of ocean processes without large spurious initial transients. A class of constrained weighted least squares optimizations is defined to best fit first-guess velocities while satisfying the complex bathymetry, coastline and divergence strong constraints. A weak constraint towards the minimum inter-island transports that are in accord with the first-guess velocities provides important velocity corrections in complex archipelagos. In the optimization weights, the minimum distance and vertical area between pairs of coasts are computed using a Fast Marching Method. Additional information on velocity and transports are included as strong or weak constraints. We apply our methodology around the Hawaiian islands of Kauai/Niihau, in the Taiwan/Kuroshio region and in the Philippines Archipelago. Comparisons with other common initialization strategies, among hindcasts from these initial conditions (ICs), and with independent in situ observations show that our optimization corrects transports, satisfies boundary conditions and redirects currents. Differences between the hindcasts from these different ICs are found to grow for at least 2-3 weeks. When compared to independent in situ observations, simulations from our optimized ICs are shown to have the smallest errors.
Comment on 3PL IRT Adjustment for Guessing
ERIC Educational Resources Information Center
Chiu, Ting-Wei; Camilli, Gregory
2013-01-01
Guessing behavior is an issue discussed widely with regard to multiple choice tests. Its primary effect is on number-correct scores for examinees at lower levels of proficiency. This is a systematic error or bias, which increases observed test scores. Guessing also can inflate random error variance. Correction or adjustment for guessing formulas…
Children's Awareness of Their Own Certainty and Understanding of Deduction and Guessing
ERIC Educational Resources Information Center
Pillow, Bradford H.; Anderson, Katherine L.
2006-01-01
We conducted three studies that investigated first through third grade children's ability to identify and remember deductive inference or guessing as the source of a belief, to detect and retain the certainty of a belief generated through inference or guessing and to evaluate another observer's inferences and guesses. Immediately following a…
Design of Optimally Robust Control Systems.
1980-01-01
approach is that the optimization framework is an artificial device. While some design constraints can easily be incorporated into a single cost function...indicating that that point was indeed the solution. Also, an intelligent initial guess for k was important in order to avoid being hung up at the double
Paek, Insu
2015-01-01
The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test characteristics on four confidence interval (CI) procedures for coefficient alpha in terms of coverage rate (CR), length, and the degree of asymmetry of CI estimates. In addition, interval estimates of coefficient alpha when data follow the essentially tau-equivalent condition were investigated as a supplement to the case of dichotomous data with examinee guessing. For dichotomous data with guessing, the results did not reveal salient negative effects of guessing and its interactions with other test characteristics (sample size, test length, coefficient alpha levels) on CR and the degree of asymmetry, but the effect of guessing was salient as a main effect and an interaction effect with sample size on the length of the CI estimates, producing longer CI estimates as guessing increases, especially when combined with a small sample size. Other important effects (e.g., CI procedures on CR) are also discussed. PMID:29795863
ERIC Educational Resources Information Center
Stanton, Christine Rogers; Sutton, Karl
2012-01-01
In two projects described in this article, the authors discuss the use of Photovoice and Elder Interviews to draw upon visual and spoken forms of community-based literacy, generate ideas for written projects, promote a connection to community and culture, and engage students in critical analysis of writing process. Both projects took place in…
NASA Technical Reports Server (NTRS)
Blyth, J D
1926-01-01
The most usual method of arriving at the maximum amount of spindling or hollowing out permissible in the case of any particular spar section is by trial and error, a process which is apt to become laborious in the absence of good guessing - or luck. The following tables have been got out with the object of making it possible to arrive with certainty at a suitable section at the first attempt.
ERIC Educational Resources Information Center
Friedman, Miriam; And Others
1987-01-01
Test performances of sophomore medical students on a pretest and final exam (under guessing and no-guessing instructions) were compared. Discouraging random guessing produced test information with improved test reliability and less distortion of item difficulty. More able examinees were less compliant than less able examinees. (Author/RH)
Acquiring Different Senses of the Verb "To Know."
ERIC Educational Resources Information Center
Richards, Meredith Martin; Brown, Melissa Leath
Children's understanding of the epistemological terms "know" and "guess" was investigated in two studies with four- to ten-year-old subjects. Two adult players guessed at the location of a ball hidden in one of two boxes. On each trial the child was asked questions about "knowing" and "guessing" both before and after the guessing took place.…
The Costs and Benefits of Testing and Guessing on Recognition Memory
Huff, Mark J.; Balota, David A.; Hutchison, Keith A.
2016-01-01
We examined whether two types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler tasks, particularly when study lists were semantically related. However, both retrieval practice and guessing also generally inflated false recognition for the non-presented critical words. These patterns were found when final recognition was completed during a short delay within the same experimental session (Experiment 1) and following a 24-hr delay (Experiment 2). In Experiment 3, task instructions were presented randomly after each list to determine whether retrieval-practice and guessing effects were influenced by task-expectancy processes. In contrast to Experiments 1 and 2, final recognition following retrieval practice and guessing was equivalent to restudy, suggesting that the observed retrieval-practice and guessing advantages were in part due to preparatory task-based processing during study. PMID:26950490
Pillow, Bradford H
2002-01-01
Two experiments investigated kindergarten through fourth-grade children's and adults' (N = 128) ability to (1) evaluate the certainty of deductive inferences, inductive inferences, and guesses; and (2) explain the origins of inferential knowledge. When judging their own cognitive state, children in first grade and older rated deductive inferences as more certain than guesses; but when judging another person's knowledge, children did not distinguish valid inferences from invalid inferences and guesses until fourth grade. By third grade, children differentiated their own deductive inferences from inductive inferences and guesses, but only adults both differentiated deductive inferences from inductive inferences and differentiated inductive inferences from guesses. Children's recognition of their own inferences may contribute to the development of knowledge about cognitive processes, scientific reasoning, and a constructivist epistemology.
Bruck, F A
2001-10-01
Lowell's poorly executed supervisor/employee interaction was a lose-lose proposition. If other employees feel that Sue is being treated unfairly, there will be negative repercussions throughout the system. Employees must have confidence that they will be treated in a fair and equal manner when they have problems on the job. If management is not consistent in handling these problems, employees will spend time second-guessing critical decisions, and patient care will suffer. Sue and Dunk did a good job with their clinical care of Maudie. With improved patient communication, Maudie might have understood the reasons for her treatment, and this complaint might never have been made.
Pillow, Bradford H; Pearson, Raeanne M; Hecht, Mary; Bremer, Amanda
2010-01-01
Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults differentiated strong inductions, weak inductions, and informed guesses from pure guesses. By Grade 3, participants also gave different types of explanations for their deductions and inductions. These results are discussed in relation to children's concepts of cognitive processes, logical reasoning, and epistemological development.
ERIC Educational Resources Information Center
Paek, Insu
2016-01-01
The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…
Sometimes "Newton's Method" Always "Cycles"
ERIC Educational Resources Information Center
Latulippe, Joe; Switkes, Jennifer
2012-01-01
Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…
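A classic construction of this kind (not necessarily the one the authors derive) is f(x) = sign(x)·√|x|, for which the Newton update maps every nonzero guess x to −x, producing a two-cycle from any non-trivial starting point:

```python
import math

def f(x):
    # f(x) = sign(x) * sqrt(|x|); its only root is x = 0.
    return math.copysign(math.sqrt(abs(x)), x)

def fprime(x):
    # f'(x) = 1 / (2 * sqrt(|x|)) for x != 0.
    return 1.0 / (2.0 * math.sqrt(abs(x)))

def newton_step(x):
    # x - f(x)/f'(x) = x - 2x = -x: Newton negates every nonzero guess,
    # so the iterates bounce between x0 and -x0 forever.
    return x - f(x) / fprime(x)

x0 = 1.7
x1 = newton_step(x0)
x2 = newton_step(x1)
```

Geometrically, the tangent line at any point crosses the axis exactly as far on the other side of the root, so the iteration never makes progress.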
2015-12-24
minimizing a weighted sum of the time and control effort needed to collect sensor data. This problem formulation is a modified traveling salesman problem. [Table-of-contents residue: 2.5 The Shortest Path Problem; 2.5.1 Traveling Salesman Problem; 3.3.1 Initial Guess by Traveling Salesman Problem Solution]
Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting
Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart
2015-02-14
Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
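The warm-start idea can be sketched in miniature without Gappy POD or model reduction: below, each implicit Euler step for y' = -y^3 is solved by Newton, and a linear extrapolation of the two previous solutions serves as the forecast initial guess. The equation, step size, and extrapolation order are illustrative assumptions, not the authors' setup.

```python
def implicit_euler_step(y_prev, dt, guess, tol=1e-12, max_iter=50):
    """Solve g(y) = y - y_prev + dt*y**3 = 0 by Newton, counting iterations."""
    y = guess
    for k in range(1, max_iter + 1):
        g = y - y_prev + dt * y**3
        if abs(g) < tol:
            return y, k
        y -= g / (1.0 + 3.0 * dt * y * y)   # Newton update with g'(y)
    return y, max_iter

dt, y = 0.5, 1.0
history = [y]
naive_iters = forecast_iters = 0
for _ in range(10):
    # naive guess: reuse the previous solution
    _, k_naive = implicit_euler_step(y, dt, y)
    # forecast guess: linear extrapolation of the last two solutions
    guess = 2 * history[-1] - history[-2] if len(history) > 1 else y
    y_new, k_fore = implicit_euler_step(y, dt, guess)
    naive_iters += k_naive
    forecast_iters += k_fore
    history.append(y_new)
    y = y_new
print(naive_iters, forecast_iters)
```

For this decaying solution the extrapolated guess is closer to the root at every step, so the forecast never costs more Newton iterations and typically saves some, which is the mechanism the abstract exploits.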
NASA Technical Reports Server (NTRS)
Hillger, D. W.; Vonder Haar, T. H.
1977-01-01
The ability to provide mesoscale temperature and moisture fields from operational satellite infrared sounding radiances over the United States is explored. High-resolution sounding information for mesoscale analysis and forecasting is shown to be obtainable in mostly clear areas. An iterative retrieval algorithm applied to NOAA-VTPR radiances uses a mean radiosonde sounding as a best initial-guess profile. Temperature soundings are then retrieved at a horizontal resolution of about 70 km, as is an indication of the precipitable water content of the vertical sounding columns. Derived temperature values may be biased in general by the initial-guess sounding or in certain areas by the cloud correction technique, but the resulting relative temperature changes across the field when not contaminated by clouds will be useful for mesoscale forecasting and models. The derived moisture, affected only by high clouds, proves to be reliable to within 0.5 cm of precipitable water and contains valuable horizontal information. Present-day applications from polar-orbiting satellites as well as possibilities from upcoming temperature and moisture sounders on geostationary satellites are noted.
Statistical alignment: computational properties, homology testing and goodness-of-fit.
Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G
2000-09-08
The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model.Firstly, we show how to accelerate the statistical alignment algorithms several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes this method practical for actual data analysis.Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins.Finally, we describe a goodness-of-fit test, that allows testing the proposed insertion-deletion (indel) process inherent to this model and find that real sequences (here globins) probably experience indels longer than one, contrary to what is assumed by the model. Copyright 2000 Academic Press.
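The banding trick generalizes beyond the TKF91 likelihood. As a hedged illustration (plain edit distance rather than statistical alignment), the dynamic program below is confined to a diagonal band of half-width `band`; for similar sequences a narrow band recovers the exact answer at a fraction of the cost.

```python
def banded_edit_distance(a, b, band):
    """Levenshtein distance restricted to cells with |i - j| <= band.

    Returns the exact distance whenever the optimal alignment stays
    inside the band; cells outside the band are treated as infinite.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    prev = [j if j <= band else INF for j in range(m + 1)]
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        if i <= band:
            cur[0] = i
        lo, hi = max(1, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            sub = prev[j - 1] + (a[i - 1] != b[j - 1])   # match/substitute
            cur[j] = min(sub, prev[j] + 1, cur[j - 1] + 1)  # delete/insert
        prev = cur
    return prev[m]

# Similar sequences: a narrow band already recovers the exact distance
print(banded_edit_distance("kitten", "sitting", 3))   # 3
```

The full table costs O(nm); the banded version costs O(n·band), which is the same order of saving that makes the statistical alignment of two 1500-residue proteins feasible in seconds.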
Biometrics based authentication scheme for session initiation protocol.
Xie, Qi; Tang, Zhixiong
2016-01-01
Many two-factor challenge-response based session initiation protocol (SIP) authentication schemes have been proposed, but most of them are vulnerable to stolen smart card attacks and password guessing attacks. In this paper, we propose a novel three-factor SIP authentication scheme using biometrics, password and smart card, and utilize the pi calculus-based formal verification tool ProVerif to prove that the proposed protocol achieves security and authentication. Furthermore, our protocol is highly efficient when compared to other related protocols.
A fast, time-accurate unsteady full potential scheme
NASA Technical Reports Server (NTRS)
Shankar, V.; Ide, H.; Gorski, J.; Osher, S.
1985-01-01
The unsteady form of the full potential equation is solved in conservation form by an implicit method based on approximate factorization. At each time level, internal Newton iterations are performed to achieve time accuracy and computational efficiency. A local time linearization procedure is introduced to provide a good initial guess for the Newton iteration. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained by requiring the density to be continuous across the wake. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. The resulting unsteady method performs well, requiring fewer than 100 time steps per cycle at transonic Mach numbers even at low reduced frequencies of 0.1 or less. The code is fully vectorized for the CRAY-XMP and the VPS-32 computers.
A hybrid computational-experimental approach for automated crystal structure solution
NASA Astrophysics Data System (ADS)
Meredig, Bryce; Wolverton, C.
2013-02-01
Crystal structure solution from diffraction experiments is one of the most fundamental tasks in materials science, chemistry, physics and geology. Unfortunately, numerous factors render this process labour intensive and error prone. Experimental conditions, such as high pressure or structural metastability, often complicate characterization. Furthermore, many materials of great modern interest, such as batteries and hydrogen storage media, contain light elements such as Li and H that only weakly scatter X-rays. Finally, structural refinements generally require significant human input and intuition, as they rely on good initial guesses for the target structure. To address these many challenges, we demonstrate a new hybrid approach, first-principles-assisted structure solution (FPASS), which combines experimental diffraction data, statistical symmetry information and first-principles-based algorithmic optimization to automatically solve crystal structures. We demonstrate the broad utility of FPASS to clarify four important crystal structure debates: the hydrogen storage candidates MgNH and NH3BH3; Li2O2, relevant to Li-air batteries; and high-pressure silane, SiH4.
Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G
2014-09-05
A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.
Two-Stage Path Planning Approach for Designing Multiple Spacecraft Reconfiguration Maneuvers
NASA Technical Reports Server (NTRS)
Aoude, Georges S.; How, Jonathan P.; Garcia, Ian M.
2007-01-01
The paper presents a two-stage approach for designing optimal reconfiguration maneuvers for multiple spacecraft. These maneuvers involve well-coordinated and highly-coupled motions of the entire fleet of spacecraft while satisfying an arbitrary number of constraints. This problem is particularly difficult because of the nonlinearity of the attitude dynamics, the non-convexity of some of the constraints, and the coupling between the positions and attitudes of all spacecraft. As a result, the trajectory design must be solved as a single 6N DOF problem instead of N separate 6 DOF problems. The first stage of the solution approach quickly provides a feasible initial solution by solving a simplified version without differential constraints using a bi-directional Rapidly-exploring Random Tree (RRT) planner. A transition algorithm then augments this guess with feasible dynamics that are propagated from the beginning to the end of the trajectory. The resulting output is a feasible initial guess to the complete optimal control problem that is discretized in the second stage using a Gauss pseudospectral method (GPM) and solved using an off-the-shelf nonlinear solver. This paper also places emphasis on the importance of the initialization step in pseudospectral methods in order to decrease their computation times and enable the solution of a more complex class of problems. Several examples are presented and discussed.
Atmospheric Ascent Guidance for Rocket-Powered Launch Vehicles
NASA Technical Reports Server (NTRS)
Dukeman, Greg A.
2002-01-01
An advanced ascent guidance algorithm for rocket- powered launch vehicles is developed. This algorithm cyclically solves the calculus-of-variations two-point boundary-value problem starting at vertical rise completion through main engine cutoff. This is different from traditional ascent guidance algorithms which operate in a simple open-loop mode until high dynamic pressure (including the critical max-Q) portion of the trajectory is over, at which time guidance operates under the assumption of negligible aerodynamic acceleration (i.e., vacuum dynamics). The initial costate guess is corrected based on errors in the terminal state constraints and the transversality conditions. Judicious approximations are made to reduce the order and complexity of the state/costate system. Results comparing guided launch vehicle trajectories with POST open-loop trajectories are given verifying the basic formulation of the algorithm. Multiple shooting is shown to be a very effective numerical technique for this application. In particular, just one intermediate shooting point, in addition to the initial shooting point, is sufficient to significantly reduce sensitivity to the guessed initial costates. Simulation results from a high-fidelity trajectory simulation are given for the case of launch to sub-orbital cutoff conditions as well as launch to orbit conditions. An abort to downrange landing site formulation of the algorithm is presented.
Age-related differences in guessing on free and forced recall tests.
Huff, Mark J; Meade, Michelle L; Hutchison, Keith A
2011-05-01
This study examined possible age-related differences in recall, guessing, and metacognition on free recall tests and forced recall tests. Participants studied categorised and unrelated word lists and were asked to recall the items under one of the following test conditions: standard free recall, free recall with a penalty for guessing, free recall with no penalty for guessing, or forced recall. The results demonstrated interesting age differences regarding the impact of liberal test instructions (i.e., forced recall and no penalty) relative to more conservative test instructions (i.e., standard free recall and penalty) on memory performance. Specifically, once guessing was controlled, younger adults' recall of categorised lists varied in accordance with test instructions while older adults' recall of categorised lists did not differ between conservative and liberal test instructions, presumably because older adults approach standard free recall tests of categorised lists with a greater propensity towards guessing than young adults.
Development of a New Critical Thinking Test Using Item Response Theory
ERIC Educational Resources Information Center
Wagner, Teresa A.; Harvey, Robert J.
2006-01-01
The authors describe the initial development of the Wagner Assessment Test (WAT), an instrument designed to assess critical thinking, using the 5-faceted view popularized by the Watson-Glaser Critical Thinking Appraisal (WGCTA; G. B. Watson & E. M. Glaser, 1980). The WAT was designed to reduce the degree of successful guessing relative to the…
Development of a High Efficiency Compressor/Expander for an Air Cycle Air Conditioning System.
1982-11-15
[Nomenclature excerpt] ... bearing, lb; PHUB - hub pressure (initial guess), psia; RLG - rotor length; RPM - rotational speed, rpm; R - gas constant, lb-ft/lb-R; CP - specific... compressor discharge port pressure ratio (PCD/PC2); CDP - compressor pressure change, PCD-PC1; PHUB - pressure in compressor hub (acting on base of vanes)
Objective Interpolation of Scatterometer Winds
NASA Technical Reports Server (NTRS)
Tang, Wenquing; Liu, W. Timothy
1996-01-01
Global wind fields are produced by successive corrections that use measurements by the European Remote Sensing Satellite (ERS-1) scatterometer. The methodology is described. The wind fields at 10-meter height provided by the European Center for Medium-Range Weather Forecasting (ECMWF) are used to initialize the interpolation process. The interpolated wind field product ERSI is evaluated in terms of its improvement over the initial guess field (ECMWF) and the bin-averaged ERS-1 wind field (ERSB). Spatial and temporal differences between ERSI, ECMWF and ERSB are presented and discussed.
NASA Astrophysics Data System (ADS)
Dauenhauer, Eric C.; Majdalani, Joseph
2003-06-01
This article describes a self-similarity solution of the Navier-Stokes equations for a laminar, incompressible, and time-dependent flow that develops within a channel possessing permeable, moving walls. The case considered here pertains to a channel that exhibits either injection or suction across two opposing porous walls while undergoing uniform expansion or contraction. Instances of direct application include the modeling of pulsating diaphragms, sweat cooling or heating, isotope separation, filtration, paper manufacturing, irrigation, and the grain regression during solid propellant combustion. To start, the stream function and the vorticity equation are used in concert to yield a partial differential equation that lends itself to a similarity transformation. Following this similarity transformation, the original problem is reduced to solving a fourth-order differential equation in one similarity variable η that combines both space and time dimensions. Since two of the four auxiliary conditions are of the boundary value type, a numerical solution becomes dependent upon two initial guesses. In order to achieve convergence, the governing equation is first transformed into a function of three variables: The two guesses and η. At the outset, a suitable numerical algorithm is applied by solving the resulting set of twelve first-order ordinary differential equations with two unspecified start-up conditions. In seeking the two unknown initial guesses, the rapidly converging inverse Jacobian method is applied in an iterative fashion. Numerical results are later used to ascertain a deeper understanding of the flow character. The numerical scheme enables us to extend the solution range to physical settings not considered in previous studies. Moreover, the numerical approach broadens the scope to cover both suction and injection cases occurring with simultaneous wall motion.
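The two-guess inverse Jacobian iteration above belongs to the authors' fourth-order similarity equation; as a simplified stand-in, the sketch below shoots for a single unknown initial slope in the linear BVP y'' = -y, y(0) = 0, y(pi/2) = 1 (exact answer y'(0) = 1), using a secant update in place of the Jacobian iteration. The BVP and solver settings are illustrative assumptions, not the paper's problem.

```python
import math

def rk4(f, y, t, dt):
    """One classical Runge-Kutta step for a first-order system."""
    k1 = f(t, y)
    k2 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt*ki for yi, ki in zip(y, k3)])
    return [yi + dt/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def shoot(s, n=200):
    """Integrate y'' = -y with y(0) = 0, y'(0) = s out to t = pi/2."""
    f = lambda t, y: [y[1], -y[0]]
    y, t, dt = [0.0, s], 0.0, (math.pi / 2) / n
    for _ in range(n):
        y = rk4(f, y, t, dt)
        t += dt
    return y[0]

# Secant iteration on the miss distance y(pi/2) - 1
s0, s1 = 0.5, 2.0
for _ in range(20):
    r0, r1 = shoot(s0) - 1.0, shoot(s1) - 1.0
    if abs(r1) < 1e-12:
        break
    s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
print(s1)   # ≈ 1.0
```

In the article's problem there are two unknown start-up conditions rather than one, so the secant update is replaced by an inverse Jacobian iteration over a pair of guesses, but the convert-BVP-to-rootfinding structure is the same.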
Psychosocial and Behavioral Factors Associated with Bowel and Bladder Management after SCI
2014-10-01
disability which will probably turn into long-term disability before long…. I turned 61 so I could just go on Social Security as disabled which I guess for... enjoy life, apathy, emotional withdrawal, isolated/lonely (regardless of actual social environment). 2. Anger: frustration, resentment, hostility, fury... mental, or social). refers to the individual’s response to the “goodness of fit” between expectations and achievements, as experienced by the
Agency affects adults', but not children's, guessing preferences in a game of chance.
Harris, Adam J L; Rowley, Martin G; Beck, Sarah R; Robinson, Elizabeth J; McColgan, Kerry L
2011-09-01
Adults and children have recently been shown to prefer guessing the outcome of a die roll after the die has been rolled (but remained out of sight) rather than before it has been rolled. This result is contrary to the predictions of the competence hypothesis (Heath & Tversky, 1991 ), which proposes that people are sensitive to the degree of their relative ignorance and therefore prefer to guess about an outcome it is impossible to know, rather than one that they could know, but do not. We investigated the potential role of agency in guessing preferences about a novel game of chance. When the experimenter controlled the outcome, we replicated the finding that adults and 5- to 6-year-old children preferred to make their guess after the outcome had been determined. For adults only, this preference reversed when they exerted control over the outcome about which they were guessing. The adult data appear best explained by a modified version of the competence hypothesis that highlights the notion of control or responsibility. It is proposed that potential attributions of blame are related to the guesser's role in determining the outcome. The child data were consistent with an imagination-based account of guessing preferences.
Learning relevant features of data with multi-scale tensor networks
NASA Astrophysics Data System (ADS)
Miles Stoudenmire, E.
2018-07-01
Inspired by coarse-graining approaches used in physics, we show how similar algorithms can be adapted for data. The resulting algorithms are based on layered tree tensor networks and scale linearly with both the dimension of the input and the training set size. Computing most of the layers with an unsupervised algorithm, then optimizing just the top layer for supervised classification of the MNIST and fashion MNIST data sets gives very good results. We also discuss mixing a prior guess for supervised weights together with an unsupervised representation of the data, yielding a smaller number of features nevertheless able to give good performance.
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-10-01
We extend the recent bounds of Sason and Verdú relating Rényi entropy and Bayesian hypothesis testing (arXiv:1701.01974) to the quantum domain and show that they have a number of different applications. First, we obtain a sharper bound relating the optimal probability of correctly distinguishing elements of an ensemble of states to that of the pretty good measurement, and an analogous bound for optimal and pretty good entanglement recovery. Second, we obtain bounds relating optimal guessing and entanglement recovery to the fidelity of the state with a product state, which then leads to tight tripartite uncertainty and monogamy relations.
Exploring the perceptual biases associated with believing and disbelieving in paranormal phenomena.
Simmonds-Moore, Christine
2014-08-01
Ninety-five participants (32 believers, 30 disbelievers and 33 neutral believers in the paranormal) participated in an experiment comprising one visual and one auditory block of trials. Each block included one ESP, two degraded stimuli and one random trial. Each trial included 8 screens or epochs of "random" noise. Participants entered a guess if they perceived a stimulus or changed their mind about stimulus identity, rated guesses for confidence and made notes during each trial. Believers and disbelievers did not differ in the number of guesses made, or in their ability to detect degraded stimuli. Believers displayed a trend toward making faster guesses for some conditions and significantly higher confidence and more misidentifications concerning guesses than disbelievers. Guesses, misidentifications and faster response latencies were generally more likely in the visual than auditory conditions. ESP performance was no different from chance. ESP performance did not differ between belief groups or sensory modalities. Copyright © 2014 Elsevier Inc. All rights reserved.
Bayen, Ute J.; Kuhlmann, Beatrice G.
2010-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source guessing probabilities to the perceived contingency between sources and item types. When they do not have a representation of a contingency, they base their guesses on prior schematic knowledge. The authors provide support for this account in two experiments with sources presenting information that was expected for one source and somewhat unexpected for another. Schema-relevant information about the sources was provided at the time of encoding. When contingency perception was impeded by dividing attention, participants showed schema-based guessing (Experiment 1). Manipulating source - item contingency also affected guessing (Experiment 2). When this contingency was schema-inconsistent, it superseded schema-based expectations and led to schema-inconsistent guessing. PMID:21603251
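The probability-matching account has a simple quantitative consequence: matching yields lower expected accuracy than always guessing the more likely source whenever the perceived contingency differs from one half. A small numeric check (an illustration, not data from the paper):

```python
def matching_accuracy(p):
    """Expected accuracy when guess probabilities match the
    perceived source contingency p (probability matching)."""
    return p * p + (1 - p) * (1 - p)

def maximizing_accuracy(p):
    """Expected accuracy when always guessing the more likely source."""
    return max(p, 1 - p)

for p in (0.5, 0.6, 0.75, 0.9):
    print(p, matching_accuracy(p), maximizing_accuracy(p))
```

At a contingency of 0.75, for example, matching yields 0.625 expected accuracy against 0.75 for maximizing, which is why matched guessing is detectable in source-monitoring data.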
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally, we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
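The reported sensitivity to initial guesses is easy to reproduce on a toy problem. The sketch below (an illustration, not the authors' MGM or Levenberg-Marquardt code) fits only the center of a unit Gaussian by Gauss-Newton: a nearby guess converges to the true center, while a guess far outside the band's support stalls or diverges because the gradient all but vanishes there.

```python
import math

def gauss_newton_center(xs, ys, mu0, iters=50):
    """Fit the center mu of a unit-amplitude, unit-width Gaussian
    model m(x) = exp(-(x - mu)^2 / 2) by Gauss-Newton iteration."""
    mu = mu0
    for _ in range(iters):
        num = den = 0.0
        for x, y in zip(xs, ys):
            m = math.exp(-0.5 * (x - mu) ** 2)
            j = m * (x - mu)          # dm/dmu
            num += j * (m - y)        # J^T r
            den += j * j              # J^T J
        if den == 0.0:
            break                     # gradient vanished: iteration stalls
        mu -= num / den
    return mu

xs = [i * 0.5 - 5.0 for i in range(21)]          # grid on [-5, 5]
ys = [math.exp(-0.5 * x * x) for x in xs]        # noiseless data, true mu = 0

good = gauss_newton_center(xs, ys, mu0=0.5)      # converges to 0
far = gauss_newton_center(xs, ys, mu0=30.0)      # fails: no usable gradient
print(good, far)
```

With several overlapping bands and three parameters per band, the basin of attraction shrinks further, which is why an automated initialization procedure matters for MGM-style decompositions.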
2010-01-01
Background The intuitive early diagnostic guess could play an important role in reaching a final diagnosis. However, no study to date has attempted to quantify the importance of general practitioners' (GPs) ability to correctly appraise the origin of chest pain within the first minutes of an encounter. Methods The validation study was nested in a multicentre cohort study with a one year follow-up and included 626 successive patients who presented with chest pain and were attended by 58 GPs in Western Switzerland. The early diagnostic guess was assessed prior to a patient's history being taken by a GP and was then compared to a diagnosis of chest pain observed over the next year. Results Using summary measures clustered at the GP's level, the early diagnostic guess was confirmed by further investigation in 51.0% (CI 95%; 49.4% to 52.5%) of patients presenting with chest pain. The early diagnostic guess was more accurate in patients with a life threatening illness (65.4%; CI 95% 64.5% to 66.3%) and in patients who did not feel anxious (62.9%; CI 95% 62.5% to 63.3%). The predictive abilities of an early diagnostic guess were consistent among GPs. Conclusions The GPs early diagnostic guess was correct in one out of two patients presenting with chest pain. The probability of a correct guess was higher in patients with a life-threatening illness and in patients not feeling anxious about their pain. PMID:20170544
Verdon, François; Junod, Michel; Herzig, Lilli; Vaucher, Paul; Burnand, Bernard; Bischoff, Thomas; Pécoud, Alain; Favrat, Bernard
2010-02-21
No-signaling quantum key distribution: solution by linear programming
NASA Astrophysics Data System (ADS)
Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan
2015-02-01
We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we get a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential for generalization to different scenarios.
Thermal Vegetation Canopy Model Studies.
1981-08-01
optical and thermal canopy radiation models, and the interpretation of these measurements. Previous technical reports in this series have described... The initial guess is taken to be air temperature; thus, the solution approach may be interpreted as determining the modification to the air... provided assistance for interpreting the micrometeorological data. In addition, Dr. L. W. Gay of the School of Renewable Natural Resources, Arizona
Iterative Methods to Solve Linear RF Fields in Hot Plasma
NASA Astrophysics Data System (ADS)
Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo
2014-10-01
Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full wave modeling of RF fields in hot plasma with 3D nonuniformities is mostly prohibited, with memory demands of a direct solver placing a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.
NASA Astrophysics Data System (ADS)
Kim, Kyung-Ha; Park, Chandeok; Park, Sang-Young
2015-12-01
This work presents fuel-optimal altitude maintenance of Low-Earth-Orbit (LEO) spacecraft experiencing non-negligible air drag and J2 perturbation. A pseudospectral (direct) method is first applied to roughly estimate an optimal fuel consumption strategy, which then serves as the initial guess for its own precise determination. Based on the physical specifications of KOrea Multi-Purpose SATellite-2 (KOMPSAT-2), a Korean artificial satellite, numerical simulations show that a satellite ascends with full thrust at the early stage of the maneuver period and then descends with null thrust. While the thrust profile is presumably bang-off, it is difficult to precisely determine the switching time by using a pseudospectral method only. This is expected, since the optimal switching epoch does not, in general, coincide with one of the collocation points prescribed by the pseudospectral method. As an attempt to precisely determine the switching time and the associated optimal thrust history, a shooting (indirect) method is then employed with the initial guess obtained through the pseudospectral method. This hybrid process allows the optimal fuel consumption and thrust profiles of LEO spacecraft to be determined efficiently and precisely.
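The coarse-estimate-then-refine pattern can be caricatured with a scalar switching function: a coarse grid scan (playing the role of the pseudospectral estimate) brackets the sign change, and bisection (playing the role of the shooting refinement) pins it down. The function cos(t) and the grid size are illustrative assumptions, not the paper's switching function.

```python
import math

def switching_function(t):
    # toy stand-in for the thrust switching function; its zero
    # crossing plays the role of the bang-off switch epoch
    return math.cos(t)

# Stage 1: coarse grid scan brackets the sign change
N = 20
grid = [i * math.pi / N for i in range(N + 1)]
lo = hi = None
for a, b in zip(grid, grid[1:]):
    if switching_function(a) * switching_function(b) <= 0:
        lo, hi = a, b
        break

# Stage 2: bisection refines the bracket far below grid resolution
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if switching_function(lo) * switching_function(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(lo)   # ≈ pi/2
```

The grid alone can only localize the switch to within one cell, mirroring the observation that the optimal switching epoch rarely lands on a collocation point; the refinement stage removes that limitation.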
Global convergence of inexact Newton methods for transonic flow
NASA Technical Reports Server (NTRS)
Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.
1990-01-01
In computational fluid dynamics, nonlinear differential equations are essential to represent important effects such as shock waves in transonic flow. Discretized versions of these nonlinear equations are solved using iterative methods. In this paper an inexact Newton method using the GMRES algorithm of Saad and Schultz is examined in the context of the full potential equation of aerodynamics. In this setting, reliable and efficient convergence of Newton methods is difficult to achieve. A poor initial solution guess often leads to divergence or very slow convergence. This paper examines several possible solutions to these problems, including a standard local damping strategy for Newton's method and two continuation methods, one of which utilizes interpolation from a coarse grid solution to obtain the initial guess on a finer grid. It is shown that the continuation methods can be used to augment the local damping strategy to achieve convergence for difficult transonic flow problems. These include simple wings with shock waves as well as problems involving engine power effects. These latter cases are modeled using the assumption that each exhaust plume is isentropic but has a different total pressure and/or temperature than the freestream.
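The local damping strategy mentioned above can be sketched with step-halving on the residual; f(x) = arctan(x) is the classic scalar case where undamped Newton diverges from moderately large starting points. This is a generic illustration, not the paper's GMRES-based solver.

```python
import math

def damped_newton(f, fp, x0, tol=1e-10, max_iter=100):
    """Newton's method with simple step-halving (line search) damping."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = -fx / fp(x)
        lam = 1.0
        # halve the step until the residual actually decreases
        while abs(f(x + lam * step)) >= abs(fx) and lam > 1e-12:
            lam *= 0.5
        x += lam * step
    return x

# Plain Newton diverges for f = arctan from x0 = 2; damping restores
# convergence to the root x = 0.
root = damped_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), 2.0)
print(root)   # ≈ 0.0
```

Continuation plays a complementary role: a coarse-grid solution interpolated onto the fine grid moves the starting point into the region where the full, undamped Newton step is safe.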
Pelet, S; Previte, M J R; Laiho, L H; So, P T C
2004-10-01
Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in significantly slower convergence of the data-fitting algorithm as compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Recognizing that image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed that assigns initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. This method further allows second-harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. On our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society
ERIC Educational Resources Information Center
Pillow, Bradford H.; Pearson, RaeAnne M.
2009-01-01
Adults' and kindergarten through fourth-grade children's evaluations and explanations of inductive inferences, deductive inferences, and guesses were assessed. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Beginning in third grade, deductions were rated as more certain than strong…
The Ranschburg Effect: Tests of the Guessing-Bias and Proactive Interference Hypotheses
ERIC Educational Resources Information Center
Walsh, Michael F.; Schwartz, Marian
1977-01-01
The guessing-bias and proactive interference hypotheses of the Ranschburg Effect were investigated by giving three groups different instructions as to guessing during recall. Results failed to support the prediction that the effect should be reduced or eliminated on shift trials. Neither hypothesis received significant support. (CHK)
Generically Used Expert Scheduling System (GUESS): User's Guide Version 1.0
NASA Technical Reports Server (NTRS)
Liebowitz, Jay; Krishnamurthy, Vijaya; Rodens, Ira
1996-01-01
This user's guide contains instructions explaining how to best operate the program GUESS, a generic expert scheduling system. GUESS incorporates several important features for a generic scheduler, including automatic scheduling routines to generate a 'first' schedule for the user, a user interface that includes Gantt charts and enables the human scheduler to manipulate schedules manually, diagnostic report generators, and a variety of scheduling techniques. The current version of GUESS runs on an IBM PC or compatible in the Windows 3.1 or Windows '95 environment.
NASA Astrophysics Data System (ADS)
Joslin, R. D.
1991-04-01
The use of passive devices to obtain drag and noise reduction or transition delays in boundary layers is highly desirable. One such device that shows promise for hydrodynamic applications is the compliant coating. The present study extends the mechanical model to allow for three-dimensional waves, and also examines the effect of compliant walls on three-dimensional secondary instabilities. For the primary and secondary instability analyses, spectral and shooting approximations are used to obtain solutions of the governing equations and boundary conditions. The spectral approximation consists of local and global methods of solution, while the shooting approach is local. The global method is used to determine the discrete spectrum of eigenvalues without any initial guess. The local method requires a sufficiently accurate initial guess to converge to an eigenvalue. Eigenvectors may be obtained with either local approach. In the initial stage of this analysis, two- and three-dimensional primary instabilities propagating over compliant coatings are considered. Results over the compliant walls are compared with the rigid-wall case. Three-dimensional instabilities are found to dominate transition over the compliant walls considered. However, transition delays are still obtained and are compared with transition-delay predictions for rigid walls. The angles of wave propagation are plotted against Reynolds number and frequency. Low-frequency waves are found to be highly three-dimensional.
A Robust and Efficient Method for Steady State Patterns in Reaction-Diffusion Systems
Lo, Wing-Cheong; Chen, Long; Wang, Ming; Nie, Qing
2012-01-01
An inhomogeneous steady state pattern of nonlinear reaction-diffusion equations with no-flux boundary conditions is usually computed by solving the corresponding time-dependent reaction-diffusion equations using temporal schemes. Nonlinear solvers (e.g., Newton’s method) take less CPU time in direct computation of the steady state; however, their convergence is sensitive to the initial guess, often leading to divergence or convergence to a spatially homogeneous solution. Systematic numerical exploration of spatial patterns of reaction-diffusion equations under different parameter regimes requires a numerical method that is efficient and robust to the initial condition or initial guess, with a better likelihood of convergence to an inhomogeneous pattern. Here, a new approach is proposed that combines the robustness of temporal schemes with the fast convergence of Newton’s method in solving for steady states of reaction-diffusion equations. In particular, an adaptive implicit Euler with inexact solver (AIIE) method is found to be much more efficient than temporal schemes and more robust in convergence than typical nonlinear solvers (e.g., Newton’s method) in finding inhomogeneous patterns. Application of this new approach to two reaction-diffusion equations in one, two, and three spatial dimensions, along with direct comparisons to several other existing methods, demonstrates that AIIE is a more desirable method for searching for inhomogeneous spatial patterns of reaction-diffusion equations in a large parameter space. PMID:22773849
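The AIIE idea can be sketched in miniature: march implicit Euler steps toward steady state, solve each implicit step only inexactly (a few damped sweeps), and adapt the step size. The 1-D Allen-Cahn equation, the no-flux discretization, and all parameters below are our choices for illustration, not one of the paper's test systems.

```python
import numpy as np

# Sketch of adaptive implicit Euler with an inexact inner solver (AIIE-like),
# applied to u_t = D u_xx + u - u**3 with no-flux boundaries.  From random
# initial data it converges to an inhomogeneous (patterned) steady state.
rng = np.random.default_rng(0)
n, D, dx = 100, 2e-5, 1.0 / 100

def rhs(u):
    upad = np.concatenate(([u[0]], u, [u[-1]]))     # no-flux (mirror) BCs
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / dx**2
    return D * lap + u - u**3

def aiie(u, dt=1e-3, steps=2000):
    for _ in range(steps):
        v = u.copy()
        for _ in range(5):          # inexact inner solve: damped sweeps
            # fixed-point sweep toward the implicit target v = u + dt*rhs(v)
            v = v + 0.5 * (u + dt * rhs(v) - v)
        if np.all(np.isfinite(v)) and np.linalg.norm(v - u) < 10:
            u, dt = v, min(dt * 1.2, 0.2)   # accept and grow the step
        else:
            dt *= 0.5                       # reject and shrink the step
        if np.linalg.norm(rhs(u)) < 1e-8:   # steady state reached
            break
    return u

u0 = 0.1 * rng.standard_normal(n)
u_ss = aiie(u0)
```

The inexact inner solve is deliberately cheap; robustness comes from the time-stepping structure, while the growing step size recovers much of the speed of a pure nonlinear solver.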
The Costs and Benefits of Testing and Guessing on Recognition Memory
ERIC Educational Resources Information Center
Huff, Mark J.; Balota, David A.; Hutchison, Keith A.
2016-01-01
We examined whether 2 types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler…
ERIC Educational Resources Information Center
Pillow, Bradford H.; Pearson, RaeAnne M.; Hecht, Mary; Bremer, Amanda
2010-01-01
Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults…
Method for guessing the response of a physical system to an arbitrary input
Wolpert, David H.
1996-01-01
Stacked generalization is used to minimize the generalization errors of one or more generalizers acting on a known set of input values and output values representing a physical manifestation and a transformation of that manifestation, e.g., hand-written characters to ASCII characters, spoken speech to computer command, etc. Stacked generalization acts to deduce the biases of the generalizer(s) with respect to a known learning set and then correct for those biases. This deduction proceeds by generalizing in a second space whose inputs are the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is the correct guess. Stacked generalization can be used to combine multiple generalizers or to provide a correction to a guess from a single generalizer.
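A minimal regression version of stacked generalization can be written in a few lines: the level-1 learner is trained on the leave-one-out guesses of the level-0 generalizers, which is how the biases of those generalizers are deduced and corrected. The two toy level-0 learners (a line fit and a nearest-neighbor lookup) and the least-squares level-1 combiner below are our illustrative choices, not the patent's implementation.

```python
import numpy as np

# Stacked generalization for 1-D regression (a sketch of the idea).
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)

def fit_linear(xt, yt):            # level-0 generalizer A: straight line
    A = np.vstack([xt, np.ones_like(xt)]).T
    a, b = np.linalg.lstsq(A, yt, rcond=None)[0]
    return lambda q: a * q + b

def fit_nn(xt, yt):                # level-0 generalizer B: nearest neighbor
    return lambda q: yt[np.argmin(np.abs(xt - q))]

# Leave-one-out guesses of each level-0 learner form the level-1 inputs.
Z = np.zeros((len(x), 2))
for i in range(len(x)):
    m = np.arange(len(x)) != i
    Z[i, 0] = fit_linear(x[m], y[m])(x[i])
    Z[i, 1] = fit_nn(x[m], y[m])(x[i])

# Level-1 generalizer: least-squares weights on the level-0 guesses.
w = np.linalg.lstsq(Z, y, rcond=None)[0]

# Final stacked predictor uses level-0 learners trained on all the data.
g1, g2 = fit_linear(x, y), fit_nn(x, y)
stacked = lambda q: w @ np.array([g1(q), g2(q)])
```

Because the level-1 weights are fit to held-out guesses rather than in-sample fits, the combiner learns which generalizer to trust where, instead of rewarding overfitting.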
Initial values for the integration scheme to compute the eigenvalues for propagation in ducts
NASA Technical Reports Server (NTRS)
Eversman, W.
1977-01-01
A scheme for the calculation of eigenvalues in the problem of acoustic propagation in a two-dimensional duct is described. The computation method involves changing the coupled transcendental nonlinear algebraic equations into an initial value problem involving a nonlinear ordinary differential equation. The simplest approach is to use as initial values the hardwall eigenvalues and to integrate away from these values as the admittance varies from zero to its actual value with a linear variation. The approach leads to a powerful root finding routine capable of computing the transverse and axial wave numbers for two-dimensional ducts for any frequency, lining, admittance and Mach number without requiring initial guesses or starting points.
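The continuation idea, starting from the known hardwall eigenvalues and integrating an ODE as the admittance is turned on, can be illustrated on a simplified eigencondition. The equation kappa*tan(kappa) = eta below is a stand-in of the same transcendental type, not Eversman's actual duct equation, and the final Newton polish is our addition.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import newton

# Illustrative eigencondition: f(kappa, eta) = kappa*tan(kappa) - eta = 0,
# where eta plays the role of wall admittance.  At eta = 0 (hard wall) the
# roots are kappa = n*pi, so no initial guess is ever needed: we integrate
# d(kappa)/d(eta), obtained by implicit differentiation, away from n*pi.
def dkappa_deta(eta, kappa):
    k = kappa[0]
    dfdk = np.tan(k) + k / np.cos(k) ** 2   # df/d(kappa)
    return [1.0 / dfdk]                     # d(kappa)/d(eta) = -f_eta / f_k

def duct_eigenvalue(n, eta_target):
    k0 = n * np.pi                          # hardwall eigenvalue as start
    sol = solve_ivp(dkappa_deta, (0.0, eta_target), [k0], rtol=1e-10)
    f = lambda k: k * np.tan(k) - eta_target
    return newton(f, sol.y[0, -1])          # polish the integrated value

k1 = duct_eigenvalue(1, 0.5)
```

Integrating from the hardwall value with a linear admittance variation is exactly the "root finding without starting points" structure the abstract describes; only the specific eigencondition differs here.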
The dynamics of fidelity over the time course of long-term memory.
Persaud, Kimele; Hemmer, Pernille
2016-08-01
Bayesian models of cognition assume that prior knowledge about the world influences judgments. Recent approaches have suggested that the loss of fidelity from working to long-term (LT) memory is simply due to an increased rate of guessing (e.g., Brady, Konkle, Gill, Oliva, & Alvarez, 2013). That is, recall is the result of either remembering (with some noise) or guessing. This stands in contrast to Bayesian models of cognition, which assume that prior knowledge about the world influences judgments and that recall is a combination of expectations learned from the environment and noisy memory representations. Here, we evaluate the time course of fidelity in LT episodic memory, and the relative contributions of prior category knowledge and guessing, using a continuous recall paradigm. At an aggregate level, performance reflects a high rate of guessing. However, when the aggregate data are partitioned by lag (i.e., the number of presentations from study to test), or are un-aggregated, performance appears to be more complex than just remembering with some noise plus guessing. We implemented three models: the standard remember-guess model, a three-component remember-guess model, and a Bayesian mixture model, and evaluated these models against the data. The results emphasize the importance of taking into account the influence of prior category knowledge on memory. Copyright © 2016 Elsevier Inc. All rights reserved.
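A standard remember-guess mixture of the kind referenced above can be fit by maximum likelihood in a few lines: recall error is von Mises noise with probability p_mem, or a uniform guess otherwise. The simulated data, starting values, and bounds below are our assumptions for illustration; the paper's actual models include additional components.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0

# Simulate continuous-recall errors from a remember-guess process.
rng = np.random.default_rng(2)
n = 2000
p_true, kappa_true = 0.7, 8.0
remember = rng.random(n) < p_true
err = np.where(remember,
               rng.vonmises(0.0, kappa_true, n),     # noisy memory
               rng.uniform(-np.pi, np.pi, n))        # random guess

def negloglik(theta):
    p, kappa = theta
    vm = np.exp(kappa * np.cos(err)) / (2 * np.pi * i0(kappa))
    return -np.sum(np.log(p * vm + (1 - p) / (2 * np.pi)))

fit = minimize(negloglik, x0=[0.5, 4.0],
               bounds=[(1e-3, 1 - 1e-3), (1e-2, 100.0)])
p_hat, kappa_hat = fit.x
```

Fitting this model per lag condition is what separates a genuine change in memory precision (kappa) from a change in the guessing rate (1 - p), which is the distinction at issue in the abstract.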
Correction for Guessing in the Framework of the 3PL Item Response Theory
ERIC Educational Resources Information Center
Chiu, Ting-Wei
2010-01-01
Guessing behavior is an important topic with regard to assessing proficiency on multiple-choice tests, particularly for examinees at lower levels of proficiency, due to the greater potential for systematic error or bias that inflates observed test scores. Methods that incorporate a correction for guessing on high-stakes tests generally rely…
Children's Understanding of the Words "Know" and "Guess."
ERIC Educational Resources Information Center
Miscione, John L.; And Others
This study investigated preschool children's understanding of the words "know" and "guess." Subjects for the study were 48 male and female preschool children ranging in age from 3.6 to 6.6 years. The children were divided into three age groups representing one year intervals. The task for the study involved a "guessing" game in which a colored…
Children's Evaluation of the Certainty of Another Person's Inductive Inferences and Guesses
ERIC Educational Resources Information Center
Pillow, Bradford H.; Pearson, RaeAnne M.
2012-01-01
In three studies, 5-10-year-old children and an adult comparison group judged another's certainty in making inductive inferences and guesses. Participants observed a puppet make strong inductions, weak inductions, and guesses. Participants either had no information about the correctness of the puppet's conclusion, knew that the puppet was correct,…
IRT Models for Ability-Based Guessing
ERIC Educational Resources Information Center
Martin, Ernesto San; del Pino, Guido; De Boeck, Paul
2006-01-01
An ability-based guessing model is formulated and applied to several data sets regarding educational tests in language and in mathematics. The formulation of the model is such that the probability of a correct guess does not only depend on the item but also on the ability of the individual, weighted with a general discrimination parameter. By so…
Generalized gradient algorithm for trajectory optimization
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Bryson, A. E.; Slattery, R.
1990-01-01
The generalized gradient algorithm presented and verified as a basis for the solution of trajectory optimization problems improves the performance index while reducing violations of path and terminal equality constraints. The algorithm is conveniently divided into two phases: the first, 'feasibility' phase yields a solution satisfying both path and terminal constraints, while the second, 'optimization' phase uses the results of the first phase as initial guesses.
Pillow, B H; Hill, V; Boyce, A; Stein, C
2000-03-01
Three experiments investigated children's understanding of inference as a source of knowledge. Children observed a puppet make a statement about the color of one of two hidden toys after the puppet (a) looked directly at the toy (looking), (b) looked at the other toy (inference), or (c) looked at neither toy (guessing). Most 4-, 5-, and 6-year-olds did not rate the puppet as being more certain of the toy's color after the puppet looked directly at it or inferred its color than they did after the puppet guessed its color. Most 8- and 9-year-olds distinguished inference and looking from guessing. The tendency to explain the puppet's knowledge by referring to inference increased with age. Children who referred to inference in their explanations were more likely to judge deductive inference as more certain than guessing.
ERIC Educational Resources Information Center
Wang, Wen-Chung; Huang, Sheng-Yun
2011-01-01
The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Getting Lucky: How Guessing Threatens the Validity of Performance Classifications
ERIC Educational Resources Information Center
Foley, Brett P.
2016-01-01
There is always a chance that examinees will answer multiple choice (MC) items correctly by guessing. Design choices in some modern exams have created situations where guessing at random through the full exam--rather than only for a subset of items where the examinee does not know the answer--can be an effective strategy to pass the exam. This…
An age-related attenuation of selectivity of choice in a modified guessing task.
Sanford, A J; Jack, E; Maule, A J
1977-01-01
Previous research has shown that older Ss tend to be less selective in multi-source monitoring tasks, in that they do not observe the more likely source of information as frequently as the young do. On the other hand, it has also been found that in a simple guessing-game or probability-matching task, older Ss are no different in their patterns of prediction. An experiment is described in which old and young Ss take part in a simple guessing-game task where uncertainty as to the success of a guess is made artificially high by the introduction of a proportion of trials on which the stimulus event could not be guessed. Under these conditions, old Ss were less selective in their responses. It is suggested that the results support the view that older Ss are less selective at high levels of uncertainty about the likelihood of a guess being correct; this is consistent with both types of earlier results, goes part-way toward clarifying the differences, and provides a further example of a situation in which attenuated guessing selectivity is associated with age.
Denoising forced-choice detection data.
García-Pérez, Miguel A
2010-02-01
Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.
ERIC Educational Resources Information Center
Campbell, Mark L.
2015-01-01
Multiple-choice exams, while widely used, are necessarily imprecise due to the contribution to the final student score made by guessing. This past year at the United States Naval Academy, the construction and grading scheme for the department-wide general chemistry multiple-choice exams were revised with the goal of decreasing the contribution of…
Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N
2016-07-12
We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
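The core refinement step, iteratively improving a guess Z for the inverse square root of the overlap matrix S, can be sketched with the Newton-Schulz iteration, one standard recursion of this type. The dense random "overlap" matrix and the crude scaled-identity initial guess below are our illustrative stand-ins; the paper's scheme adds sparse-matrix thresholding and dynamical propagation of Z between MD steps.

```python
import numpy as np

# Newton-Schulz refinement of Z toward S**(-1/2): Z stays a polynomial in S,
# so Z^T S Z -> I as the iteration converges.
rng = np.random.default_rng(3)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)
S = np.eye(n) + 0.1 * (A + A.T)         # symmetric, well-conditioned "overlap"

def refine(S, Z, iters=30):
    I = np.eye(len(S))
    for _ in range(iters):
        # Z <- Z (3I - Z^T S Z) / 2, convergent while ||I - Z^T S Z|| < 1
        Z = 0.5 * Z @ (3.0 * I - Z.T @ S @ Z)
    return Z

Z0 = np.eye(n) / np.sqrt(np.linalg.norm(S, 2))   # crude initial guess
Z = refine(S, Z0)
```

The point exploited in the paper is that a much better Z0 (from a previous MD step) makes only one or two such refinement sweeps necessary, which is where the speedup over diagonalizing S comes from.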
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
Multiple steady states in atmospheric chemistry
NASA Technical Reports Server (NTRS)
Stewart, Richard W.
1993-01-01
The equations describing the distributions and concentrations of trace species are nonlinear and may thus possess more than one solution. This paper develops methods for searching for multiple physical solutions to chemical continuity equations and applies these to subsets of equations describing tropospheric chemistry. The calculations are carried out with a box model and use two basic strategies. The first strategy is a 'search' method. This involves fixing model parameters at specified values, choosing a wide range of initial guesses at a solution, and using a Newton-Raphson technique to determine if different initial points converge to different solutions. The second strategy involves a set of techniques known as homotopy methods. These do not require an initial guess, are globally convergent, and are guaranteed, in principle, to find all solutions of the continuity equations. The first method is efficient but essentially 'hit or miss' in the sense that it cannot guarantee that all solutions which may exist will be found. The second method is computationally burdensome but can, in principle, determine all the solutions of a photochemical system. Multiple solutions have been found for models that contain a basic complement of photochemical reactions involving O(x), HO(x), NO(x), and CH4. In the present calculations, transitions occur between stable branches of a multiple solution set as a control parameter is varied. These transitions are manifestations of hysteresis phenomena in the photochemical system and may be triggered by increasing the NO flux or decreasing the CH4 flux from current mean tropospheric levels.
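The 'search' strategy is easy to sketch: scatter many initial guesses across the state space, hand each to a Newton-type solver, and de-duplicate the converged roots. The scalar cubic below is our toy stand-in for a chemical continuity equation (chosen to have three steady states), not the paper's photochemical system.

```python
import numpy as np
from scipy.optimize import fsolve

# Multi-start Newton search for multiple steady states of f(x) = 0.
def f(x):
    return np.array([x[0] ** 3 - 3.0 * x[0] + 1.0])

roots = []
for guess in np.linspace(-3, 3, 25):
    x, info, ier, _ = fsolve(f, [guess], full_output=True)
    # keep only converged roots not already found
    if ier == 1 and not any(abs(x[0] - r) < 1e-6 for r in roots):
        roots.append(x[0])

roots = sorted(roots)   # the three steady states of the toy system
```

As the abstract notes, this is efficient but hit-or-miss: a root whose basin of attraction the guesses never touch is simply missed, which is the gap the homotopy methods close.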
NASA Astrophysics Data System (ADS)
Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.
2017-11-01
In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
An Adaptive Buddy Check for Observational Quality Control
NASA Technical Reports Server (NTRS)
Dee, Dick P.; Rukhovets, Leonid; Todling, Ricardo; DaSilva, Arlindo M.; Larson, Jay W.; Einaudi, Franco (Technical Monitor)
2000-01-01
An adaptive buddy check algorithm is presented that adjusts tolerances for outlier observations based on the variability of surrounding data. The algorithm derives from a statistical hypothesis test combined with maximum-likelihood covariance estimation. Its stability is shown to depend on the initial identification of outliers by a simple background check. The adaptive feature ensures that the final quality control decisions are not very sensitive to prescribed statistics of first-guess and observation errors, nor on other approximations introduced into the algorithm. The implementation of the algorithm in a global atmospheric data assimilation is described. Its performance is contrasted with that of a non-adaptive buddy check, for the surface analysis of an extreme storm that took place in Europe on 27 December 1999. The adaptive algorithm allowed the inclusion of many important observations that differed greatly from the first guess and that would have been excluded on the basis of prescribed statistics. The analysis of the storm development was much improved as a result of these additional observations.
Setting and changing feature priorities in visual short-term memory.
Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin
2017-04-01
Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.
The ironic effect of guessing: increased false memory for mediated lists in younger and older adults
Coane, Jennifer H.; Huff, Mark J.; Hutchison, Keith A.
2016-01-01
Younger and older adults studied lists of words directly (e.g., creek, water) or indirectly (e.g., beaver, faucet) related to a nonpresented critical lure (CL; e.g., river). Indirect (i.e., mediated) lists presented items that were only related to CLs through nonpresented mediators (i.e., directly related items). Following study, participants completed a condition-specific task, math, a recall test with or without a warning about the CL, or tried to guess the CL. On a final recognition test, warnings (vs. math and recall without warning) decreased false recognition for direct lists, and guessing increased mediated false recognition (an ironic effect of guessing) in both age groups. The observed age-invariance of the ironic effect of guessing suggests that processes involved in mediated false memory are preserved in aging and confirms the effect is largely due to activation in semantic networks during encoding and to the strengthening of these networks during the interpolated tasks. PMID:26393390
Misinformation, partial knowledge and guessing in true/false tests.
Burton, Richard F
2002-09-01
Examiners disagree on whether or not multiple choice and true/false tests should be negatively marked. Much of the debate has been clouded by neglect of the role of misinformation and by vagueness regarding both the specification of test types and "partial knowledge" in relation to guessing. Moreover, variations in risk-taking in the face of negative marking have too often been treated in absolute terms rather than in relation to the effect of guessing on test unreliability. This paper aims to clarify these points and to compare the ill-effects on test reliability of guessing and of variable risk-taking. Three published studies on medical students are examined. These compare responses in true/false tests obtained with both negative marking and number-right scoring. The studies yield data on misinformation and on the extent to which students may fail to benefit from distrusted partial knowledge when there is negative marking. A simple statistical model is used to compare variations in risk-taking with test unreliability due to blind guessing under number-right scoring conditions. Partial knowledge should be least problematic with independent true/false items. The effect on test reliability of blind guessing under number-right conditions is generally greater than that due to the over-cautiousness of some students when there is negative marking.
The 60 Minute Network Security Guide (First Steps Towards a Secure Network Environment)
2001-10-16
default/passwd file in UNIX. Administrators should obtain and run password-guessing programs (e.g., "John the Ripper," "L0phtCrack," and "Crack") ... system on which it is running, it is a good idea to transfer the encrypted passwords (the dumped SAM database for Windows and the /etc/passwd and /etc ... ownership by root and group sys. The /etc/passwd file should have permissions 644 with owner root and group root. ... be cracked every month to find
Short Course on Cardiopulmonary Aspects of Aerospace Medicine. Addendum
1989-05-01
following year. In 1986 he was again admitted to the hospital in shock, due to another pericardial effusion, now caused by a toxoplasmosis infection ... After therapy, he made a full recovery. The problem is: what caused the first effusion? Was it traumatic or was it toxoplasmosis? Until now, we ... guessed that the first one was traumatic and the second one was toxoplasmosis. However, toxoplasmosis gives lifelong immunity after the initial infection
Filtering observations without the initial guess
NASA Astrophysics Data System (ADS)
Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.
2017-12-01
Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle requires "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom known fully in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in lieu. It is therefore desirable to be able to exploit efficient (time sequential) Bayesian algorithms like the Kalman filter while not forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF) where requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. 
These approximation approaches are also briefly described in the presentation.
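The key idea, propagating information (the inverse covariance) so that no prior distribution is needed, can be sketched in a few lines. This is a minimal measurement-update example, not the TRF implementation:

```python
import numpy as np

# Information-filter measurement updates: accumulate the information matrix
# Y = sum(H^T R^-1 H) and the information vector y = sum(H^T R^-1 z).
# Starting from Y = 0 encodes "no prior" -- i.e., infinite initial covariance,
# which a standard Kalman filter cannot represent.
Y = np.zeros((2, 2))
y = np.zeros(2)

def update(Y, y, H, R, z):
    """Fold one linear observation z = H x + noise(R) into (Y, y)."""
    Ri = np.linalg.inv(R)
    return Y + H.T @ Ri @ H, y + H.T @ Ri @ z

# Two scalar observations of a 2-state vector x = [a, b]
Y, y = update(Y, y, np.array([[1.0, 0.0]]), np.array([[0.5]]), np.array([3.0]))
Y, y = update(Y, y, np.array([[0.0, 1.0]]), np.array([[0.5]]), np.array([-1.0]))

x_hat = np.linalg.solve(Y, y)  # state estimate, available once Y is invertible
print(x_hat)  # [ 3. -1.]
```

Until Y becomes invertible, the pair (Y, y) carries exactly the partial (marginal) information mentioned in the abstract; the explicit mean and covariance are recovered only when the data support them.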
Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor
Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong
2011-01-01
In this paper, we propose simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures the range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution, and it could be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
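The contrast drawn here between a local least-squares solver and a population-based global search can be illustrated with a minimal particle swarm optimizer on a standard multimodal test function. This is a generic sketch with made-up hyperparameters, unrelated to the actual HMLVS calibration model:

```python
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    # Standard multimodal test function; global minimum 0 at x = 0,
    # surrounded by many local minima that trap gradient-based solvers
    return 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

def pso(f, dim=2, n=30, iters=200, lo=-5.12, hi=5.12, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm optimization (toy parameter choices)."""
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + pull toward each particle's best + pull toward swarm best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, val = pso(rastrigin)
print(best, val)  # near the origin, far below typical local-minimum values
```

A local least-squares routine started in the wrong basin of such a landscape converges to the nearest local minimum, which mirrors the failure mode reported for the nonlinear least-squares optimizer above.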
Towards understanding the guessing game: a dynamical systems’ perspective
NASA Astrophysics Data System (ADS)
Reimann, Stefan
2004-08-01
The so-called “Guessing Game” or α-Beauty Contest serves as a paradigmatic conceptual framework for competitive price formation on financial markets beyond traditional equilibrium finance. It highlights features that are reasonable to consider when dealing with price formation on real markets. Nonetheless this game is still poorly understood. We propose a model which is essentially based on two assumptions: (1) players consider intervals rather than exact numbers to cope with incomplete knowledge and (2) players iteratively update their recent guesses. It provides an explanation for typical patterns observed in real data, such as the strict positivity of outcomes in the 1-shot setting, the skew background distribution of guessed numbers, as well as the polynomial convergence towards the game-theoretic Nash equilibrium in the iterative setting.
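The iterative setting can be simulated directly. The sketch below uses a hypothetical update rule and noise model, not the authors' interval-based model, and shows the mean guess decaying toward the Nash equilibrium at 0:

```python
import numpy as np

rng = np.random.default_rng(1)

def beauty_contest(alpha=2/3, n_players=50, rounds=10):
    """Iterated alpha-beauty contest: each round, players aim at alpha times
    the previous round's mean guess, perturbed by noise and clipped to the
    admissible range [0, 100]."""
    guesses = rng.uniform(0, 100, n_players)
    means = [guesses.mean()]
    for _ in range(rounds):
        target = alpha * guesses.mean()
        guesses = np.clip(target + rng.normal(0, 1, n_players), 0, 100)
        means.append(guesses.mean())
    return means

means = beauty_contest()
print(means[0], means[-1])  # mean guess shrinks geometrically toward 0
```

Because the mean contracts by roughly a factor of alpha per round while the noise floor stays strictly positive, the simulated outcomes remain positive in every round, consistent with the strictly positive 1-shot outcomes the abstract highlights.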
Guess what? Here is a new tool that finds some new guessing attacks
2003-01-01
Ricardo Corin, Sreekanth Malladi, Jim Alves-Foss, and Sandro Etalle. A type-flaw occurs when a message of one type is received by a ... satisfying condition 1), but not before guessing (satisfying condition 2). ... Feb 2003. ... 4.1 Examples. Example 4.1 Consider the following protocol: Msg 1. a
2017-05-25
Guessing Right for the Next War: Streamlining, Pooling, and Right-Timing Force Design Decisions for an Environment of Uncertainty ... JUN 2016 - MAY 2017 ... committing to one force design solution to modern combat. The Army after World War II shied away from temporary organizational systems like these in
NASA Astrophysics Data System (ADS)
Rabin, Sam; Alexander, Peter; Anthoni, Peter; Henry, Roslyn; Huntingford, Chris; Pugh, Thomas; Rounsevell, Mark; Arneth, Almut
2017-04-01
A major question facing humanity is how well agricultural production systems will be able to feed the world in a future of rapid climate change, population growth, and demand shifts—all while minimizing our impact on the natural world. Global modeling has frequently been used to investigate certain aspects of this question, but in order to properly address the challenge, no one part of the human-environmental system can be assessed in isolation. It is especially critical that the effect on agricultural yields of changing temperature and precipitation regimes (including seasonal timing and frequency and intensity of extreme events), as well as rising atmospheric carbon dioxide levels, be taken into account when planning for future food security. Coupled modeling efforts, where changes in various parts of the Earth system are allowed to feed back onto one another, represent a powerful strategy in this regard. This presentation describes the structure and initial results of an effort to couple a biologically-representative vegetation and crop production simulator, LPJ-GUESS, with the climate emulator IMOGEN and the land-use model PLUMv2. With IMOGEN providing detailed future weather simulations, LPJ-GUESS simulates natural vegetation as well as cropland and pasture/rangeland; the simulated exchange of greenhouse gases between the land and atmosphere feeds back into IMOGEN's predictions. LPJ-GUESS also produces potential vegetation yields for irrigated vs. rainfed crops under three levels of nitrogen fertilizer addition. PLUMv2 combines these potential yields with endogenous demand and agricultural commodity price to calculate an optimal set of land use distributions and management strategies across the world for the next five years of simulation, based on socio-economic scenario data. These land uses are then fed back into LPJ-GUESS, and the cycle of climate, greenhouse gas emissions, crop yields, and land-use change continues. 
The globally gridded nature of the model—at 0.5-degree resolution across the world—generates spatially explicit projections at a sub-national scale relevant to individual land managers. Here, we present the results of using the LPJ-GUESS-PLUM-IMOGEN coupled model to project agricultural production and management strategies under several scenarios of greenhouse gas emissions (the Representative Concentration Pathways) and socioeconomic futures (the Shared Socioeconomic Pathways) through the year 2100. In the future, the coupled model could be used to generate projections for alternative scenarios: for example, to consider the implications from land-based climate change mitigation policies, or changes to international trade tariffs regimes.
Trimming Line Design using New Development Method and One Step FEM
NASA Astrophysics Data System (ADS)
Chung, Wan-Jin; Park, Choon-Dal; Yang, Dong-yol
2005-08-01
In most automobile panel manufacturing, trimming is generally performed prior to flanging. Finding a feasible trimming line is crucial to obtaining an accurate edge profile after flanging. The section-based method develops the blank along section planes and finds the trimming line by generating a loop of end points. This method suffers from inaccurate results in regions with out-of-section motion. On the other hand, the simulation-based method can produce a more accurate trimming line through an iterative strategy. However, due to time limitations and the lack of information in initial die design, it is still not widely accepted in industry. In this study, a new fast method to find a feasible trimming line is proposed. One-step FEM is used to analyze the flanging process because the desired final shape after flanging can be defined and most strain paths in flanging are simple. When one-step FEM is used, the main obstacle is the generation of the initial guess. A robust initial-guess generation method is developed to handle badly shaped meshes, very different mesh sizes, and undercut parts. The new method develops a 3D triangular mesh in a propagational way from the final mesh onto the drawing tool surface. In order to remedy mesh distortion during development, an energy minimization technique is utilized. The trimming line is extracted from the outer boundary after the one-step FEM simulation. This method offers many benefits, since the trimming line can be obtained in the early design stage. The developed method is successfully applied to complex industrial applications such as the flanging of fender and door outer panels.
A morphing-based scheme for large deformation analysis with stereo-DIC
NASA Astrophysics Data System (ADS)
Genovese, Katia; Sorgente, Donato
2018-05-01
A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization may prove very challenging, and possibly fail, when dealing with pairs of largely deformed images such as those obtained from two angled views of non-flat objects or from the temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a proper image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step-by-step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out by taking as a benchmark a standard reference-updating approach. Finally, the morphing scheme is extended to the most general case of the correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.
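The core idea, synthesizing intermediate images by scaling a full-field warp from 0 to 1, can be sketched as follows. This is a simplified 2D backward-warping example; the paper's actual pipeline obtains the displacement field from feature matching plus NURBS interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def intermediate_images(img, disp_x, disp_y, n_steps=5):
    """Generate synthetic frames along a morphing path by scaling the
    full-field displacement (disp_x, disp_y) from 0 to 1."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    frames = []
    for t in np.linspace(0.0, 1.0, n_steps):
        # backward warp: sample the source image at coordinates pulled
        # back by a fraction t of the total displacement
        coords = np.array([yy - t * disp_y, xx - t * disp_x])
        frames.append(map_coordinates(img, coords, order=1, mode='nearest'))
    return frames

# Toy check: a single bright pixel translated 3 px to the right
img = np.zeros((20, 20)); img[10, 10] = 1.0
frames = intermediate_images(img, disp_x=np.full((20, 20), 3.0),
                             disp_y=np.zeros((20, 20)))
print(frames[-1][10, 13])  # 1.0: the pixel has moved to column 13
```

Each consecutive pair of frames differs by only a fraction of the total deformation, so a subset-based matcher initialized on one frame tracks easily into the next, which is exactly what makes the final DIC refinement well-initialized.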
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical as well as experimental studies were performed to verify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance across all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for the various cases. The performance of LSQNONLIN depends strongly on the initial guess values, so that, compared with the Genetic Algorithm, it can estimate the sFADE parameters more accurately only when suitable initial guess values are chosen. To sum up, the Bees Algorithm was found to be a very simple, robust, and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
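For readers unfamiliar with it, a minimal Bees Algorithm looks roughly like this. It is a generic sketch on a toy objective with made-up parameter values, not the study's sFADE estimation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def bees_algorithm(f, lo, hi, n_scouts=20, n_best=5, n_recruits=10,
                   patch=0.5, iters=100):
    """Minimal Bees Algorithm sketch: scout bees search randomly; the best
    sites are exploited by recruited bees in a shrinking neighbourhood,
    while the remaining scouts keep exploring globally."""
    dim = len(lo)
    sites = rng.uniform(lo, hi, (n_scouts, dim))
    for _ in range(iters):
        sites = sites[np.argsort([f(s) for s in sites])]
        new_sites = []
        for s in sites[:n_best]:            # local search around best sites
            cand = np.clip(s + rng.uniform(-patch, patch, (n_recruits, dim)),
                           lo, hi)
            new_sites.append(min(list(cand) + [s], key=f))  # keep site's best
        # remaining scouts are re-seeded randomly (global exploration)
        new_sites.extend(rng.uniform(lo, hi, (n_scouts - n_best, dim)))
        sites = np.array(new_sites)
        patch *= 0.98                        # shrink the neighbourhoods
    return min(sites, key=f)

# Toy 2-parameter estimation: recover the minimizer of a quadratic misfit
best = bees_algorithm(lambda x: np.sum((x - 1.2)**2),
                      np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(best)  # close to [1.2, 1.2]
```

The random re-seeding of scouts is what gives the method its insensitivity to the starting point, in contrast to the initial-guess dependence reported for LSQNONLIN above.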
NASA Astrophysics Data System (ADS)
Wong, Jaime G.; Rosi, Giuseppe A.; Rouhi, Amirreza; Rival, David E.
2017-10-01
Particle tracking velocimetry (PTV) produces high-quality temporal information that is often neglected when computing spatial gradients. A method is presented here to utilize this temporal information in order to improve the estimation of spatial gradients for spatially unstructured Lagrangian data sets. Starting with an initial guess, this method penalizes any gradient estimate where the substantial derivative of vorticity along a pathline is not equal to the local vortex stretching/tilting. Furthermore, given an initial guess, this method can proceed on an individual pathline without any further reference to neighbouring pathlines. The equivalence of the substantial derivative and vortex stretching/tilting is based on the vorticity transport equation, where viscous diffusion is neglected. By minimizing the residual of the vorticity-transport equation, the proposed method is first tested to reduce error and noise on a synthetic Taylor-Green vortex field dissipating in time. Furthermore, when the proposed method is applied to high-density experimental data collected with `Shake-the-Box' PTV, noise within the spatial gradients is significantly reduced. In the particular test case investigated here of an accelerating circular plate captured during a single run, the method acts to delineate the shear layer and vortex core, as well as resolve the Kelvin-Helmholtz instabilities, which were previously unidentifiable without the use of ensemble averaging. The proposed method shows promise for improving PTV measurements that require robust spatial gradients while retaining the unstructured Lagrangian perspective.
The neural encoding of guesses in the human brain.
Bode, Stefan; Bogler, Carsten; Soon, Chun Siong; Haynes, John-Dylan
2012-01-16
Human perception depends heavily on the quality of sensory information. When objects are hard to see, we often believe ourselves to be purely guessing. Here we investigated whether such guesses use brain networks involved in perceptual decision making or independent networks. We used a combination of fMRI and pattern classification to test how visibility affects the signals that determine choices. We found that decisions regarding clearly visible objects are predicted by signals in sensory brain regions, whereas different regions in parietal cortex became predictive when subjects were shown invisible objects and believed themselves to be purely guessing. This parietal network overlaps substantially with regions that have previously been shown to encode free decisions. Thus, the brain might use a dedicated network for determining choices when insufficient sensory information is available. Copyright © 2011 Elsevier Inc. All rights reserved.
Priming guesses on a forced-recall test.
Gibson, Janet M; Meade, Michelle L
2004-07-01
The forced-recall paradigm requires participants to fill all spaces on the memory test even if they cannot remember all the list words. In the present study, the authors used that paradigm to examine the influence of implicit memory on guessing--when participants fill remaining spaces after they cannot remember list items. They measured explicit memory as the percentage of targets that participants designated as remembered from the list and implicit memory as the percentage of targets they wrote but did not designate as remembered (beyond chance level). The authors examined implicit memory on guessing with forced recall (Experiment 1), forced cued recall with younger and older adults (Experiment 2), and forced free and cued recall under a depth-of-processing manipulation (Experiment 3). They conclude that implicit memory influences guesses of targets in the forced-recall paradigm.
Incorrect predictions reduce switch costs.
Kleinsorge, Thomas; Scheil, Juliane
2015-07-01
In three experiments, we combined two sources of conflict within a modified task-switching procedure. The first source of conflict was the one inherent in any task switching situation, namely the conflict between a task set activated by the recent performance of another task and the task set needed to perform the actually relevant task. The second source of conflict was induced by requiring participants to guess aspects of the upcoming task (Exps. 1 & 2: task identity; Exp. 3: position of task precue). In case of an incorrect guess, a conflict accrues between the representation of the guessed task and the actually relevant task. In Experiments 1 and 2, incorrect guesses led to an overall increase of reaction times and error rates, but they reduced task switch costs compared to conditions in which participants predicted the correct task. In Experiment 3, incorrect guesses resulted in faster performance overall and to a selective decrease of reaction times in task switch trials when the cue-target interval was long. We interpret these findings in terms of an enhanced level of controlled processing induced by a combination of two sources of conflict converging upon the same target of cognitive control. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Y.; Xie, D.; Yan, G.
Accurate knowledge of the potential energy surface (PES) and the spectroscopic properties of carbon dioxide plays an important role in understanding the greenhouse effect. The potential energy surface for the electronic ground state of CO₂ is refined by means of a two-step variational procedure using the exact rovibrational Hamiltonian in bond length-bond angle coordinates. In the refinement, the observed rovibrational energy levels for J = 0-4 below 16,000 cm⁻¹, obtained from the effective spectroscopic constants of CO₂ given by Rothman et al. (J Quant Spectrosc Radiat Transfer 1992, 48, 537) in the HITRAN database, are used as the input data points. The accurate ab initio force constants of Martin et al. (Chem Phys Lett 1993, 205, 535) are taken as the initial guess for the potential. The root-mean-square error of this fit to the 431 observed rovibrational energy levels is 0.05 cm⁻¹. With the optimized potential energy surface, the authors also calculate the rovibrational energy levels of ¹³C¹⁶O₂ and ¹²C¹⁸O₂. The results are in good agreement with experimental data.
A finite element based method for solution of optimal control problems
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.
1989-01-01
A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables that are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation. Instead, the only quantities whose time derivatives appear therein are virtual states and virtual costates. Also noteworthy among characteristics of the finite element formulation is the fact that in the algebraic equations which contain costates, they appear linearly. Thus, the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented herein for an elementary trajectory optimization problem which show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.
Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł
2007-04-21
A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.
NASA Astrophysics Data System (ADS)
Mazrou, H.; Bezoubiri, F.
2018-07-01
In this work, a new program developed in the MATLAB environment and supported by the Bayesian software WinBUGS was combined with the traditional unfolding codes MAXED and GRAVEL to evaluate a neutron spectrum from Bonner sphere counts measured around a shielded 241AmBe-based neutron irradiator located at the Secondary Standards Dosimetry Laboratory (SSDL) at CRNA. In the first step, the results obtained by the standalone Bayesian program, using a parametric neutron spectrum model based on a linear superposition of three components, namely a thermal Maxwellian distribution, an epithermal (1/E) behavior, and Watt fission and evaporation models to represent the fast component, were compared with those issued from MAXED and GRAVEL assuming a Monte Carlo default spectrum. Through the selection of new upper limits for some free parameters of both models, taking into account the physical characteristics of the irradiation source, good agreement was obtained for the investigated integral quantities, i.e., fluence rate and ambient dose equivalent rate, compared with the MAXED and GRAVEL results. The difference was generally below 4% for the investigated parameters, suggesting the reliability of the proposed models. In the second step, the Bayesian results obtained from the previous calculations were used as initial guess spectra for the traditional unfolding codes MAXED and GRAVEL to derive the solution spectra. Here again the results were in very good agreement, confirming the stability of the Bayesian solution.
Iterative discrete ordinates solution of the equation for surface-reflected radiance
NASA Astrophysics Data System (ADS)
Radkevich, Alexander
2017-11-01
This paper presents a new method of numerical solution of the integral equation for the radiance reflected from an anisotropic surface. The equation relates the radiance at the surface level to the BRDF and to solutions of the standard radiative transfer problems for a slab with no reflection on its surfaces. It is also shown that the kernel of the equation satisfies the condition for the existence of a unique solution and for the convergence of the successive approximations to that solution. The developed method features two basic steps: discretization on a 2D quadrature, and solving the resulting system of algebraic equations with the successive over-relaxation method based on the Gauss-Seidel iterative process. The numerical examples presented show good agreement between the surface-reflected radiance obtained with DISORT and with the proposed method. An analysis of the contributions of the direct and diffuse (but not yet reflected) parts of the downward radiance to the total solution is performed. Together, they represent a very good initial guess for the iterative process, which ensures fast convergence. Numerical evidence is given that the fastest convergence occurs with a relaxation parameter of 1 (no relaxation). An integral equation for the BRDF is derived as the inversion of the original equation, and its potential for BRDF retrievals is analyzed. The approach is found not to be viable, as the BRDF equation appears to be an ill-posed problem and requires knowledge of the surface-reflected radiance over the entire domain of both Sun and viewing zenith angles.
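The Gauss-Seidel-based successive over-relaxation step described above can be sketched generically. The system below is a toy diagonally dominant one, not the discretized radiance equation; omega = 1 reproduces the no-relaxation case the paper found fastest:

```python
import numpy as np

def sor(A, b, omega=1.0, x0=None, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation built on the Gauss-Seidel sweep.
    omega = 1 recovers plain Gauss-Seidel; a good initial guess x0
    (e.g., the not-yet-reflected downward radiance) cuts the iteration
    count further."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated components x[:i] (Gauss-Seidel ordering)
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

# Diagonally dominant toy system with exact solution [1, 1, 1]
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])
x, iters = sor(A, b)
print(x, iters)
```

Passing a better x0 (here, any vector close to [1, 1, 1]) reduces `iters`, which is the mechanism behind the fast convergence reported for the direct-plus-diffuse initial guess.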
Explicitly computing geodetic coordinates from Cartesian coordinates
NASA Astrophysics Data System (ADS)
Zeng, Huaien
2013-04-01
This paper presents a new form of quartic equation based on Lagrange's extremum law and a Groebner basis, under the constraint that the geodetic height is the shortest distance between a given point and the reference ellipsoid. A very explicit and concise formula for the quartic equation, obtained along Ferrari's lines, is found, which avoids the need for the good starting guess that iterative methods require. A new explicit algorithm is then proposed to compute geodetic coordinates from Cartesian coordinates. The convergence region of the algorithm is investigated and the corresponding correct solution is given. Lastly, the algorithm is validated with numerical experiments.
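For contrast, the conventional iterative route that the closed-form quartic avoids looks like the following fixed-point sketch (standard WGS84 constants; this is the classic baseline approach, not the paper's algorithm):

```python
import math

# WGS84 ellipsoid parameters
a, f = 6378137.0, 1 / 298.257223563
e2 = f * (2 - f)  # first eccentricity squared

def cartesian_from_geodetic(lat, lon, h):
    """Forward transformation (exact, closed form)."""
    N = a / math.sqrt(1 - e2 * math.sin(lat)**2)
    return ((N + h) * math.cos(lat) * math.cos(lon),
            (N + h) * math.cos(lat) * math.sin(lon),
            (N * (1 - e2) + h) * math.sin(lat))

def geodetic_from_cartesian(x, y, z, iters=10):
    """Classic fixed-point iteration for geodetic latitude and height;
    this is the initial-guess-dependent loop that explicit (quartic-based)
    methods eliminate."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - e2))      # starting guess
    h = 0.0
    for _ in range(iters):
        N = a / math.sqrt(1 - e2 * math.sin(lat)**2)
        h = p / math.cos(lat) - N
        lat = math.atan2(z, p * (1 - e2 * N / (N + h)))
    return lat, lon, h

# Round-trip check at lat 45°, lon 30°, h 1000 m
x, y, z = cartesian_from_geodetic(math.radians(45), math.radians(30), 1000.0)
lat, lon, h = geodetic_from_cartesian(x, y, z)
print(math.degrees(lat), math.degrees(lon), h)  # recovers 45, 30, ~1000
```

An explicit method instead solves a single quartic in closed form, so it has no convergence region to worry about at the cost of a more involved derivation.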
Williams, Naomi J; Hill, Elizabeth M; Ng, Siaw Yein; Martin, Richard M; Metcalfe, Chris; Donovan, Jenny L; Evans, Simon; Hughes, Laura J; Davies, Charlotte F; Hamdy, Freddie C; Neal, David E; Turner, Emma L
2015-01-23
In cancer screening trials where the primary outcome is target cancer-specific mortality, the unbiased determination of underlying cause of death (UCD) is crucial. To minimise bias, the UCD should be independently verified by expert reviewers, blinded to death certificate data and trial arm. We investigated whether standardising the information submitted for UCD assignment in a population-based randomised controlled trial of prostate-specific antigen (PSA) testing for prostate cancer reduced the reviewers' ability to correctly guess the trial arm. Over 550 General Practitioner (GP) practices (>415,000 men aged 50-69 years) were cluster-randomised to PSA testing (intervention arm) or the National Health Service (NHS) prostate cancer risk management programme (control arm) between 2001 and 2007. Assignment of UCD was by independent reviews of researcher-written clinical vignettes that masked trial arm and death certificate information. In an initial phase after the process began, we analysed whether the reviewers could correctly identify the trial arm from the vignettes, and the reasons for their choices. This feedback led to further standardisation of information (second phase), after which we re-assessed the extent of correct identification of trial arm. 1099 assessments of 509 vignettes were completed by January 2014. In the initial phase (n = 510 assessments), reviewers were unsure of trial arm in 33% of intervention and 30% of control arm assessments and were influenced by symptoms at diagnosis, PSA test result and study-specific criteria. In the second phase (n = 589), the respective proportions of uncertainty were 45% and 48%. The percentage of cases in which reviewers were unable to determine the trial arm was greater following the standardisation of information provided in the vignettes. The chances of a correct guess and an incorrect guess were equalised in each arm, following further standardisation.
It is possible to mask trial arm from cause-of-death reviewers by using their feedback to standardise the information submitted to them. ISRCTN92187251.
NASA Technical Reports Server (NTRS)
Savage, M.; Mackulin, M. J.; Coe, H. H.; Coy, J. J.
1991-01-01
Optimization procedures allow one to design a spur gear reduction for maximum life and other end use criteria. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial guess values. The optimization algorithm is described, and the models for gear life and performance are presented. The algorithm is compact and has been programmed for execution on a desktop computer. Two examples are presented to illustrate the method and its application.
Arc-Length Continuation and Multi-Grid Techniques for Nonlinear Elliptic Eigenvalue Problems,
1981-03-19
size of the finest grid. We use the (AM) adaptive version of the Cycle C algorithm, unless otherwise stated. The first modified algorithm is the...by computing the derivative, uk, at a known solution and use it to get a better initial guess for the next value of X in a predictor-corrector fashion...factorization of the Jacobian Gu computed already in the Newton step. Using such a predictor-corrector method will often allow us to take a much bigger step
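The predictor-corrector continuation described in the excerpt can be sketched on a scalar model problem. The code below is only an illustration under invented assumptions: G(u, lam) = u - lam*e^u is a one-dimensional stand-in for the report's nonlinear elliptic eigenvalue problems, the tangent from implicit differentiation plays the role of the derivative computed at a known solution, and Newton's method is the corrector.

```python
import numpy as np

# 1-D stand-in for a nonlinear eigenvalue problem: G(u, lam) = u - lam*e^u = 0
# (made up for illustration; it has a fold at lam = 1/e, like Bratu's problem).
G  = lambda u, lam: u - lam * np.exp(u)
Gu = lambda u, lam: 1.0 - lam * np.exp(u)     # Jacobian of G w.r.t. u
Gl = lambda u, lam: -np.exp(u)                # derivative of G w.r.t. lam

def newton(u, lam, tol=1e-12):
    """Corrector: Newton iteration at fixed lam from the predicted guess."""
    while abs(G(u, lam)) > tol:
        u -= G(u, lam) / Gu(u, lam)
    return u

u, lam, dlam = 0.0, 0.0, 0.02
for _ in range(10):
    du_dlam = -Gl(u, lam) / Gu(u, lam)          # tangent by implicit differentiation
    u = newton(u + du_dlam * dlam, lam + dlam)  # predict, then correct
    lam += dlam

print(lam, u)   # branch traced to lam = 0.2; u solves u = lam*e^u there
```

The tangent predictor gives the corrector a good initial guess, so each Newton solve converges in a few iterations; past the fold one would switch to the arc-length parametrization the report's title refers to.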
Children's Developing Understanding of Mental Verbs: Remember, Know, and Guess.
ERIC Educational Resources Information Center
Johnson, Carl Nils; Wellman, Henry M.
1980-01-01
Preschoolers interpreted mental verbs with respect to their mental state in contrast to external state. These children were nonetheless ignorant of definitive distinctions between the mental verbs, completely confusing cases of remembering, knowing, and guessing. (Author/RH)
The role of guessing and boundaries on date estimation biases.
Lee, Peter James; Brown, Norman R
2004-08-01
This study investigates the causes of event-dating biases. Two hundred participants provided knowledge ratings and date estimates for 64 news events. Four independent groups dated the same events under different boundary constraints. Analysis across all responses showed that forward telescoping decreased with boundary age, concurring with the boundary-effects model. With guesses removed from the data set, backward telescoping was greatly reduced, but forward telescoping was unaffected by boundaries. This dissociation indicates that multiple factors (e.g., guessing and reconstructive strategies) are responsible for different dating biases and argues against a boundary explanation of forward telescoping.
Beyond semantic accuracy: preschoolers evaluate a speaker's reasons.
Koenig, Melissa A
2012-01-01
Children's sensitivity to the quality of epistemic reasons and their selective trust in the more reasonable of 2 informants was investigated in 2 experiments. Three-, 4-, and 5-year-old children (N = 90) were presented with speakers who stated different kinds of evidence for what they believed. Experiment 1 showed that children of all age groups appropriately judged looking, reliable testimony, and inference as better reasons for belief than pretense, guessing, and desiring. Experiment 2 showed that 3- and 4-year-old children preferred to seek and accept new information from a speaker who was previously judged to use the "best" way of thinking. The findings demonstrate that children distinguish certain good from bad reasons and prefer to learn from those who showcased good reasoning in the past. © 2012 The Author. Child Development © 2012 Society for Research in Child Development, Inc.
Peltier, Chad; Becker, Mark W
2017-05-01
Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which has been specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121-124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarms in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine the ability of prevalence level and knowledge gained during visual search to influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response, to test whether these sources of evidence are used to inform target-present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response, allowing them to make an informed, statistically driven guess about the target's presence.
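The idea that observers combine prevalence with the proportion of the display already inspected can be captured by a simple Bayesian sketch. This is an illustrative model, not the multiple decision model or the authors' analysis: it assumes that a present target is equally likely to be at any location, so failing to find it in an inspected fraction f of the display has likelihood 1 - f.

```python
def posterior_present(prevalence, fraction_inspected):
    """P(target present | nothing found after inspecting a fraction of the
    display), assuming a present target is equally likely to be anywhere."""
    missed = prevalence * (1 - fraction_inspected)   # present but not yet seen
    return missed / (missed + (1 - prevalence))      # Bayes' rule

print(posterior_present(0.9, 0.5))   # high prevalence: lean toward 'present'
print(posterior_present(0.1, 0.5))   # low prevalence: lean toward 'absent'
```

Under this toy model the rational guess rate rises with prevalence and falls with the fraction of the array inspected, which is the qualitative pattern the abstract reports.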
Using patients' narratives to reveal gender stereotypes among medical students.
Andersson, Jenny; Salander, Pär; Hamberg, Katarina
2013-07-01
Gender bias exists in patient treatment, and, like most people, health care providers harbor gender stereotypes. In this study, the authors examined the gender stereotypes that medical students hold about patients. In 2005, in Umeå, Sweden, the authors collected 81 narratives written by patients who had undergone cancer treatment; all information that might reveal the patients' gender was removed from the texts. Eighty-seven medical students read 40 or 41 narratives each, guessed the patient's gender, and explained their guess. The authors analyzed the students' explanations qualitatively and quantitatively to reveal the students' gender stereotypes and to determine whether those stereotypes had any predictive value for correctly guessing a patient's gender. The students' explanations contained 21 categories of justifications, 12 of which were significantly associated with the students guessing one gender or the other. Only three categories successfully predicted a correct identification of gender; two categories were more often associated with incorrect guesses. Medical students enter their training program with culturally shared stereotypes about male and female patients that could cause bias during their future careers as physicians. To prevent this, medical curricula must address gender stereotypes and their possible consequences. The impact of implicit stereotypes must be included in discussions about gender bias in health care.
Immediate Feedback Assessment Technique in a Chemistry Classroom
NASA Astrophysics Data System (ADS)
Taylor, Kate R.
The Immediate Feedback Assessment Technique, or IFAT, is a new testing system that turns a student's traditional multiple-choice testing into a chance for hands-on learning and provides teachers with an opportunity to obtain more information about a student's knowledge during testing. In the current study we wanted to know whether students given the second chance afforded by the IFAT system are guessing or using prior knowledge when making their second-chance choice. Additionally, while there has been some adoption of this testing system in non-science disciplines, we wanted to study whether the IFAT system would be well received among faculty in the sciences, more specifically chemistry faculty. By comparing the students' rate of success on the second chance afforded by the IFAT system with the statistical likelihood of guessing correctly, statistical analysis was used to determine whether we observed enough students earning the second-chance points to reject the hypothesis that students were randomly guessing. Our data analysis revealed that it is statistically highly unlikely that students were only guessing when the IFAT system was utilized. (It is important to note that while we can show that students answer correctly at a much higher rate than random guessing would predict, we can never truly know whether every individual student is reasoning rather than guessing.)
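The comparison described, observed second-chance success versus the chance rate, amounts to a one-sided binomial test. The numbers below are hypothetical, not the study's data: with four-option items, a wrong first choice leaves three options, so a random second guess succeeds with probability 1/3.

```python
from math import comb

def binom_p_value(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability of seeing at least
    k correct second choices if every second choice were a random guess."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: suppose 60 of 100 second chances were correct, while
# random guessing among the three remaining options succeeds with p = 1/3.
print(binom_p_value(60, 100, 1/3))   # far below 0.05 -> reject random guessing
```

A success rate near the chance rate (e.g. 34 of 100) would instead give a large p-value, so the test only flags performance well above 1/3.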
Absolute calibration for complex-geometry biomedical diffuse optical spectroscopy
NASA Astrophysics Data System (ADS)
Mastanduno, Michael A.; Jiang, Shudong; El-Ghussein, Fadi; diFlorio-Alexander, Roberta; Pogue, Brian W.; Paulsen, Keith D.
2013-03-01
We have presented methodology to calibrate data in NIRS/MRI imaging versus an absolute reference phantom and results in both phantoms and healthy volunteers. This method directly calibrates data to a diffusion-based model, takes advantage of patient specific geometry from MRI prior information, and generates an initial guess without the need for a large data set. This method of calibration allows for more accurate quantification of total hemoglobin, oxygen saturation, water content, scattering, and lipid concentration as compared with other, slope-based methods. We found the main source of error in the method to be derived from incorrect assignment of reference phantom optical properties rather than initial guess in reconstruction. We also present examples of phantom and breast images from a combined frequency domain and continuous wave MRI-coupled NIRS system. We were able to recover phantom data within 10% of expected contrast and within 10% of the actual value using this method and compare these results with slope-based calibration methods. Finally, we were able to use this technique to calibrate and reconstruct images from healthy volunteers. Representative images are shown and discussion is provided for comparison with existing literature. These methods work towards fully combining the synergistic attributes of MRI and NIRS for in-vivo imaging of breast cancer. Complete software and hardware integration in dual modality instruments is especially important due to the complexity of the technology and success will contribute to complex anatomical and molecular prognostic information that can be readily obtained in clinical use.
Guessing imagined and live chance events: adults behave like children with live events.
Robinson, E J; Pendle, J E C; Rowley, M G; Beck, S R; McColgan, K L T
2009-11-01
An established finding is that adults prefer to guess before rather than after a chance event has happened. This is interpreted in terms of aversion to guessing when relatively incompetent: After throwing, the fall could be known. Adults (N=71, mean age 18;11, N=28, mean age 48;0) showed this preference with imagined die-throwing as in the published studies. With live die-throwing, children (N=64, aged 6 and 8 years; N=50, aged 5 and 6 years) and 15-year-olds (N=93, 46) showed the opposite preference, as did 17 adults. Seventeen-year-olds (N=82) were more likely to prefer to guess after throwing with live rather than imagined die-throwing. Reliance on imagined situations in the literature on decision-making under uncertainty ignores the possibility that adults imagine inaccurately how they would really feel: After a real die has been thrown, adults, like children, may feel there is less ambiguity about the outcome.
Implicit recognition based on lateralized perceptual fluency.
Vargas, Iliana M; Voss, Joel L; Paller, Ken A
2012-02-06
In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.
Andrich, David; Marais, Ida; Humphry, Stephen Mark
2015-01-01
Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The consequence is that the proficiencies of the more proficient students are increased relative to those of the less proficient. Not controlling the guessing bias underestimates the progress of students across 7 years of schooling with important educational implications. PMID:29795871
Multiple-choice examinations: adopting an evidence-based approach to exam technique.
Hammond, E J; McIndoe, A K; Sansome, A J; Spargo, P M
1998-11-01
Negatively marked multiple-choice questions (MCQs) are part of the assessment process in both the Primary and Final examinations for the fellowship of the Royal College of Anaesthetists. It is said that candidates who guess will lose marks in the MCQ paper. We studied candidates attending a pre-examination revision course and have shown that an evaluation of examination technique is an important part of an individual's preparation. All candidates benefited substantially from backing their educated guesses while only 3 out of 27 lost marks from backing their wild guesses. Failure to appreciate the relationship between knowledge and technique may significantly affect a candidate's performance in the examination.
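The arithmetic behind "backing educated guesses" is simple expected value. The scoring scheme below (+1 for a correct response, -1 for an incorrect one) is a generic negative-marking assumption for illustration, not necessarily the Royal College's exact scheme.

```python
def expected_mark(p):
    """Expected mark per item under +1 right / -1 wrong scoring, for a
    guess that is correct with probability p."""
    return p * 1 + (1 - p) * -1

print(expected_mark(0.5))    # 0.0 -> a wild 50/50 guess breaks even on average
print(expected_mark(0.75))   # 0.5 -> an educated guess gains marks on average
```

Any guess with p above the break-even point gains marks in expectation, which is why the candidates who backed educated guesses benefited while only wild guessing risked losing marks.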
ERIC Educational Resources Information Center
Sezin, Fatin
2009-01-01
It is instructive and interesting to find hidden numbers by using different positional numeration systems. Most of the present guessing techniques use the binary system expressed as less-than, greater-than or present-absent type information. This article describes how, by employing four cards having integers 1-64 written in different colours, one…
Orbital Battleship: A Guessing Game to Reinforce Atomic Structure
ERIC Educational Resources Information Center
Kurushkin, Mikhail; Mikhaylenko, Maria
2016-01-01
A competitive educational guessing game "Orbital Battleship" which reinforces Madelung's and Hund's rules, values of quantum numbers, and understanding of periodicity was designed. The game develops strategic thinking, is not time-consuming, requires minimal preparation and supervision, and is an efficient and fun alternative to more…
The effect of unsuccessful retrieval on children's subsequent learning.
Carneiro, Paula; Lapa, Ana; Finn, Bridgid
2018-02-01
It is well known that successful retrieval enhances adults' subsequent learning by promoting long-term retention. Recent research has also found benefits from unsuccessful retrieval, but the evidence is not as clear-cut when the participants are children. In this study, we employed a methodology based on guessing, the weak-associate paradigm, to test whether children can learn from generated errors or whether errors are harmful for learning. We tested second- and third-grade children in Experiment 1 and preschool and kindergarten children in Experiment 2. With slight differences in the method, in both experiments children heard the experimenter say one word (cue) and were asked to guess an associated word (guess condition) or to listen to the corresponding target word (study condition), followed by corrective feedback in both conditions. At the end of the guessing phase, the children undertook a cued-recall task in which they were presented with each cue and asked to say the correct target. Together, the results showed that older children, those in kindergarten and early elementary school, benefited from unsuccessful retrieval. Older children showed more correct responses and fewer errors in the guess condition. In contrast, preschoolers produced similar levels of correct and error responses in the two conditions. In conclusion, generating errors seems to be beneficial for the future learning of children older than 5 years. Copyright © 2017 Elsevier Inc. All rights reserved.
Minimal gain marching schemes: searching for unstable steady-states with unsteady solvers
NASA Astrophysics Data System (ADS)
de S. Teixeira, Renan; S. de B. Alves, Leonardo
2017-12-01
Reference solutions are important in several applications. They are used as base states in linear stability analyses as well as initial conditions and reference states for sponge zones in numerical simulations, just to name a few examples. Their accuracy is also paramount in both fields, leading to more reliable analyses and efficient simulations, respectively. Hence, steady-states usually make the best reference solutions. Unfortunately, standard marching schemes utilized for accurate unsteady simulations almost never reach steady-states of unstable flows. Steady governing equations could be solved instead, by employing Newton-type methods often coupled with continuation techniques. However, such iterative approaches do require large computational resources and very good initial guesses to converge. These difficulties motivated the development of a technique known as selective frequency damping (SFD) (Åkervik et al. in Phys Fluids 18(6):068102, 2006). It adds a source term to the unsteady governing equations that filters out the unstable frequencies, allowing a steady-state to be reached. This approach does not require a good initial condition and works well for self-excited flows, where a single nonzero excitation frequency is selected by either absolute or global instability mechanisms. On the other hand, it seems unable to damp stationary disturbances. Furthermore, flows with a broad unstable frequency spectrum might require the use of multiple filters, which delays convergence significantly. Both scenarios appear in convectively, absolutely or globally unstable flows. An alternative approach is proposed in the present paper. It modifies the coefficients of a marching scheme in such a way that makes the absolute value of its linear gain smaller than one within the required unstable frequency spectra, allowing the respective disturbance amplitudes to decay given enough time. These ideas are applied here to implicit multi-step schemes. 
A few chosen test cases show that they enable convergence toward solutions that are unstable to stationary and oscillatory disturbances, with either a single or multiple frequency content. Finally, comparisons with SFD are also performed, showing significant reduction in computer cost for complex flows by using the implicit multi-step MGM schemes.
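The selective frequency damping (SFD) baseline that the paper improves upon can be demonstrated on a toy oscillator. Everything below is a made-up model problem, not the paper's flow solver: a Stuart-Landau-type equation whose steady state z = 0 is an unstable focus, with illustrative values for the SFD gain chi and filter width delta.

```python
import numpy as np

# Toy oscillatory instability: dz/dt = (sigma + i*omega) z - |z|^2 z.
# The steady state z = 0 is unstable, so plain marching misses it and
# settles on a limit cycle of radius ~sqrt(sigma) instead.
sigma, omega = 0.1, 1.0            # growth rate and frequency (illustrative)
chi, delta = 0.5, 5.0              # SFD gain and filter width (illustrative)

def rhs(z):
    return (sigma + 1j * omega) * z - abs(z)**2 * z

dt, steps = 0.01, 40000
z = 0.5 + 0.0j
for _ in range(steps):             # plain forward Euler: misses the fixed point
    z = z + dt * rhs(z)
plain = abs(z)                     # ends up near the limit cycle

z, zbar = 0.5 + 0.0j, 0.0 + 0.0j
for _ in range(steps):             # forward Euler with the SFD source term
    z, zbar = (z + dt * (rhs(z) - chi * (z - zbar)),   # filtered feedback
               zbar + dt * (z - zbar) / delta)         # low-pass filter state
print(plain, abs(z))   # ~0.32 (limit cycle) vs ~0 (unstable steady state found)
```

The feedback term -chi*(z - zbar) vanishes at any steady state, so SFD does not move the fixed point; it only damps the oscillatory instability, which is exactly the behavior (and the limitation for stationary disturbances) described in the abstract.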
Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies
ERIC Educational Resources Information Center
Wainer, Howard
2011-01-01
"Uneducated Guesses" challenges everything our policymakers thought they knew about education and education reform, from how to close the achievement gap in public schools to admission standards for top universities. In this explosive book, Howard Wainer uses statistical evidence to show why some of the most widely held beliefs in…
ERIC Educational Resources Information Center
Housen, Monica
2017-01-01
In this article, Monica Housen describes how she uses Guess the Number of . . . , a game that develops estimation skills and persistence, to provide a fun and meaningful experience for her high school students. Each week she displays objects in a clear plastic container, like those for pretzels sold in bulk. Students enter a…
σ-SCF: A Direct Energy-Targeting Method to Mean-Field Excited States
NASA Astrophysics Data System (ADS)
Ye, Hongzhou; Welborn, Matthew; Ricke, Nathan; van Voorhis, Troy
The mean-field solutions of electronic excited states are much less accessible than ground state (e.g. Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF, tend to fall into the lowest solution consistent with a given symmetry - a problem known as "variational collapse". In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states - ground or excited - are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). This work was funded by a Grant from NSF (CHE-1464804).
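The energy-targeting idea can be illustrated on a plain Hermitian eigenproblem: the eigenvector of H whose eigenvalue lies closest to a target omega is the minimizer of the variance <(H - omega)^2>. The sketch below is a linear-algebra toy, not σ-SCF itself (which optimizes mean-field wavefunctions); the matrix and its spectrum are invented, and power iteration on a shifted matrix stands in for the variance minimization.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
H = Q @ np.diag([-2.0, -0.5, 0.3, 1.1, 2.4, 4.0]) @ Q.T   # invented spectrum

def nearest_state_energy(H, omega, iters=5000):
    """Energy of the eigenstate of H closest to the target omega, found by
    minimizing the 'variance' <(H - omega)^2> over unit vectors (here via
    power iteration on a shifted matrix, which has the same minimizer)."""
    I = np.eye(len(H))
    M = (H - omega * I) @ (H - omega * I)     # variance operator
    s = np.linalg.norm(M, 2) + 1.0            # shift so s*I - M is positive
    c = rng.normal(size=len(H))
    c /= np.linalg.norm(c)
    for _ in range(iters):
        c = (s * I - M) @ c                   # dominant mode of s*I - M is
        c /= np.linalg.norm(c)                # the minimum-variance state
    return c @ H @ c

print(nearest_state_energy(H, omega=1.0))   # ~1.1, the eigenvalue nearest 1.0
```

Changing the single parameter omega retargets the optimization to a different state, which mirrors the "specify a guess of the energy" feature described above.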
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1993-01-01
This report presents the formulation of the optimal low- and medium-thrust orbit transfer control problem and methods for numerical solution of the problem. The problem formulation is for final mass maximization and allows for second-harmonic oblateness, atmospheric drag, and three-dimensional, non-coplanar, non-aligned elliptic terminal orbits. We set up some examples to demonstrate the ability of two indirect methods to solve the resulting TPBVPs. The methods demonstrated are the multiple-point shooting method as formulated in H. J. Oberle's subroutine BOUNDSCO, and the minimizing boundary-condition method (MBCM). We find that although both methods can converge to solutions, there are trade-offs to using either method. BOUNDSCO has very poor convergence for guesses that do not exhibit the correct switching structure. MBCM, however, converges for a wider range of guesses. On the other hand, BOUNDSCO's multi-point structure allows more freedom in guesses by increasing the number of node points, as opposed to guessing only the initial state in MBCM. Finally, we note an additional drawback of BOUNDSCO: the routine does not supply switching-function polarity information to the user's routines, but only the location of a preset number of switching points.
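The single-shooting idea underlying both indirect methods can be sketched on a tiny two-point boundary value problem. The BVP below is invented for illustration (the optimal-transfer TPBVPs in the report are far more sensitive, which is why multiple shooting and switching-structure information matter): guess the unknown initial slope, integrate the initial value problem, and drive the terminal miss to zero with a secant iteration.

```python
import numpy as np

# Invented linear BVP for illustration: y'' = -y + x, y(0) = 0, y(1) = 1.
# Its exact solution is y = x, so the unknown initial slope is y'(0) = 1.
def rhs(x, s):
    y, yp = s
    return np.array([yp, -y + x])

def shoot(slope, n=200):
    """Integrate the IVP from x = 0 with y(0) = 0, y'(0) = slope using RK4,
    and return the terminal value y(1)."""
    x, h, s = 0.0, 1.0 / n, np.array([0.0, slope])
    for _ in range(n):
        k1 = rhs(x, s)
        k2 = rhs(x + h / 2, s + h / 2 * k1)
        k3 = rhs(x + h / 2, s + h / 2 * k2)
        k4 = rhs(x + h, s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return s[0]

# Secant iteration on the miss distance y(1) - 1 over the guessed slope.
a, b = 0.0, 2.0
fa, fb = shoot(a) - 1.0, shoot(b) - 1.0
for _ in range(20):
    if abs(fb) < 1e-12:
        break
    a, fa, b = b, fb, b - fb * (b - a) / (fb - fa)
    fb = shoot(b) - 1.0

print(b)   # converged slope, ~1 (the exact solution is y = x)
```

Multiple shooting splits [0, 1] into segments with a guessed state at each node plus continuity conditions, which is the extra guess freedom the abstract credits to BOUNDSCO.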
Subjective qualities of memories associated with the picture superiority effect in schizophrenia.
Huron, Caroline; Danion, Jean-Marie; Rizzo, Lydia; Killofer, Valérie; Damiens, Annabelle
2003-02-01
Patients with schizophrenia (n = 24) matched with 24 normal subjects were presented with both words and pictures. On a recognition memory task, they were asked to give remember, know, or guess responses to items that were recognized on the basis of conscious recollection, familiarity, or guessing, respectively. Compared with normal subjects, patients exhibited a lower picture superiority effect selectively related to remember responses. Unlike normal subjects, they did not exhibit any word superiority effect in relation to guess responses; this explains why the overall picture superiority effect appeared to be intact. These results emphasize the need to take into account the subjective states of awareness when analyzing memory impairments in schizophrenia.
Kuhlmann, Beatrice G; Touron, Dayna R
2011-03-01
While episodic memory declines with age, metacognitive monitoring is spared. The current study explored whether older adults can use their preserved metacognitive knowledge to make source guesses in the absence of source memory. Through repetition, words from two sources (italic vs. bold text type) differed in memorability. There were no age differences in monitoring this difference despite an age difference in memory. Older adults used their metacognitive knowledge to make source guesses but showed a deficit in varying their source guessing based on word recognition. Therefore, older adults may not fully benefit from metacognitive knowledge about sources in source monitoring. (c) 2011 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high-spectral-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using first-guess information is developed to retrieve simultaneously temperature, water vapor, and ozone atmospheric profiles. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres, including rare events.
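The compression/de-noising and pattern-recognition first guess can be sketched with ordinary PCA. All data below are synthetic stand-ins (random low-rank "spectra" rather than IASI radiances), and a nearest-neighbour lookup in principal-component space plays the role of searching the climatological data set.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for a training set of spectra: n samples, p channels,
# generated from k latent patterns so the data are genuinely low-rank.
n, p, k = 500, 100, 5
basis = rng.normal(size=(k, p))
train = rng.normal(size=(n, k)) @ basis            # noise-free training spectra

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = Vt[:k]                                       # leading k principal axes

def compress_denoise(obs):
    """Project a noisy observation onto the leading PCs (compression and
    de-noising) and return the reconstruction and its PC scores."""
    scores = (obs - mean) @ pcs.T
    return mean + scores @ pcs, scores

def first_guess(obs_scores):
    """Nearest training sample in PC space: the index of the first-guess
    profile in the 'climatological' data set."""
    train_scores = (train - mean) @ pcs.T
    return np.argmin(np.linalg.norm(train_scores - obs_scores, axis=1))

truth = train[42]
noisy = truth + 0.3 * rng.normal(size=p)
recon, scores = compress_denoise(noisy)
print(first_guess(scores))                         # index of the matching profile
print(np.linalg.norm(recon - truth) < np.linalg.norm(noisy - truth))  # True
```

Projection keeps only the noise component lying in the k-dimensional signal subspace, which is why the reconstruction is closer to the truth than the raw noisy observation.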
Does Incorrect Guessing Impair Fact Learning?
ERIC Educational Resources Information Center
Kang, Sean H. K.; Pashler, Harold; Cepeda, Nicholas J.; Rohrer, Doug; Carpenter, Shana K.; Mozer, Michael C.
2011-01-01
Taking a test has been shown to produce enhanced retention of the retrieved information. On tests, however, students often encounter questions the answers for which they are unsure. Should they guess anyway, even if they are likely to answer incorrectly? Or are errors engrained, impairing subsequent learning of the correct answer? We sought to…
A New Procedure for Detection of Students' Rapid Guessing Responses Using Response Time
ERIC Educational Resources Information Center
Guo, Hongwen; Rios, Joseph A.; Haberman, Shelby; Liu, Ou Lydia; Wang, Jing; Paek, Insu
2016-01-01
Unmotivated test takers using rapid guessing in item responses can affect validity studies and teacher and institution performance evaluation negatively, making it critical to identify these test takers. The authors propose a new nonparametric method for finding response-time thresholds for flagging item responses that result from rapid-guessing…
ERIC Educational Resources Information Center
Holster, Trevor A.; Lake, J.
2016-01-01
Stewart questioned Beglar's use of Rasch analysis of the Vocabulary Size Test (VST) and advocated the use of 3-parameter logistic item response theory (3PLIRT) on the basis that it models a non-zero lower asymptote for items, often called a "guessing" parameter. In support of this theory, Stewart presented fit statistics derived from…
Analyzing Algebraic Thinking Using "Guess My Number" Problems
ERIC Educational Resources Information Center
Patton, Barba; De Los Santos, Estella
2012-01-01
The purpose of this study was to assess student knowledge of numeric, visual and algebraic representations. A definite gap between arithmetic and algebra has been documented in the research. The researchers' goal was to identify a link between the two. Using four "Guess My Number" problems, seventh and tenth grade students were asked to write…
An Effectiveness Index and Profile for Instructional Media.
ERIC Educational Resources Information Center
Bond, Jack H.
A scale was developed for judging the relative value of various media in teaching children. Posttest scores were partitioned into several components: error, prior knowledge, guessing, and gain from the learning exercise. By estimating the amounts of prior knowledge, guessing, and error, and then subtracting these from the total score, an index of…
The Effect of Testing Condition on Word Guessing in Elementary School Children
ERIC Educational Resources Information Center
Mannamaa, Mairi; Kikas, Eve; Raidvee, Aire
2008-01-01
Elementary school children's word guessing is studied, and the results from individual and collective testing conditions are compared. The participants are 764 students from the second, third, and fourth grades (ages 8-11, 541 students from mainstream regular classes and 223 students with learning disabilities). About half of these students are…
A Two-Parameter Latent Trait Model. Methodology Project.
ERIC Educational Resources Information Center
Choppin, Bruce
On well-constructed multiple-choice tests, the most serious threat to measurement is not variation in item discrimination, but the guessing behavior that may be adopted by some students. Ways of ameliorating the effects of guessing are discussed, especially for problems in latent trait models. A new item response model, including an item parameter…
Shape optimization of self-avoiding curves
NASA Astrophysics Data System (ADS)
Walker, Shawn W.
2016-04-01
This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.
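The combined objective (bending energy plus a penalized proximity term) can be mocked up for a discrete closed curve. This is a crude sketch, not the paper's shape-calculus, finite-element, or SQP machinery: second differences stand in for the bending energy, a quadratic hinge penalty for the proximity energy, and finite-difference gradient descent with backtracking for the optimizer; all parameters are invented.

```python
import numpy as np

n, d0, w = 12, 0.8, 10.0   # curve points, proximity threshold, penalty weight

def energy(X):
    """Discrete bending energy plus a hinge penalty that activates whenever
    two non-neighbouring points come closer than d0 (self-avoidance)."""
    bend = np.sum((np.roll(X, -1, axis=0) - 2 * X + np.roll(X, 1, axis=0))**2)
    prox = 0.0
    for i in range(n):
        for j in range(i + 2, n):
            if not (i == 0 and j == n - 1):        # (0, n-1) are neighbours
                gap = d0 - np.linalg.norm(X[i] - X[j])
                prox += max(gap, 0.0)**2
    return bend + w * prox

def grad(X, h=1e-6):
    """Finite-difference gradient (adequate for a small illustrative curve)."""
    g, e = np.zeros_like(X), energy(X)
    for idx in np.ndindex(*X.shape):
        Xp = X.copy()
        Xp[idx] += h
        g[idx] = (energy(Xp) - e) / h
    return g

rng = np.random.default_rng(2)
X = rng.normal(size=(n, 3))                        # random initial curve
e0 = energy(X)
for _ in range(60):                                # descent with backtracking
    g, t = grad(X), 0.1
    for _ in range(30):
        if energy(X - t * g) < energy(X):
            X = X - t * g
            break
        t *= 0.5

print(energy(X) < e0)   # True: the combined energy decreases
```

The penalty weight w plays the role of the paper's penalization parameter: larger values enforce self-avoidance more strictly at the cost of a stiffer optimization problem.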
Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P
2017-03-01
In human motion analysis predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and the geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of both approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on the reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capture system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance and therefore it is important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method. Copyright © 2017 Elsevier B.V. All rights reserved.
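One common way to implement a geometric sphere fit like the GSF method is as a linear least-squares problem, which also explains why the result is insensitive to the initial guess: this formulation needs none. The marker trajectory below is synthetic (a made-up HJC location and marker radius with 1 mm noise), so it sketches only the fitting step, not the marker protocol.

```python
import numpy as np

def fit_sphere(P):
    """Algebraic least-squares sphere fit: expand |p - c|^2 = r^2 into
    |p|^2 = 2 p.c + (r^2 - |c|^2), which is linear in c and (r^2 - |c|^2)."""
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = np.sum(P**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    return c, r

# Synthetic thigh-marker trajectory: points on a sphere around a known HJC
# (coordinates in metres, invented), with 1 mm measurement noise.
rng = np.random.default_rng(3)
hjc, radius = np.array([0.1, -0.05, 0.9]), 0.25
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
P = hjc + radius * dirs + 0.001 * rng.normal(size=(200, 3))

c, r = fit_sphere(P)
print(c, r)   # recovers the synthetic HJC and radius to sub-millimetre level
```

In practice each thigh marker traces its own sphere about the HJC during the functional trial, so the fit is repeated (or stacked) per marker; poor functional trials shrink the covered arc of the sphere and degrade the conditioning of the least-squares system, consistent with the sensitivity the study reports.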
Mathematics in the Making: Mapping Verbal Discourse in Polya's "Let Us Teach Guessing" Lesson
ERIC Educational Resources Information Center
Truxaw, Mary P.; DeFranco, Thomas C.
2007-01-01
This paper describes a detailed analysis of verbal discourse within an exemplary mathematics lesson--that is, George Polya teaching in the Mathematics Association of America [MAA] video classic, "Let Us Teach Guessing" (1966). The results of the analysis reveal an inductive model of teaching that represents recursive cycles rather than linear…
ERIC Educational Resources Information Center
Moore, Alicia L.
2007-01-01
The importance of multiculturalism in the aftermath of Hurricane Katrina can be illustrated through a comparative view of the 1967 controversial, seminal, and Academy Award winning film, "Guess Who's Coming to Dinner". In the film, a multicultural cast starred in a groundbreaking tale of interracial marriage--then still illegal in some United…
ERIC Educational Resources Information Center
Fernie, David E.; DeVries, Rheta
This research study tests Selman's (1980) hypothesis that different games pull players toward particular kinds of reasoning through a developmental comparison of children's reasoning in two games, Tic Tac Toe and the Guessing Game. The present study focuses on two basic questions and their educational implications: (1) What differences and…
ERIC Educational Resources Information Center
Mount, Robert E.; Schumacker, Randall E.
1998-01-01
A Monte Carlo study was conducted using simulated dichotomous data to determine the effects of guessing on Rasch item fit statistics and the Logit Residual Index. Results indicate that no significant differences were found between the mean Rasch item fit statistics for each distribution type as the probability of guessing the correct answer…
The Effect of Guessing on Item Reliability under Answer-Until-Correct Scoring
ERIC Educational Resources Information Center
Kane, Michael; Moloney, James
1978-01-01
The answer-until-correct (AUC) procedure requires that examinees respond to a multi-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
A Response to Holster and Lake Regarding Guessing and the Rasch Model
ERIC Educational Resources Information Center
Stewart, Jeffrey; McLean, Stuart; Kramer, Brandon
2017-01-01
Stewart questioned vocabulary size estimation methods proposed by Beglar and Nation for the Vocabulary Size Test, further arguing Rasch mean square (MSQ) fit statistics cannot determine the proportion of random guesses contained in the average learner's raw score, because the average value will be near 1 by design. He illustrated this by…
"A Spinach with a V on It": What 3-Year-Olds See in Standard and Enhanced Blissymbols.
ERIC Educational Resources Information Center
Raghavendra, Parimala; Fristoe, Macalyne
1990-01-01
Standard or enhanced Blissymbols, designed to represent familiar actions, attributes, and objects, were shown to 20 three-year-olds, who guessed their meaning. The number of their guesses that referred to the enhancements was twice as great as the number that referred to the standard Blissymbol base. (Author/JDD)
ERIC Educational Resources Information Center
Bayen, Ute J.; Kuhlmann, Beatrice G.
2011-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source-guessing probabilities to the perceived contingency…
Network Authentication Protocol Studies
2009-04-01
[Reference-list and figure-list fragments only; the recoverable citation is: R. Corin, S. Malladi, J. Alves-Foss, and S. Etalle, "Guess what? Here is a new tool that finds some new guessing attacks."]
sparse-msrf:A package for sparse modeling and estimation of fossil-fuel CO2 emission fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-10-06
The software is used to fit models of emission fields (e.g., fossil-fuel CO2 emissions) to sparse measurements of gaseous concentrations. Its primary aim is to provide an implementation and a demonstration for the algorithms and models developed in J. Ray, V. Yadav, A. M. Michalak, B. van Bloemen Waanders and S. A. McKenna, "A multiresolution spatial parameterization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions", accepted, Geoscientific Model Development, 2014. The software can be used to estimate emissions of non-reactive gases such as fossil-fuel CO2, methane, etc. The software uses a proxy of the emission field being estimated (e.g., for fossil-fuel CO2, a population density map is a good proxy) to construct a wavelet model for the emission field. It then uses a shrinkage regression algorithm called Stagewise Orthogonal Matching Pursuit (StOMP) to fit the wavelet model to concentration measurements, using an atmospheric transport model to relate emission and concentration fields. Algorithmic novelties described in the paper above (1) ensure that the estimated emission fields are non-negative, (2) allow the use of guesses for emission fields to accelerate the estimation process and (3) ensure that under/overestimates in the guesses do not skew the estimation.
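StOMP selects every atom whose correlation with the residual exceeds a threshold at each stage; its simpler one-atom-per-iteration relative, Orthogonal Matching Pursuit (OMP), conveys the same greedy shrinkage-regression idea. The sketch below is illustrative only and is not the package's implementation.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse solve of y ≈ A x.
    Each iteration adds the column most correlated with the residual,
    then re-solves a least-squares problem on the selected columns."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

With an orthonormal dictionary (as in an orthogonal wavelet basis), OMP recovers a sparse coefficient vector exactly from its synthesis.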
The Development and Validation of the Game User Experience Satisfaction Scale (GUESS).
Phan, Mikki H; Keebler, Joseph R; Chaparro, Barbara S
2016-12-01
The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players' attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users. © 2016, Human Factors and Ergonomics Society.
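A standard internal-consistency statistic of the kind reported for scales like the GUESS is Cronbach's alpha. The function below is the textbook formula, and the example data are illustrative, not the study's.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: (n_respondents, n_items) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)"""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_var_sum / total_var)
```

Perfectly redundant items give alpha = 1, and weaker inter-item correlation pulls alpha down, which is why it is read as a measure of how consistently a subscale's items tap one construct.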
Ultrasonic prediction of term birth weight in Hispanic women. Accuracy in an outpatient clinic.
Nahum, Gerard G; Pham, Krystle Q; McHugh, John P
2003-01-01
To investigate the accuracy of ultrasonic fetal biometric algorithms for estimating term fetal weight. Ultrasonographic fetal biometric assessments were made in 74 Hispanic women who delivered at 37-42 weeks of gestation. Measurements were taken of the fetal biparietal diameter, head circumference, abdominal circumference and femur length. Twenty-seven standard fetal biometric algorithms were assessed for their accuracy in predicting fetal weight. Results were compared to those obtained by merely guessing the mean term birth weight in each case. The correlation between ultrasonically predicted and actual birth weights ranged from 0.52 to 0.79. The different ultrasonic algorithms estimated fetal weight to within +/- 8.6-15.0% (+/- 295-520 g) of actual birth weight as compared with +/- 13.6% (+/- 449 g) for guessing the mean birth weight in each case (mean +/- SD). The mean absolute prediction errors for 17 of the ultrasonic equations (63%) were superior to those obtained by guessing the mean birth weight by 3.2-5.0% (96-154 g) (P < .05). Fourteen algorithms (52%) were more accurate for predicting fetal weight to within +/- 15%, and 20 algorithms (74%) were more accurate for predicting fetal weight to within +/- 10% of actual birth weight than simply guessing the mean birth weight (P < .05). Ten ultrasonic equations (37%) showed significant utility for predicting fetal weight > 4,000 g (likelihood ratio > 5.0). Term fetal weight predictions using the majority of sonographic fetal biometric equations are more accurate, by up to 154 g and 5%, than simply guessing the population-specific mean birth weight.
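The comparison against "guessing the mean" amounts to computing a mean absolute percentage error (MAPE) for each predictor against a constant-mean baseline. The birth weights below are hypothetical, not the study's data.

```python
import numpy as np

def mape(predicted, actual):
    # mean absolute percentage error, in percent
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - actual) / actual)

# hypothetical term birth weights in grams
actual     = np.array([3100.0, 3450.0, 3800.0, 2950.0, 4100.0])
ultrasound = np.array([3250.0, 3400.0, 3600.0, 3100.0, 3900.0])
mean_guess = np.full(actual.shape, actual.mean())  # baseline: guess the mean
```

Here the (hypothetical) ultrasound estimates beat the mean-guess baseline, which is exactly the kind of margin the study quantifies for the 27 biometric algorithms.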
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.
2014-02-01
To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. For the first time, it allowed us to obtain area-wide, detailed, high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
Validation of AIRS Retrievals of CO2 via Comparison to In Situ Measurements
NASA Technical Reports Server (NTRS)
Olsen, Edward T.; Chahine, Moustafa T.; Chen, Luke L.; Jiang, Xun; Pagano, Thomas S.; Yung, Yuk L.
2008-01-01
Topics include AIRS on Aqua, 2002-present with discussion about continued operation to 2011 and beyond and background, including spectrum, weighting functions, and initialization; comparison with aircraft and FTIR measurements in Masueda (CONTRAIL) JAL flask measurements, Park Falls, WI FTIR, Bremen, GDF, and Spitsbergen, Norway; AIRS retrievals over addition FTIR sites in Darwin, AU and Lauder, NZ; and mid-tropospheric carbon dioxide weather and contribution from major surface sources. Slide titles include typical AIRS infrared spectrum, AIRS sensitivity for retrieving CO2 profiles, independence of CO2 solution with respect to the initial guess, available in situ measurements for validation and comparison, comparison of collocated V1.5x AIRS CO2 (N_coll greater than or equal to 9) with INTEX-NA and SPURT;
ERIC Educational Resources Information Center
Loiseau, Mathieu; Hallal, Racha; Ballot, Pauline; Gazidedja, Ada
2016-01-01
In this paper, we present a learning game designed according to a strategy focusing on favouring the learners' "playful attitude". The game's modalities pertain to what we might call "guessing games". The chosen avatar of such guessing games both exists as learning and Commercial Off The Shelf (COTS) board games. We explain in…
ERIC Educational Resources Information Center
Drabinová, Adéla; Martinková, Patrícia
2017-01-01
In this article we present a general approach not relying on item response theory models (non-IRT) to detect differential item functioning (DIF) in dichotomous items with presence of guessing. The proposed nonlinear regression (NLR) procedure for DIF detection is an extension of method based on logistic regression. As a non-IRT approach, NLR can…
ERIC Educational Resources Information Center
Ibbett, Nicole L.; Wheldon, Brett J.
2016-01-01
In 2014 Central Queensland University (CQU) in Australia banned the use of multiple choice questions (MCQs) as an assessment tool. One of the reasons given for this decision was that MCQs provide an opportunity for students to "pass" by merely guessing their answers. The mathematical likelihood of a student passing by guessing alone can…
Improving Preschoolers' Recognition Memory for Faces with Orienting Information.
ERIC Educational Resources Information Center
Montepare, Joann M.
To determine whether preschool children's memory for unfamiliar faces could be facilitated by giving them orienting information about faces, 4- and 5-year-old subjects were told that they were going to play a guessing game in which they would be looking at faces and guessing which ones they had seen before. In study 1, 6 boys and 6 girls within…
An Alternative to the 3PL: Using Asymmetric Item Characteristic Curves to Address Guessing Effects
ERIC Educational Resources Information Center
Lee, Sora; Bolt, Daniel M.
2018-01-01
Both the statistical and interpretational shortcomings of the three-parameter logistic (3PL) model in accommodating guessing effects on multiple-choice items are well documented. We consider the use of a residual heteroscedasticity (RH) model as an alternative, and compare its performance to the 3PL with real test data sets and through simulation…
ERIC Educational Resources Information Center
Mongillo, Geraldine; Wilder, Hilary
2012-01-01
This qualitative study focused on at-risk college freshmen's ability to read and write expository text using game-like, online expository writing activities. These activities required participants to write descriptions of a target object so that peers could guess what the object was, after which they were given the results of those guesses as…
Grade of Membership Response Time Model for Detecting Guessing Behaviors
ERIC Educational Resources Information Center
Pokropek, Artur
2016-01-01
A response model that is able to detect guessing behaviors and produce unbiased estimates in low-stake conditions using timing information is proposed. The model is a special case of the grade of membership model in which responses are modeled as partial members of a class that is affected by motivation and a class that responds only according to…
A monogamy-of-entanglement game with applications to device-independent quantum cryptography
NASA Astrophysics Data System (ADS)
Tomamichel, Marco; Fehr, Serge; Kaniewski, Jędrzej; Wehner, Stephanie
2013-10-01
We consider a game in which two separate laboratories collaborate to prepare a quantum system and are then asked to guess the outcome of a measurement performed by a third party in a random basis on that system. Intuitively, by the uncertainty principle and the monogamy of entanglement, the probability that both players simultaneously succeed in guessing the outcome correctly is bounded. We are interested in the question of how the success probability scales when many such games are performed in parallel. We show that any strategy that maximizes the probability to win every game individually is also optimal for the parallel repetition of the game. Our result implies that the optimal guessing probability can be achieved without the use of entanglement. We explore several applications of this result. Firstly, we show that it implies security for standard BB84 quantum key distribution when the receiving party uses fully untrusted measurement devices, i.e. we show that BB84 is one-sided device independent. Secondly, we show how our result can be used to prove security of a one-round position-verification scheme. Finally, we generalize a well-known uncertainty relation for the guessing probability to quantum side information.
Designing stellarator coils by a modified Newton method using FOCUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
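The iteration can be sketched in a toy form. The stand-in below replaces the modified Cholesky factorization with the common "add tau·I until Cholesky succeeds" fix, adds a halving line search for robustness, and exercises it on the Rosenbrock function; FOCUS of course applies the scheme to coil geometry, not to Rosenbrock, so everything here is illustrative.

```python
import numpy as np

def modified_newton(f, grad, hess, x0, tol=1e-10, max_iter=200):
    """Newton iteration with a simple positive-definite Hessian fix:
    if Cholesky fails, add tau*I and retry (a crude stand-in for a
    modified Cholesky factorization)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        tau = 0.0
        while True:
            try:
                L = np.linalg.cholesky(H + tau * np.eye(len(x)))
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, 1e-8)   # inflate until positive definite
        # solve (H + tau I) p = -g using the Cholesky factor
        p = np.linalg.solve(L.T, np.linalg.solve(L, -g))
        t = 1.0                              # halving line search
        while f(x + t * p) > f(x) and t > 1e-12:
            t *= 0.5
        x = x + t * p
    return x

# Rosenbrock test problem with analytic gradient and Hessian
f = lambda x: (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2.0 * (1 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
    200.0 * (x[1] - x[0]**2),
])
hess = lambda x: np.array([
    [2.0 - 400.0 * x[1] + 1200.0 * x[0]**2, -400.0 * x[0]],
    [-400.0 * x[0], 200.0],
])
```

The point of using second-order information, as in FOCUS, is the fast local convergence of the Newton step once the iterate is near the optimum.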
The terminal area automated path generation problem
NASA Technical Reports Server (NTRS)
Hsin, C.-C.
1977-01-01
The automated terminal area path generation problem in the advanced Air Traffic Control (ATC) system has been studied. Definitions, inputs, outputs and the interrelationships with other ATC functions are discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum-effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses of the boundary conditions, etc. Recommendations are made on real-world implementations.
A multi-frequency iterative imaging method for discontinuous inverse medium problem
NASA Astrophysics Data System (ADS)
Zhang, Lei; Feng, Lixin
2018-06-01
The inverse medium problem with a discontinuous refractive index is a challenging inverse problem. We employ primal-dual theory and fast solution of integral equations, and propose a new iterative imaging method. The regularization parameter is selected by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented that proceeds through the frequencies from low to high. We also discuss a semi-analytical strategy for selecting the initial guess. Numerical experiments are presented to show the effectiveness of the proposed method.
Nonlinear equation of the modes in circular slab waveguides and its application.
Zhu, Jianxin; Zheng, Jia
2013-11-20
In this paper, circularly curved inhomogeneous waveguides are first transformed into straight inhomogeneous waveguides by a conformal mapping. Then, the differential transfer matrix method is introduced and adopted to deduce the exact dispersion relation for the modes. This relation is complex and difficult to solve directly, but in practical applications it can be approximated by a simpler nonlinear equation that is close to the exact relation and much easier to analyze. Afterward, optimized asymptotic solutions are obtained and act as initial guesses for the subsequent Newton iteration. Finally, very accurate solutions are achieved in the numerical experiments.
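The "asymptotic guess followed by Newton iteration" pattern can be sketched on a toy transcendental equation, tan x = 1/x (not the paper's dispersion relation). On the nth branch, tan x ≈ x − nπ gives the asymptotic guess x ≈ nπ + 1/(nπ), from which Newton converges in a few steps.

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    # classic Newton iteration started from an asymptotic initial guess
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# toy dispersion-like relation: tan(x) = 1/x on the nth branch
f  = lambda x: math.tan(x) - 1.0 / x
df = lambda x: 1.0 / math.cos(x) ** 2 + 1.0 / x ** 2

n = 3
x0 = n * math.pi + 1.0 / (n * math.pi)   # asymptotic initial guess
root = newton(f, df, x0)
```

A good asymptotic guess matters here for the same reason it does for the waveguide dispersion relation: Newton's method converges quadratically only from within the root's basin of attraction.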
Targeting Ballistic Lunar Capture Trajectories Using Periodic Orbits in the Sun-Earth CRTBP
NASA Technical Reports Server (NTRS)
Cooley, D.S.; Griesemer, Paul Ricord; Ocampo, Cesar
2009-01-01
A particular periodic orbit in the Earth-Sun circular restricted three-body problem is shown to have the characteristics needed for a ballistic lunar capture transfer. An injection from a circular parking orbit into the periodic orbit serves as an initial guess for a targeting algorithm. By targeting appropriate parameters incrementally in increasingly complicated force models, and using precise derivatives calculated from the state transition matrix, a reliable algorithm is produced. Ballistic lunar capture trajectories in restricted four-body systems can thus be produced in a systematic way.
Excited-State Effective Masses in Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Fleming, Saul Cohen, Huey-Wen Lin
2009-10-01
We apply black-box methods, i.e. where the performance of the method does not depend upon initial guesses, to extract excited-state energies from Euclidean-time hadron correlation functions. In particular, we extend the widely used effective-mass method to incorporate multiple correlation functions and produce effective mass estimates for multiple excited states. In general, these excited-state effective masses will be determined by finding the roots of some polynomial. We demonstrate the method using sample lattice data to determine excited-state energies of the nucleon and compare the results to other energy-level finding techniques.
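The single-correlator effective mass that the method generalizes is m_eff(t) = log(C(t)/C(t+1)); for a correlator that is a sum of decaying exponentials, this ratio approaches the ground-state energy from above as excited-state contamination dies off. The two-state correlator below is synthetic, with illustrative amplitudes and masses.

```python
import numpy as np

def effective_mass(C):
    """Standard effective mass from a Euclidean-time correlator:
    m_eff(t) = log(C(t) / C(t+1))."""
    C = np.asarray(C, dtype=float)
    return np.log(C[:-1] / C[1:])

# synthetic two-state correlator: C(t) = A0 e^{-m0 t} + A1 e^{-m1 t}
t = np.arange(0, 30)
m0, m1, A0, A1 = 0.5, 1.2, 1.0, 0.7
C = A0 * np.exp(-m0 * t) + A1 * np.exp(-m1 * t)
meff = effective_mass(C)
```

The effective mass starts above m0 (excited-state contamination) and plateaus at m0 at large t; the black-box extension described in the abstract extracts the excited levels as well, from multiple correlators.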
A gradient based algorithm to solve inverse plane bimodular problems of identification
NASA Astrophysics Data System (ADS)
Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing
2018-02-01
This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.
Measuring and managing risk improves strategic financial planning.
Kleinmuntz, D N; Kleinmuntz, C E; Stephen, R G; Nordlund, D S
1999-06-01
Strategic financial risk assessment is a practical technique that can enable healthcare strategic decision makers to perform quantitative analyses of the financial risks associated with a given strategic initiative. The technique comprises six steps: (1) list risk factors that might significantly influence the outcomes, (2) establish best-guess estimates for assumptions regarding how each risk factor will affect its financial outcomes, (3) identify risk factors that are likely to have the greatest impact, (4) assign probabilities to assumptions, (5) determine potential scenarios associated with combined assumptions, and (6) determine the probability-weighted average of the potential scenarios.
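Steps 4 through 6 amount to taking a probability-weighted average over the combined scenarios. A minimal sketch with hypothetical numbers (not from the article):

```python
# hypothetical scenarios: (probability, net financial outcome in $M)
scenarios = [
    (0.25, 12.0),   # best case: strong volume, favourable payer mix
    (0.50,  4.0),   # base case
    (0.25, -6.0),   # worst case: volume shortfall
]

# probabilities across the scenario set must sum to 1
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-12

# step 6: probability-weighted average of the potential scenarios
expected_value = sum(p * outcome for p, outcome in scenarios)  # 3.5
```

The spread between the worst and best cases, not just the expected value, is what the risk-factor analysis in steps 1 through 3 is meant to surface for decision makers.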
NASA Astrophysics Data System (ADS)
Spessa, Allan; Forrest, Matthew; Werner, Christian; Steinkamp, Joerg; Hickler, Thomas
2013-04-01
Wildfire is a fundamental Earth System process. It is the most important disturbance worldwide in terms of area and variety of biomes affected; a major mechanism by which carbon is transferred from the land to the atmosphere (2-4 Pg per annum, equiv. 20-30% of global fossil fuel emissions over the last decade); and globally a significant source of particulate aerosols and trace greenhouse gases. Fire is also potentially important as a feedback in the climate system. If climate change favours more intense fire regimes, this would result in a net transfer of carbon from ecosystems to the atmosphere, as well as higher emissions, and under certain circumstances, increased troposphere ozone production- all contributing to positive climate-land surface feedbacks. Quantitative analysis of fire-vegetation-climate interactions has been held back until recently by a lack of consistent global data sets on fire, and by the underdeveloped state of dynamic vegetation-fire modelling. Dynamic vegetation-fire modelling is an essential part of our forecasting armory for examining the possible impacts of climate, fire regimes and land-use on ecosystems and emissions from biomass burning beyond the observation period, as part of future climate or paleo-climate studies. LPJ-GUESS is a process-based model of vegetation dynamics designed for regional to global applications. It combines features of the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM) with those of the General Ecosystem Simulator (GUESS) in a single, flexible modelling framework. The models have identical representations of eco-physiological and biogeochemical processes, including the hydrological cycle. However, they differ in the detail with which vegetation dynamics and canopy structure are simulated. 
Simplified, computationally efficient representations are used in the LPJ-DGVM, while LPJ-GUESS employs a gap-model approach, which better captures ecological succession and hence ecosystem changes due to disturbance such as fire. SPITFIRE (SPread and InTensity of FIRe and Emissions) mechanistically simulates the number of fires, area burnt, fire intensity, crown fires, fire-induced plant mortality, and emissions of carbon, trace gases and aerosols from biomass burning. Originally developed as an embedded model within LPJ-DGVM, SPITFIRE has since been coupled to LPJ-GUESS. However, neither LPJ-DGVM-SPITFIRE nor LPJ-GUESS-SPITFIRE has been fully benchmarked, especially in terms of how well each model simulates vegetation patterns and biomass in areas where fire is known to be important. This information is crucial if we are to have confidence in the models in forecasting fire, emissions from biomass burning and fire-climate impacts on ecosystems. Here we report on the benchmarking of the LPJ-GUESS-SPITFIRE model. We benchmarked LPJ-GUESS-SPITFIRE driven by a combination of daily reanalysis climate data (Sheffield 2012), monthly GFEDv3 burnt area data (1997-2009) (van der Werf et al. 2010) and long-term annual fire statistics (1901 to 2000) (Mouillot and Field 2005) against new Lidar-based biomass data for tropical forests and savannas (Saatchi et al. 2011; Baccini et al., 2012). Our new work has focused on revising the way GUESS simulates tree allometry, light penetration through the tree canopy and sapling recruitment, and how GUESS-SPITFIRE simulates fire-induced mortality, all based on recent literature, as well as a more explicit accounting of land cover change (JRC's GLC 2009). We present how these combined changes result in a much improved simulation of tree carbon across the tropics, including the Americas, Africa, Asia and Australia. Our results are compared with respect to more empirical-based approaches to calculating emissions from biomass burning. 
We discuss our findings in terms of improved forecasting of fire, emissions from biomass burning and fire-climate impacts on ecosystems.
Modeller's attitude in catchment modelling: a comparative study
NASA Astrophysics Data System (ADS)
Battista Chirico, Giovanni
2010-05-01
Ten modellers have been invited to predict, independently of each other, the discharge of the artificial Chicken Creek catchment in North-East Germany for a simulation period of three years, providing them only soil texture, terrain and meteorological data. No data concerning the discharge or other sources of state variables and fluxes within the catchment have been provided. Modellers did, however, have the opportunity to visit the experimental catchment and inspect aerial photos of the catchment since its initial development stage. This has been a unique comparative study focussing on how different modellers deal with the key issues in predicting the discharge in ungauged catchments: 1) choice of the model structure; 2) identification of model parameters; 3) identification of model initial and boundary conditions. The first general lesson learned during this study was that the modeller is just part of the entire modelling process and has a major bearing on the model results, particularly in ungauged catchments, where there are more degrees of freedom in making modelling decisions. Modellers' attitudes during the stages of model implementation and parameterisation have been deeply influenced by their own experience from previous modelling studies. A common outcome was that modellers have been mainly oriented towards applying process-based models able to exploit the available data concerning the physical properties of the catchment, and which could therefore be more suitable to cope with the lack of data concerning state variables or fluxes. The second general lesson learned during this study was the role of dominant processes. We believed that the modelling task would have been much easier in an artificial catchment, where heterogeneity was expected to be negligible and processes simpler, than in catchments that have evolved over a longer time period.
The results of the models were expected to converge, and this would have been a good starting point from which to proceed to a model comparison in natural, more challenging catchments. This model comparison showed instead that even a small artificial catchment exhibits heterogeneities that lead to modelling problems similar to those in natural catchments. We also verified that qualitative knowledge of the potential surface processes, such as could be gained by visual inspection of the catchment (erosion marks, canopy features, soil crusting, etc.), was widely employed by the modellers to guess the dominant processes to be modelled and therefore to make choices on model structure and guesses of model parameters. The two lessons learned from this intercomparison study are closely linked. The experience of a modeller is crucial in the (subjective) process of deciding upon the dominant processes that seem sufficiently important to be incorporated into the model. On the other hand, the accumulated experience will also play an important role in how different pieces of evidence from, for example, field inspections will modify the initial conceptual understanding.
Development of a Response Planner using the UCT Algorithm for Cyber Defense
2013-03-01
[Table residue: the report's tables list KDD Cup intrusion categories with their attack classes and instance counts, and fragments of a confusion matrix, e.g. ftp_write (r2l), guess_passwd (r2l), imap (r2l), ipsweep (probe), land (dos), loadmodule (u2r), multihop (r2l), neptune (dos), nmap (probe), perl (u2r), phf (r2l), pod (dos), portsweep (probe), back (dos), rootkit (u2r), buffer_overflow (u2r); the full tables are not recoverable from this extract.]
Mathematical Modeling of Ultra-Superheated Steam Gasification
NASA Astrophysics Data System (ADS)
Xin, Fen
Pure steam gasification is of interest for hydrogen production, but it poses the challenge of supplying heat for the endothermic reactions. Traditional solutions either combust part of the feedstock, at the price of a lower carbon conversion ratio, or use costly heating apparatus. A distributed gasifier with an Ultra-Superheated-Steam (USS) generator was therefore invented, satisfying the heat requirement while avoiding carbon combustion in steam gasification. This project developed the first version of the Ultra-Superheated-Steam Fluidization Model (USSFM V1.0) for the USS gasifier. A stand-alone equilibrium combustion model was first developed to calculate the USS mixture, which served as the input to the USSFM V1.0. Model development comprised the assumptions, governing equations, boundary conditions, supporting equations and iterative schemes for guessed values. There were three nested loops in the dense bed and one loop in the freeboard; the code consists of one main routine and twenty-four subroutines. The USSFM V1.0 was validated with experimental data from the Enercon USS gasifier. The calculated USS mixture contained only a trace of oxygen, confirming the initial expectation of an oxygen-free environment in the gasifier. Simulations showed that the USS mixture could satisfy the gasification heat requirement without partial carbon combustion. The USSFM V1.0 predicted the H2% well in all tests, and predicted the other variables well at the lower oxygen feed levels. At higher oxygen feeds, it simulated hotter temperatures, higher CO% and lower CO2%. The errors were attributed to the assumptions of equilibrium combustion, adiabatic reactors, simplified reaction kinetics, etc. Examination of the modeling data showed that gas-particle convective heat transfer is critical in the energy balance equations of both the emulsion gas and the particles, while bubble size controls both the mass and energy balance equations of the bubble gas.
A parametric study suggested a lower oxygen feed for a higher hydrogen content; however, too little oxygen would impede fluidization in the bed. The soundness of the iterative schemes and the stability of the USSFM V1.0 were tested by a sensitivity analysis of two guessed values. An Analytical Hierarchy Process analysis indicated that large-scale gasification is advantageous for hydrogen production, but is impeded by high capital cost and CO2 emissions. This study demonstrated that the USS gasifier offers the possibility of generating H2-rich and CO2-lean syngas in a much cheaper, distributed way. The FORTRAN-based USSFM V1.0 currently correlates well with experimental data at small oxygen feeds; finally, suggestions were made for improving the model to support wider applications.
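The abstract's "iterative schemes of guessed values" are not specified in detail. As a purely illustrative sketch (the update function, relaxation factor, and toy equation below are hypothetical, not taken from the USSFM), an inner loop of this kind is often implemented as under-relaxed successive substitution on a guessed variable until it stops changing:

```python
import math

def solve_fixed_point(update, x0, tol=1e-8, relax=0.5, max_iter=500):
    """Under-relaxed successive substitution: x <- (1 - w)*x + w*update(x).
    Returns the converged value and the number of iterations used."""
    x = x0
    for i in range(max_iter):
        x_new = (1 - relax) * x + relax * update(x)
        if abs(x_new - x) < tol:
            return x_new, i + 1
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# toy scalar balance x = cos(x), standing in for a guessed dense-bed variable
root, iters = solve_fixed_point(math.cos, x0=1.0)
```

The relaxation factor trades convergence speed against robustness to a poor initial guess, which is the kind of behavior a sensitivity analysis of guessed values probes.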
Length scales involved in decoherence of trapped bosons by buffer-gas scattering
NASA Astrophysics Data System (ADS)
Gilz, Lukas; Rico-Pérez, Luis; Anglin, James R.
2014-05-01
We ask and answer a basic question about the length scales involved in quantum decoherence: how far apart in space do two parts of a quantum system have to be before a common quantum environment decoheres them as if they were entirely separate? We frame this question specifically in a cold atom context. How far apart do two populations of bosons have to be before an environment of thermal atoms of a different species ("buffer gas") responds to their two particle numbers separately? An initial guess for this length scale is the thermal coherence length of the buffer gas; we show that a standard Born-Markov treatment partially supports this guess, but predicts only inverse-square saturation of decoherence rates with distance, and not the much more abrupt Gaussian behavior of the buffer gas's first-order coherence. We confirm this Born-Markov result with a more rigorous theory, based on an exact solution of a two-scatterer scattering problem, which also extends the result beyond weak scattering. Finally, however, we show that when interactions within the buffer-gas reservoir are taken into account, an abrupt saturation of the decoherence rate does occur, exponentially on the length scale of the buffer gas's mean free path.
Saccadic eye movements do not disrupt the deployment of feature-based attention.
Kalogeropoulou, Zampeta; Rolfs, Martin
2017-07-01
The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue-stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.
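Decomposing orientation reports into target reports and random guesses is commonly done with a mixture model: one concentrated component (von Mises or Gaussian) for target reports plus a uniform component for guesses. A minimal sketch, assuming a Gaussian-plus-uniform mixture over a 180 deg orientation space fitted by EM (the distributions, parameters, and synthetic data are illustrative, not taken from the paper):

```python
import math, random

def fit_guess_rate(errors, iters=200):
    """EM for a uniform(180 deg) + Gaussian mixture of report errors.
    Returns (guess_rate, sigma_deg)."""
    g, sigma = 0.5, 20.0                      # crude starting values
    u = 1.0 / 180.0                           # uniform guess density
    for _ in range(iters):
        # E-step: responsibility of the guessing component for each error
        resp = []
        for e in errors:
            pg = g * u
            pt = (1 - g) * math.exp(-e * e / (2 * sigma * sigma)) \
                 / (sigma * math.sqrt(2 * math.pi))
            resp.append(pg / (pg + pt))
        # M-step: update mixture weight and target-report spread
        g = sum(resp) / len(errors)
        w = [1 - r for r in resp]
        sigma = math.sqrt(sum(wi * e * e for wi, e in zip(w, errors)) / sum(w))
    return g, sigma

# synthetic data: 70% target reports (sd 10 deg), 30% random guesses
random.seed(1)
errors = [random.gauss(0, 10) if random.random() < 0.7
          else random.uniform(-90, 90) for _ in range(2000)]
g_hat, sigma_hat = fit_guess_rate(errors)
```

Comparing the fitted guess rate and report precision (sigma) across cue-stimulus and saccade intervals is what allows continuity claims of the kind the abstract makes.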
Accelerated gradient based diffuse optical tomographic image reconstruction.
Biswas, Samir Kumar; Rajan, K; Vasu, R M
2011-01-01
We present fast reconstruction of the interior optical parameter distribution of a tissue and of a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using two new approaches: Broyden-based model-based iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The Newton-based MOBIIR algorithm generally used requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information obtainable from the forward solution of the diffusion equation. This approach reduces the computational time many-fold by approximating the system Jacobian successively through low-rank updates. Simulation studies were carried out with single as well as multiple inhomogeneities, and the algorithms were validated experimentally on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches were compared with those of the Newton-based MOBIIR algorithm, using mean squared error and execution time as metrics. We have shown through experimental and simulation studies that the Broyden-based and adjoint Broyden-based methods can reconstruct single as well as multiple inhomogeneities in tissue and in a tissue-mimicking phantom. Because they avoid direct evaluation of the Jacobian, the Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple and result in much faster implementations. The image reconstructions were carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches.
These algorithms work well when the initial guess is close to the true solution; when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
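The low-rank Jacobian updates at the heart of the Broyden approach follow Broyden's rank-one secant formula, J <- J + ((df - J dx) dx^T)/(dx^T dx), so the Jacobian is never re-evaluated after the initial guess. A minimal sketch on a 2x2 toy system (the test function and starting values are illustrative, not the DOT forward model):

```python
def solve2(J, b):
    """Solve the 2x2 linear system J d = b by Cramer's rule."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(J[1][1] * b[0] - J[0][1] * b[1]) / det,
            (J[0][0] * b[1] - J[1][0] * b[0]) / det]

def broyden(f, x, J, tol=1e-12, max_iter=100):
    """Quasi-Newton root finding with Broyden's rank-one secant update."""
    fx = f(x)
    for _ in range(max_iter):
        d = solve2(J, [-v for v in fx])                  # quasi-Newton step
        x_new = [xi + di for xi, di in zip(x, d)]
        fx_new = f(x_new)
        if max(abs(v) for v in fx_new) < tol:
            return x_new
        y = [a - c for a, c in zip(fx_new, fx)]          # change in residual
        Jd = [J[i][0] * d[0] + J[i][1] * d[1] for i in range(2)]
        denom = d[0] * d[0] + d[1] * d[1]
        for i in range(2):                               # J += (y - J d) d^T / (d . d)
            corr = (y[i] - Jd[i]) / denom
            J[i][0] += corr * d[0]
            J[i][1] += corr * d[1]
        x, fx = x_new, fx_new
    return x

# toy system: unit circle intersected with the line x = y
f = lambda p: [p[0] ** 2 + p[1] ** 2 - 1.0, p[0] - p[1]]
root = broyden(f, [1.0, 0.5], [[2.0, 1.0], [1.0, -1.0]])  # J0 from the initial guess
```

Each iteration costs one residual evaluation plus a rank-one correction, which is the source of the speedup over re-forming the Jacobian at every Newton step.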
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.
2017-12-01
This study develops an innovative calibration method for regional groundwater modeling based on multi-class empirical orthogonal functions (EOFs). The developed method is iterative. Before the iterative procedure starts, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of the multiple recharges; the initial guesses of the hydrogeological parameters are assigned according to in-situ pumping experiments. The recharges comprise net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial-guess or adjusted values of the recharges and parameters. Second, to determine the best EOF combination of the error storage hydrographs for deriving the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs; the error storage hydrographs are the differences between the storage hydrographs computed from observed and from simulated groundwater level fluctuations. Third, the values of the recharges and parameters are adjusted, and the procedure is repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of the Ming-Chu Basin, Taiwan, for the study period from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters decreased the RMSE of the simulated storage hydrographs dramatically within three calibration iterations.
This shows that the EOF-based iterative approach can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. The established EOF-based methodology can therefore identify the multiple recharges and hydrogeological parameters effectively and accurately.
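EOF analysis of the storage hydrographs reduces, in essence, to an eigen-decomposition of their covariance matrix: the EOFs are the spatial eigenvectors, and the expansion coefficients are the projections of the anomalies onto them. A minimal stdlib-only sketch using power iteration for the leading EOF (the toy hydrograph matrix is illustrative, not basin data):

```python
def leading_eof(X, iters=200):
    """Leading EOF of a data matrix X (rows: times, cols: wells), found by
    power iteration on the covariance matrix of the column anomalies.
    Returns the EOF (spatial pattern) and its expansion coefficients."""
    nt, nw = len(X), len(X[0])
    means = [sum(row[j] for row in X) / nt for j in range(nw)]
    A = [[row[j] - means[j] for j in range(nw)] for row in X]   # anomalies
    C = [[sum(A[t][i] * A[t][j] for t in range(nt)) / nt
          for j in range(nw)] for i in range(nw)]               # covariance
    e = [1.0] * nw
    for _ in range(iters):                                      # power iteration
        e = [sum(C[i][j] * e[j] for j in range(nw)) for i in range(nw)]
        norm = sum(v * v for v in e) ** 0.5
        e = [v / norm for v in e]
    coeffs = [sum(A[t][j] * e[j] for j in range(nw)) for t in range(nt)]
    return e, coeffs

def rmse(sim, obs):
    """The calibration objective: RMSE between simulated and observed series."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(sim)) ** 0.5

# two wells whose storages co-vary 1:2 -> leading EOF proportional to (1, 2)
eof, coeffs = leading_eof([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
```

In the calibration loop described above, the same decomposition applied to the error hydrographs supplies the correction vectors, and the `rmse` of the simulated storage hydrographs is the stopping criterion.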
Puzzler Solution: Just Making an Observation | Poster
Editor’s Note: It looks like we stumped you. None of the puzzler guesses were correct, but our winner was the closest to getting it right. He guessed it was a sanitary sewer clean-out pipe, and that’s what the photo looks like, according to our source at Facilities Maintenance and Engineering. Please continue reading for the correct puzzler solution. By Ashley DeVine, Staff Writer
ERIC Educational Resources Information Center
Juan, Wu Xiao; Abidin, Mohamad Jafre Zainol; Eng, Lin Siew
2013-01-01
This survey studies the relationship between the English vocabulary threshold and the word-guessing strategy used in reading comprehension learning among 80 pre-university Chinese students in Malaysia. The t-test is the main statistical test for this research, and the collected data are analysed using SPSS. From the standard deviation test…
ERIC Educational Resources Information Center
Andrich, David; Marais, Ida; Humphry, Stephen Mark
2016-01-01
Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…
ERIC Educational Resources Information Center
Panagiotakopoulos, Chris T.; Sarris, Menelaos E.
2013-01-01
The present study reports the basic characteristics of a game-like application entitled "Playing with Words-PwW". PwW is a single-user application where a word must be guessed given an anagram of that word. Anagrams are presented from a predefined word list and users can repeatedly try to guess the word, from which the anagram is…
Longenecker, Julia; Liu, Kristy; Chen, Eric Y H
2012-12-30
In an interactive guessing game, controls had higher performance and efficiency than patients with schizophrenia in correct trials. Patients' difficulties generating efficient questions suggest an increased taxation of working memory and an inability to engage an appropriate strategy, leading to impulsive behavior and reduced success. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Stabler, John R.; Johnson, Edward E.
An investigation of how children's responses to black and white objects reflect racial concepts is reported. In one series of experiments, Headstart children were asked to guess whether objects they liked or disliked were hidden in black or white boxes. Although white children guessed more often that positively evaluated objects were in white boxes, black…
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.
2014-07-01
To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km), sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this end, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances; Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch-age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method, despite some limitations with respect to extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed, high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
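GAPPARD's core idea, averaging disturbance-free simulation output over a patch-age distribution set by the disturbance frequency, can be sketched as follows. The geometric age distribution and the toy biomass curve are illustrative assumptions; the published method is more elaborate:

```python
import math

def gappard_mean(series, p_dist):
    """Average a disturbance-free output series (value at stand age a) over
    a stationary patch-age distribution P(a) = p * (1 - p)**a, i.e. the age
    structure produced by an annual stand-replacing disturbance probability
    p. The truncated distribution is renormalized."""
    weights = [p_dist * (1 - p_dist) ** a for a in range(len(series))]
    wsum = sum(weights)
    return sum(w * v for w, v in zip(weights, series)) / wsum

# toy biomass curve saturating with stand age (illustrative units)
biomass = [200.0 * (1 - math.exp(-a / 50.0)) for a in range(500)]
mean_low = gappard_mean(biomass, 0.005)   # rare disturbances: old patches dominate
mean_high = gappard_mean(biomass, 0.05)   # frequent disturbances: young patches dominate
```

One disturbance-free run thus replaces many stochastic disturbance realizations, which is where the reported speedup comes from.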
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2016-12-01
Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. These new instruments require novel approaches for processing imagery and separating surface and atmospheric signals. One approach is numerical source separation, which allows the determination of the underlying physical causes of observed signals. Improved source separation will enable hyperspectral imagery to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. We developed an Informed Non-negative Matrix Factorization (INMF) method for separating atmospheric and surface sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. We also explore methods to produce an initial guess of the spatial separation patterns. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO) with a focus on separating surface and atmospheric signal contributions. 
HICO's coastal-ocean focus provides a dataset with a wide range of atmospheric conditions, including high and low aerosol optical thickness and cloud cover, and with only minor contributions from the ocean surface, which helps isolate the contributions of the multiple atmospheric sources.
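INMF builds on standard non-negative matrix factorization, X ≈ WH with W, H ≥ 0, where the "informed" part enters through the initial guess (e.g., library spectra seeding a factor). A minimal sketch using the classical Lee-Seung multiplicative updates (the toy data and seeded initialization are illustrative; the actual INMF adds spectral and spatial constraints not shown here):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(X, W, H, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for X ~ W H; nonnegativity of the
    factors is preserved because every update multiplies by a ratio of
    nonnegative terms."""
    for _ in range(iters):
        WtX, WtWH = matmul(transpose(W), X), matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtX[i][j] / (WtWH[i][j] + eps) for j in range(len(H[0]))]
             for i in range(len(H))]
        XHt, WHHt = matmul(X, transpose(H)), matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * XHt[i][j] / (WHHt[i][j] + eps) for j in range(len(W[0]))]
             for i in range(len(W))]
    return W, H

# toy "image": 3 pixels x 3 channels mixing two sources
X = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [4.0, 4.0, 4.0]]
W0 = [[0.9, 0.1], [0.1, 0.9], [1.0, 1.0]]   # informed initial guess for the mixing
H0 = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]     # flat initial source spectra
W, H = nmf(X, W0, H0)
```

Because the updates are multiplicative, zeros in the informed initialization stay zero, which is one way prior structure can be imposed on the separation.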
Detecting Patterns of Anomalies
2009-03-01
[Table residue: the report's tables give detection rates (mean ± s.d.) for attack categories including guess_passwd, mailbomb, neptune and smurf, e.g. guess_passwd 0.7316 ± 0.0133 and 0.7792 ± 0.0145; the full tables are not recoverable from this extract.]
2011-08-31
Guess Again (and Again and Again): Measuring Password Strength by Simulating Password-Cracking Algorithms. [Garbled extract from the report documentation page; recoverable fragments:] "…large numbers of hashed passwords (Booz Allen Hamilton, HBGary, Gawker, Sony PlayStation, etc.), coupled with the availability of botnets that offer…"; "…when evaluating the strength of different password-composition policies."; "We investigate the effectiveness of entropy as a measure of password…"
ERIC Educational Resources Information Center
Bliss, Leonard B.
The aim of this study was to show that the superiority of corrected-for-guessing scores over number right scores as true score estimates depends on the ability of examinees to recognize situations where they can eliminate one or more alternatives as incorrect and to omit items where they would only be guessing randomly. Previous investigations…
ERIC Educational Resources Information Center
Pillow, Bradford H.; Hill, Valerie; Boyce, April; Stein, Catherine
2000-01-01
Three experiments investigated children's understanding of inference as a knowledge source. Most 4- to 6-year-olds did not rate a puppet as more certain of a toy's color after the puppet looked at the toy or inferred its color than they did after the puppet guessed the color. Most 8- and 9-year-olds distinguished inference and looking from…
NASA Technical Reports Server (NTRS)
Liebowitz, Jay; Krishnamurthy, Vijaya; Rodens, Ira; Houston, Chapman; Liebowitz, Alisa; Baek, Seung; Radko, Joe; Zeide, Janet
1996-01-01
Scheduling has become an increasingly important element in today's society and workplace. Within the NASA environment, scheduling is one of the most frequently performed and challenging functions. Towards meeting NASA's scheduling needs, a research version of a generic expert scheduling system architecture and toolkit has been developed. This final report describes the development and testing of GUESS (Generically Used Expert Scheduling System).
Meyer, Miriam Magdalena; Buchner, Axel; Bell, Raoul
2016-09-01
The present study investigates age differences in the vulnerability to illusory correlations between fear-relevant stimuli and threatening information. Younger and older adults saw pictures of threatening snakes and nonthreatening fish, paired with threatening and nonthreatening context information ("poisonous" and "nonpoisonous") with a null contingency between animal type and poisonousness. In a source monitoring test, participants were required to remember whether an animal was associated with poisonousness or nonpoisonousness. Illusory correlations were implicitly measured via a multinomial model. One advantage of this approach is that memory and guessing processes can be assessed independently. An illusory correlation would be reflected in a higher probability of guessing that a snake rather than a fish was poisonous if the poisonousness of the animal was not remembered. Older adults showed evidence of illusory correlations in source guessing while younger adults did not; instead they showed evidence of probability matching. Moreover, snake fear was associated with increased vulnerability to illusory correlations in older adults. The findings confirm that older adults are more susceptible to fear-relevant illusory correlations than younger adults. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Memory and the Korsakoff syndrome: not remembering what is remembered.
d'Ydewalle, Géry; Van Damme, Ilse
2007-03-14
Following the distinction between involuntary unconscious memory, involuntary conscious memory, and intentional retrieval, the focus of the present paper is whether involuntary conscious memory is impaired in Korsakoff patients. At study, participants either generated associations to the target words or counted the number of letters with enclosed spaces or the number of vowels in them (semantic versus perceptual processing). In the Direct tests, word stems were to be used to retrieve the targets, with guessing either allowed or not; in the Opposition tests, the stems were to be completed with the first word that came to mind, but with another word if that first word was a target word; and in the Indirect tests, no reference was made to the target words from the study phase. In the Direct tests, the performance of Korsakoff patients was not necessarily worse than that of healthy controls, provided guessing was allowed. More critical for the Korsakoff patients was their deficient involuntary conscious memory. This deficiency explained the suppression failures in the Opposition tests, the absence of performance differences between the Indirect and Opposition tests, the absence of a beneficial effect of providing information about the status of the stem, the performance boost when guessing was allowed, and the very low rate of "Know"/"Remember" responses.
Automated Tests for Telephone Telepathy Using Mobile Phones.
Sheldrake, Rupert; Smart, Pamela; Avraamides, Leonidas
2015-01-01
To carry out automated experiments on mobile phones to test for telepathy in connection with telephone calls. Subjects, aged from 10 to 83, registered online with the names and mobile telephone numbers of two or three senders. A computer selected a sender at random and asked him or her to call the subject via the computer. The computer then asked the subject to guess the caller's name, and connected the caller and the subject after receiving the guess. A test consisted of six trials. We examined the effects of subjects' sex and age, and of time delays, on guesses. The outcome measure was the proportion of correct guesses of the caller's name, compared with the 33.3% or 50% mean chance expectation. In 2080 trials with three callers there were 869 hits (41.8%), above the 33.3% chance level (P < 1 × 10^-15). The hit rate in incomplete tests was 43.8% (P = .00003), showing that optional stopping could not explain the positive results. In 745 trials with two callers, there were 411 hits (55.2%), above the 50% chance level (P = .003). An analysis of the data made it very unlikely that cheating could explain the positive results. These experiments showed that automated tests for telephone telepathy can be carried out using mobile phones. Copyright © 2015 Elsevier Inc. All rights reserved.
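The reported P values follow from comparing the observed hit counts with binomial chance expectation. A quick normal-approximation check of the two headline results (a sketch; the paper's exact test statistics may differ):

```python
import math

def hit_rate_zscore(hits, trials, p_chance):
    """z-score of an observed hit count against binomial chance
    (normal approximation: mean n*p, sd sqrt(n*p*(1-p)))."""
    mean = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (hits - mean) / sd

z_three = hit_rate_zscore(869, 2080, 1 / 3)   # 41.8% vs. 33.3% chance
z_two = hit_rate_zscore(411, 745, 1 / 2)      # 55.2% vs. 50% chance
```

This gives z of roughly 8.2 and 2.8, consistent in magnitude with the quoted P < 1 × 10^-15 and P = .003.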
Probing the solar core with low-degree p modes
NASA Astrophysics Data System (ADS)
Roxburgh, I. W.; Vorontsov, S. V.
2002-01-01
We address the question of what could be learned about the solar core structure if the seismic data were limited to low-degree modes only. The results of three different experiments are described. The first is the linearized structural inversion of the p-mode frequencies of a solar model modified slightly in the energy-generating core, using the original (unmodified) model as an initial guess. In the second experiment, we invert the solar p-mode frequencies measured in the 32-month subset of BiSON data (Chaplin et al. 1998), degraded with additional 0.1 μHz random errors, using a model of 2.6 Gyr age from the solar evolutionary sequence as the initial approximation; this second inversion is non-linear. In the third experiment, we compare the same set of BiSON frequencies with a current reference solar model.
NASA Astrophysics Data System (ADS)
Ram, Paras; Joshi, Vimal Kumar; Sharma, Kushal; Walia, Mittu; Yadav, Nisha
2016-01-01
An attempt has been made to describe the effects of geothermal viscosity, with viscous dissipation, on the three-dimensional time-dependent boundary layer flow of magnetic nanofluids due to a stretchable rotating plate in the presence of a porous medium. The modelled time-dependent governing equations are transformed from a boundary value problem to an initial value problem, and thereafter solved by a fourth-order Runge-Kutta method in MATLAB, with a shooting technique supplying the initial guess. The influences of mixed temperature, depth-dependent viscosity, and the rotation strength parameter on the flow and temperature fields generated at the plate surface are investigated. The derived results bear directly on heat-transfer problems in high-speed computer disks (Herrero et al. [1]) and turbine rotor systems (Owen and Rogers [2]).
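The shooting technique converts the boundary value problem into an initial value problem by iterating on the unknown initial slope until the far boundary condition is met. A minimal sketch for a linear toy problem, y'' = -y with y(0) = 0 and y(1) = 1 (the ODE is illustrative, not the nanofluid system; RK4 integration plus a secant update on the guessed slope):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def end_value(slope, n=100):
    """Integrate y'' = -y on [0, 1] with y(0) = 0, y'(0) = slope; return y(1)."""
    f = lambda t, y: [y[1], -y[0]]          # first-order system (y, y')
    y, t, h = [0.0, slope], 0.0, 1.0 / n
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y[0]

def shoot(target=1.0, s0=0.0, s1=2.0, tol=1e-10):
    """Secant iteration on the guessed initial slope until y(1) = target."""
    f0, f1 = end_value(s0) - target, end_value(s1) - target
    while abs(f1) > tol:
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
        f0, f1 = f1, end_value(s1) - target
    return s1

slope = shoot()  # exact answer for this problem: y'(0) = 1/sin(1)
```

For nonlinear systems like the nanofluid equations, the same loop applies, but convergence then depends on how good the initial slope guess is.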
Violante-Carvalho, Nelson
2005-12-01
Synthetic Aperture Radar (SAR) onboard satellites is the only source of directional wave spectra with continuous and global coverage. Millions of SAR Wave Mode (SWM) imagettes have been acquired since the launch, in the early 1990s, of the first European Remote Sensing satellite ERS-1 and of its successors ERS-2 and ENVISAT, which has opened up many possibilities, especially for wave data assimilation. The main aim of data assimilation is to improve forecasts by introducing available observations into the modeling procedure so as to minimize the differences between model estimates and measurements. There are, however, limitations in retrieving the directional spectrum from SAR images due to nonlinearities in the mapping mechanism. The Max-Planck-Institut (MPI) scheme, the first proposed and most widely used algorithm for retrieving directional wave spectra from SAR images, is employed to compare significant wave heights retrieved from ERS-1 SAR against buoy measurements and against the WAM wave model. It is shown that for periods shorter than 12 seconds the WAM model performs better than the MPI scheme, despite the fact that the model is used as the first guess for the MPI method; that is, the retrieval degrades the first guess. For periods longer than 12 seconds, the part of the spectrum that is directly measured by SAR, the performance of the MPI scheme is at least as good as that of the WAM model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bunakov, V. E., E-mail: bunakov@VB13190.spb.edu
A critical analysis of the present-day concept of chaos in quantum systems as nothing but a “quantum signature” of chaos in classical mechanics is given. In contrast to the existing semi-intuitive guesses, a definition of classical and quantum chaos is proposed on the basis of the Liouville–Arnold theorem: a quantum chaotic system featuring N degrees of freedom should have M < N independent first integrals of motion (good quantum numbers) specified by the symmetry of the Hamiltonian of the system. Quantitative measures of quantum chaos that, in the classical limit, go over to the Lyapunov exponent and the classical stability parameter are proposed. The proposed criteria of quantum chaos are applied to solving standard problems of modern dynamical chaos theory.
Method for determining optimal supercell representation of interfaces
NASA Astrophysics Data System (ADS)
Stradi, Daniele; Jelver, Line; Smidstrup, Søren; Stokbro, Kurt
2017-05-01
The geometry and structure of an interface ultimately determines the behavior of devices at the nanoscale. We present a generic method to determine the possible lattice matches between two arbitrary surfaces and to calculate the strain of the corresponding matched interface. We apply this method to explore two relevant classes of interfaces for which accurate structural measurements of the interface are available: (i) the interface between pentacene crystals and the (1 1 1) surface of gold, and (ii) the interface between the semiconductor indium-arsenide and aluminum. For both systems, we demonstrate that the presented method predicts interface geometries in good agreement with those measured experimentally, which present nontrivial matching characteristics and would be difficult to guess without relying on automated structure-searching methods.
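The essence of such automated structure searching is to enumerate integer supercell matrices for the two surface cells and score each candidate pair by the strain needed to force the film supercell onto the substrate supercell. A deliberately small stdlib-only sketch (square toy lattices, tiny search ranges, and a crude mean-|strain| score that also penalizes relative rotations; the method in the paper is far more sophisticated):

```python
from itertools import product

def mm(X, Y):
    """2x2 matrix product."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def inv2(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return ((X[1][1] / d, -X[0][1] / d), (-X[1][0] / d, X[0][0] / d))

def mean_strain(S, F):
    """Mean |T - I| where T deforms film supercell F onto substrate
    supercell S (rows are lattice vectors, so F . T = S)."""
    T = mm(inv2(F), S)
    return sum(abs(T[i][j] - (1.0 if i == j else 0.0))
               for i in range(2) for j in range(2)) / 4.0

def best_match(A, B, n_sub=1, n_film=2):
    """Brute-force search over integer supercell matrices N (substrate,
    entries in [-n_sub, n_sub]) and M (film, entries in [-n_film, n_film])
    with positive determinants; returns the minimal mean strain."""
    subs = [m for m in product(range(-n_sub, n_sub + 1), repeat=4)
            if m[0] * m[3] - m[1] * m[2] > 0]
    films = [m for m in product(range(-n_film, n_film + 1), repeat=4)
             if m[0] * m[3] - m[1] * m[2] > 0]
    best = None
    for n in subs:
        S = mm(((n[0], n[1]), (n[2], n[3])), A)
        for m in films:
            F = mm(((m[0], m[1]), (m[2], m[3])), B)
            s = mean_strain(S, F)
            if best is None or s < best:
                best = s
    return best

# toy: square substrate (a = 1.00) vs. square film (a = 0.52)
A = ((1.0, 0.0), (0.0, 1.0))
B = ((0.52, 0.0), (0.0, 0.52))
val = best_match(A, B)
```

Here the winning candidate is a 2x2 film supercell (edge 1.04) compressed isotropically onto a 1x1 substrate cell, with score (1 - 1/1.04)/2, i.e. about 1.9% average strain per tensor component.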
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for unknown nonlinear stochastic hybrid systems with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model, reducing the time of the on-line system identification process. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested, based on comparison of the innovation-process errors estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, can be utilized to achieve parameter estimation for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
A rigorous and simpler method of image charges
NASA Astrophysics Data System (ADS)
Ladera, C. L.; Donoso, G.
2016-07-01
The method of image charges relies on the proven uniqueness of the solution of the Laplace differential equation for an electrostatic potential which satisfies some specified boundary conditions. Granted by that uniqueness, the method of images is rightly described as nothing but shrewdly guessing which and where image charges are to be placed to solve the given electrostatics problem. Here we present an alternative image charges method that is based not on guessing but on rigorous and simpler theoretical grounds, namely the constant potential inside any conductor and the application of powerful geometric symmetries. The aforementioned required uniqueness and, more importantly, guessing are therefore both altogether dispensed with. Our two new theoretical fundaments also allow the image charges method to be introduced in earlier physics courses for engineering and sciences students, instead of its present and usual introduction in electromagnetic theory courses that demand familiarity with the Laplace differential equation and its boundary conditions.
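The constant-potential fundament the authors invoke is easy to verify numerically for the textbook case of a point charge above a grounded plane (a standard example, not one drawn from the paper): the charge plus its mirror image make the potential vanish everywhere on the plane.

```python
import math

def potential(q, source, point):
    """Coulomb potential of a point charge, in units where 1/(4*pi*eps0) = 1."""
    return q / math.dist(source, point)

def plane_potential(q, d, x, y):
    """Total potential on the plane z = 0 from a charge q at height d
    plus its image charge -q at depth -d below the plane."""
    real = potential(q, (0.0, 0.0, d), (x, y, 0.0))
    image = potential(-q, (0.0, 0.0, -d), (x, y, 0.0))
    return real + image
```

By the mirror symmetry of the construction, the two contributions cancel exactly at every point of the plane, which is the boundary condition a grounded conductor imposes.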
Taboo: Working memory and mental control in an interactive task
Hansen, Whitney A.; Goldinger, Stephen D.
2014-01-01
Individual differences in working memory (WM) predict principled variation in tasks of reasoning, response time, memory, and other abilities. Theoretically, a central function of WM is keeping task-relevant information easily accessible while suppressing irrelevant information. The present experiment was a novel study of mental control, using performance in the game Taboo as a measure. We tested effects of WM capacity on several indices, including perseveration errors (repeating previous guesses or clues) and taboo errors (saying at least part of a taboo or target word). By most measures, high-span participants were superior to low-span participants: High-spans were better at guessing answers, better at encouraging correct guesses from teammates, and less likely to either repeat themselves or produce taboo clues. Differences in taboo errors occurred only in an easy control condition. The results suggest that WM capacity predicts behavior in tasks requiring mental control, extending this finding to an interactive group setting. PMID:19827699
The effect of guessing on the speech reception thresholds of children.
Moodley, A
1990-01-01
Speech audiometry is an essential part of the assessment of hearing-impaired children and it is now widely used throughout the United Kingdom. Although instructions are universally agreed upon as an important aspect of the administration of any form of audiometric testing, there has been little, if any, research evaluating the influence that the instructions given to a listener have on the Speech Reception Threshold obtained. This study attempts to evaluate what effect guessing has on the Speech Reception Threshold of children. A sample of 30 secondary school pupils between 16 and 18 years of age with normal hearing was used in the study. It is argued that the type of instruction normally used in Speech Reception Threshold audiometric testing may not provide sufficient control of guessing, and the implications of this are examined using data obtained in the study.
Quantifying the effects of social influence
Mavrodiev, Pavlin; Tessone, Claudio J.; Schweitzer, Frank
2013-01-01
How do humans respond to indirect social influence when making decisions? We analysed an experiment where subjects had to guess the answer to factual questions, having only aggregated information about the answers of others. While the response of humans to aggregated information is a widely observed phenomenon, it has not been investigated quantitatively, in a controlled setting. We found that the adjustment of individual guesses depends linearly on the distance to the mean of all guesses. This is a remarkable, and yet surprisingly simple regularity. It holds across all questions analysed, even though the correct answers differ by several orders of magnitude. Our finding supports the assumption that individual diversity does not affect the response to indirect social influence. We argue that the nature of the response crucially changes with the level of information aggregation. This insight contributes to the empirical foundation of models for collective decisions under social influence. PMID:23449043
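The regularity reported here, individual adjustments proportional to the distance from the mean of all guesses, can be written as a one-line update rule. The sensitivity alpha below is a made-up illustration, not an estimate from the experiment.

```python
def adjust_guesses(guesses, alpha=0.3):
    """Move each guess toward the group mean by a fraction alpha
    of its distance from the mean (linear social-influence response)."""
    mean = sum(guesses) / len(guesses)
    return [g + alpha * (mean - g) for g in guesses]

round1 = [10.0, 50.0, 90.0]
round2 = adjust_guesses(round1)  # each guess moves 30% of the way to the mean
```

A convenient property of the linear rule is that the group mean is left unchanged; only the spread of opinions shrinks.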
NASA Technical Reports Server (NTRS)
Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.
1992-01-01
A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl
2016-09-15
We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum-point search is one of the two key steps for finding the minimax approximation. If pre-tabulation of initial guesses is to be avoided, strategies for a sufficiently robust algorithm have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that three times fewer function evaluations are required overall when applying it to typical non-relativistic and relativistic quantum chemical systems.
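For context, the object being optimized is the short-sum approximation 1/x ≈ Σ_k w_k exp(-a_k x) obtained from the Laplace identity 1/x = ∫_0^∞ exp(-t x) dt; the minimax step tunes the weights and exponents so the worst-case error over the orbital-energy range is minimal. The sketch below only verifies the underlying identity with a crude trapezoidal quadrature; it is not the paper's minimax algorithm.

```python
import math

def inverse_by_laplace(x, t_max=40.0, n=4000):
    """Approximate 1/x by trapezoidal quadrature of exp(-t*x) on [0, t_max]."""
    h = t_max / n
    total = 0.5 * (1.0 + math.exp(-t_max * x))  # endpoint terms
    for k in range(1, n):
        total += math.exp(-k * h * x)
    return h * total
```

A minimax fit replaces these thousands of quadrature points with a handful of optimally placed exponentials, which is what makes the transformation useful in electronic-structure theory.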
Bayesian approach to analyzing holograms of colloidal particles.
Dimiduk, Thomas G; Manoharan, Vinothan N
2016-10-17
We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
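The contrast with least-squares fitting can be illustrated with a deliberately simple stand-in model: a Metropolis sampler for the mean of noisy data needs no careful starting point and returns a full posterior rather than a single best fit. (The paper's actual model is Lorenz-Mie hologram formation sampled with tempered MCMC; nothing below is specific to it.)

```python
import math
import random

def log_posterior(mu, data, sigma=1.0):
    """Gaussian log-likelihood with a flat prior (up to a constant)."""
    return -sum((d - mu) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n_steps=20000, step=0.5, seed=1):
    rng = random.Random(seed)
    mu = 0.0  # deliberately poor initial guess
    lp = log_posterior(mu, data)
    samples = []
    for _ in range(n_steps):
        prop = mu + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples[n_steps // 2:]  # discard the first half as burn-in
```

The retained samples approximate the posterior over mu, so parameter uncertainties come from the sample spread rather than a local curvature estimate around a fitted optimum.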
NASA Technical Reports Server (NTRS)
Pu, Zhao-Xia; Tao, Wei-Kuo
2004-01-01
An effort has been made at NASA/GSFC to use the Goddard Earth Observing System (GEOS) global analysis to generate the initial and boundary conditions for MM5/WRF simulations. This linkage between the GEOS global analysis and the MM5/WRF models has made several useful applications possible. As one sample study, a series of MM5 simulations was conducted to test the sensitivity of MM5-simulated precipitation over the eastern USA to the initial and boundary conditions. Global analyses from different operational centers (e.g., NCEP, ECMWF, NASA/GSFC) were used to provide the first-guess fields and boundary conditions for MM5. Numerical simulations were performed for a one-week period over the eastern coastal areas of the USA, and the distributions and quantities of MM5-simulated precipitation were compared. Results will be presented in the workshop. In addition, other applications from recent and future studies will also be addressed.
SU-C-BRB-01: Automated Dose Deformation for Re-Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, S; Kainz, K; Li, X
Purpose: An objective of retreatment planning is to minimize dose to previously irradiated tissues. Conventional retreatment planning is based largely on best-guess superposition of the previous treatment’s isodose lines. In this study, we report a rigorous, automated retreatment planning process to minimize dose to previously irradiated organs at risk (OAR). Methods: Data for representative patients previously treated using helical tomotherapy and later retreated in the vicinity of the original disease site were retrospectively analyzed in an automated fashion using a prototype treatment planning system equipped with a retreatment planning module (Accuray, Inc.). The initial plan’s CT, structures, and planned dose were input along with the retreatment CT and structure set. Using a deformable registration algorithm implemented in the module, the initially planned dose and structures were warped onto the retreatment CT. An integrated third-party sourced software (MIM, Inc.) was used to evaluate registration quality and to contour overlapping regions between isodose lines and OARs, providing additional constraints during retreatment planning. The resulting plan and the conventionally generated retreatment plan were compared. Results: Jacobian maps showed good quality registration between the initial plan and retreatment CTs. For a right orbit case, the dose deformation facilitated delineating the regions of the eyes and optic chiasm originally receiving 13 to 42 Gy. Using these regions as dose constraints, the new retreatment plan resulted in V50 reduction of 28% for the right eye and 8% for the optic chiasm, relative to the conventional plan. Meanwhile, differences in the PTV dose coverage were clinically insignificant. Conclusion: Automated retreatment planning with dose deformation and definition of previously-irradiated regions allowed for additional planning constraints to be defined to minimize re-irradiation of OARs.
For serial organs that do not recover from radiation damage, this method provides a more precise and quantitative means to limit cumulative dose. This research is partially supported by Accuray, Inc.
NASA Astrophysics Data System (ADS)
Zhuang, Yufei; Huang, Haibin
2014-02-01
A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. At the beginning of the search process, an initialization generator is constructed using the PSO algorithm because of its strong global searching ability and robustness to random initial values; however, the PSO algorithm converges slowly near the global optimum. Therefore, when the change in the fitness function falls below a predefined value, the search is switched to the LPM to accelerate convergence. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results from 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm outperforms both the PSO algorithm and the LPM alone in terms of global searching capability and convergence rate. Moreover, the PSO-LPM algorithm is also robust to random initial values.
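The two-phase idea generalizes beyond trajectory planning and is easy to sketch on a toy multimodal function. Nothing below reproduces the spacecraft dynamics or the LPM; plain gradient descent stands in for the fast local stage, and all parameters are illustrative.

```python
import math
import random

def f(x):
    """Toy multimodal objective with its global minimum near x = 3."""
    return (x - 3.0) ** 2 + 0.5 * math.sin(10.0 * x)

def pso(n_particles=20, iters=60, seed=2):
    """Crude global phase: particle swarm search on [-10, 10]."""
    rng = random.Random(seed)
    xs = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = list(xs)        # per-particle best positions
    gbest = min(xs, key=f)  # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

def refine(x, lr=0.01, steps=500, h=1e-6):
    """Local phase: descent with a central-difference gradient,
    fast once the global phase has delivered a good initial guess."""
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

x0 = pso()           # global search supplies the initial guess
x_star = refine(x0)  # local refinement converges quickly from it
```

The switch-over mirrors the abstract's trigger on the fitness change: once the swarm stops improving, the smooth local method finishes the job.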
From Dynamic Global Vegetation Modelling to Real-World regional and local Application
NASA Astrophysics Data System (ADS)
Steinkamp, J.; Forrest, M.; Kamm, K.; Leiblein-Wild, M.; Pachzelt, A.; Werner, C.; Hickler, T.
2015-12-01
Dynamic (global) vegetation models (DGVM) can be applied at any spatial resolution on the local, national, continental and global scale, given suitable climatic and geographic input forcing data. LPJ-GUESS, the main DGVM applied in our research group, uses the plant functional type (PFT) concept in the global setup, with typically about 10-20 tree PFTs (subdivided into tropical, temperate and boreal) and two herbaceous PFTs by default. When modelling smaller spatial extents, such as continental (e.g. Europe/North America) or national domains, or individual sites (e.g. Frankfurt, Germany), i.e. the scale of decision making, it becomes necessary to refine the PFT representation, the model initialization and validation and, in some cases, to include additional processes. I will present examples of LPJ-GUESS applications at the continental to local scale performed by our working group, including i.) a European simulation representing the main tree species and Mediterranean shrubs, ii.) a climate impact study for Turkey, iii.) coupled dynamic large grazer-vegetation modelling across Africa, iv.) modelling an allergenic shrub that is invasive in Europe (Ambrosia artemisiifolia), v.) simulating water usage by an oak-pine forest stand near Frankfurt, and vi.) stand-specific differences in modelling at the FACE sites. Finally, I will present some thoughts on how to advance the models in terms of more detailed and realistic PFT or species parameterizations accounting for adaptive functional trait responses also within species.
How to Prevent Type-Flaw Guessing Attacks on Password Protocols
2003-01-01
Malladi, Sreekanth; Alves-Foss, Jim (Center for Secure and Dependable Systems)
Cleared Hot: A Forward Air Control (Airborne) Concepts Trainer
2006-09-01
...list of high-level objectives imitating a detailed requirements document. In those cases, software developers are forced to make best guesses about how to meet those objectives. Is there a better method? We embarked on a project to create a...with participants at the end of an 18-month development cycle, we did the next best thing: Cleared Hot was taken to the mission subject matter...
Non-penetrating sham needle, is it an adequate sham control in acupuncture research?
Lee, Hyangsook; Bang, Heejung; Kim, Youngjin; Park, Jongbae; Lee, Sangjae; Lee, Hyejung; Park, Hi-Joon
2011-01-01
This study aimed to determine whether a non-penetrating sham needle can serve as an adequate sham control. We conducted a randomised, subject-blind, sham-controlled trial in both acupuncture-naïve and experienced healthy volunteers. Participants were randomly allocated to receive either real acupuncture (n=39) or non-penetrating sham acupuncture (n=40) on the hand (LI4), abdomen (CV12) and leg (ST36). The procedures were standardised and identical for both groups. Participants rated acupuncture sensations on a 10-point scale. A blinding index was calculated based on the participants' guesses on the type of acupuncture they had received (real, sham or do not know) for each acupuncture point. The association of knowledge about and experience in acupuncture with correct guessing was also examined. The subjects in both groups were similar with respect to age, gender, experience or knowledge about acupuncture. The sham needle tended to produce less penetration, pain and soreness only at LI4. Blinding appeared to be successfully achieved for ST36. Although 41% of participants in the real acupuncture group made correct guesses for LI4, 31% guessed incorrectly for CV12, beyond chance level. People with more experience and knowledge about acupuncture were more likely to correctly guess the type of needle they received at ST36 only, compared to that at the other points. A non-penetrating sham needle may successfully blind participants and thus, may be a credible sham control. However, the small sample size, the different needle sensations, and the degree and direction of unblinding across acupuncture points warrant further studies in Korea as well as other countries to confirm our finding. Our results also justify the incorporation of formal testing of the use of sham controls in clinical trials of acupuncture. Copyright © 2010 Elsevier Ltd. All rights reserved.
A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.
Demircan-Tureyen, Ezgi; Kamasak, Mustafa E
2015-01-01
Discrete tomography (DT) techniques are capable of computing better results than continuous tomography techniques, even when using fewer projections. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and robustness of the algorithm are compared with the original DART in simulation experiments performed under (1) a limited number of projections, (2) limited-view, and (3) noisy-projection conditions.
Characterizing Feshbach resonances in ultracold scattering calculations
NASA Astrophysics Data System (ADS)
Frye, Matthew D.; Hutson, Jeremy M.
2017-10-01
We describe procedures for converging on and characterizing zero-energy Feshbach resonances that appear in scattering lengths for ultracold atomic and molecular collisions as a function of an external field. The elastic procedure is appropriate for purely elastic scattering, where the scattering length is real and displays a true pole. The regularized scattering length procedure is appropriate when there is weak background inelasticity, so that the scattering length is complex and displays an oscillation rather than a pole, but the resonant scattering length ares is close to real. The fully complex procedure is appropriate when there is substantial background inelasticity and the real and imaginary parts of ares are required. We demonstrate these procedures for scattering of ultracold 85Rb in various initial states. All of them can converge on and provide full characterization of resonances, from initial guesses many thousands of widths away, using scattering calculations at only about ten values of the external field.
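A common illustration of such convergence, though not the paper's algorithm, uses the standard single-resonance formula a(B) = a_bg(1 - Δ/(B - B0)) in place of coupled-channel scattering calculations. For that form, 1/(a(B) - a_bg) is exactly linear in B and vanishes at the pole, so a single secant step on it recovers B0 even from a distant initial guess.

```python
def scattering_length(B, a_bg=100.0, B0=155.0, delta=0.01):
    """Standard single-resonance parametrization of the scattering length
    (illustrative parameters, not values for 85Rb)."""
    return a_bg * (1.0 - delta / (B - B0))

def locate_pole(B1, B2, a_bg=100.0):
    """For the formula above, h(B) = 1/(a(B) - a_bg) is linear in B with
    its zero at the pole B0, so one secant step lands on B0 exactly."""
    h1 = 1.0 / (scattering_length(B1) - a_bg)
    h2 = 1.0 / (scattering_length(B2) - a_bg)
    return B2 - h2 * (B2 - B1) / (h2 - h1)
```

Here the width delta is 0.01 and the two field guesses sit thousands of widths away from the pole at B0 = 155, yet the step lands on it, echoing the convergence behaviour the abstract reports.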
Concentrations of Volatiles in the Lunar Regolith
NASA Technical Reports Server (NTRS)
Taylor, Jeff; Taylor, Larry; Duke, Mike
2007-01-01
To set lower and upper limits on the overall amounts and types of volatiles released during heating of polar regolith, we examined the data for equatorial lunar regolith and for the compositions of comets. The purpose, specifically, was to answer these questions:
1. Upper/lower limits and 'best guess' for the total amount of volatiles (by weight %) released from lunar regolith up to 150 C.
2. Upper/lower limits and 'best guess' for the composition of the volatiles released from the lunar regolith, by weight %.
Statistical Image Recovery From Laser Speckle Patterns With Polarization Diversity
2010-09-01
...A Fourier transform is taken, mapping the data to the pupil plane. The computed phase from this operation is multiplied to the amplitude of the pupil...guess generated by a uniform random number generator (−π to π). The guessed phase is multiplied to the measured amplitude in the image plane and the...plane data. Again, a Fourier transform is performed, mapping the manipulated data set back to the image plane. The computed phase in this operation is...
Tectonic predictions with mantle convection models
NASA Astrophysics Data System (ADS)
Coltice, Nicolas; Shephard, Grace E.
2018-04-01
Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. 
Indeed, the initial conditions and the rheological parameters can be good enough for an accurate prediction of instantaneous flow, but not for a prediction after 10 My of evolution. Therefore, inverse methods (sequential or data assimilation methods) using short-term fully dynamic evolution that predict surface kinematics are promising tools for a better understanding of the state of the Earth's mantle.
Retrospective Attention Gates Discrete Conscious Access to Past Sensory Stimuli.
Thibault, Louis; van den Berg, Ronald; Cavanagh, Patrick; Sergent, Claire
2016-01-01
Cueing attention after the disappearance of visual stimuli biases which items will be remembered best. This observation has historically been attributed to the influence of attention on memory as opposed to subjective visual experience. We recently challenged this view by showing that cueing attention after the stimulus can improve the perception of a single Gabor patch at threshold levels of contrast. Here, we test whether this retro-perception actually increases the frequency of consciously perceiving the stimulus, or simply allows for a more precise recall of its features. We used retro-cues in an orientation-matching task and performed mixture-model analysis to independently estimate the proportion of guesses and the precision of non-guess responses. We find that the improvements in performance conferred by retrospective attention are overwhelmingly determined by a reduction in the proportion of guesses, providing strong evidence that attracting attention to the target's location after its disappearance increases the likelihood of perceiving it consciously.
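The guess/precision decomposition works by modelling report errors as a mixture of a uniform "guess" component and a precise component centred on the target. A minimal EM version with a Gaussian precise component, which is only a sketch and does not reproduce the study's exact model or parameters, looks like this:

```python
import math
import random

RANGE = 180.0  # orientation errors live in [-90, 90) degrees

def em_guess_rate(errors, sigma=10.0, n_iter=50):
    """Estimate the proportion of uniform-random guesses by EM,
    with the precision (sigma) of non-guess responses assumed known."""
    g = 0.5  # starting value for the guess rate
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    for _ in range(n_iter):
        resp = []
        for e in errors:
            p_guess = g / RANGE
            p_precise = (1.0 - g) * norm * math.exp(-e * e / (2.0 * sigma ** 2))
            resp.append(p_guess / (p_guess + p_precise))  # E-step responsibility
        g = sum(resp) / len(resp)                         # M-step update
    return g
```

A retro-cue effect driven by conscious access, as the abstract argues, shows up as a drop in the estimated guess rate g rather than a change in sigma.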
Elementary School Children’s Cheating Behavior and its Cognitive Correlates
Ding, Xiao Pan; Omrin, Danielle S.; Evans, Angela D.; Fu, Genyue; Chen, Guopeng; Lee, Kang
2014-01-01
Elementary school children’s cheating behavior and its cognitive correlates were investigated using a guessing game. Children (N = 95) between 8 and 12 years of age were asked to guess which side of the screen a coin would appear on and received rewards based on their self-reported accuracy. Children’s cheating behavior was measured by examining whether children failed to adhere to the game rules by falsely reporting their accuracy. Children’s theory-of-mind understanding and executive functioning skills were also assessed. The majority of children cheated during the guessing game, and cheating behavior decreased with age. Children with better working memory and inhibitory control were less likely to cheat. However, among the cheaters, those with greater cognitive flexibility used more tactics while cheating. Results revealed the unique role that executive functioning plays in children’s cheating behavior: like a double-edged sword, executive functioning can inhibit children’s cheating behavior on the one hand, while promoting the sophistication of children’s cheating tactics on the other. PMID:24464240
Production and discrimination of facial expressions by preschool children.
Field, T M; Walden, T A
1982-10-01
Production and discrimination of the 8 basic facial expressions were investigated among 34 3-5-year-old preschool children. The children's productions were elicited and videotaped under 4 different prompt conditions (imitation of photographs of children's facial expressions, imitation of those in front of a mirror, imitation of those when given labels for the expressions, and when given only labels). Adults' "guesses" of the children's productions as well as the children's guesses of their own expressions on videotape were more accurate for the happy than afraid or angry expressions and for those expressions elicited during the imitation conditions. Greater accuracy of guessing by the adult than the child suggests that the children's productions were superior to their discriminations, although these skills appeared to be related. Children's production skills were also related to sociometric ratings by their peers and expressivity ratings by their teachers. These were not related to the child's age and only weakly related to the child's expressivity during classroom free-play observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.
In this work the performance of two neutron spectrum unfolding codes, based on iterative procedures and on artificial neural networks, respectively, is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), was designed using neural network technology. The artificial intelligence approach of a neural network does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, requiring as input only the count rates measured with a Bonner sphere system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-to-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network.
Contrary to iterative procedures, in the neural network approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called the Neutron spectrometry and dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes, based on iterative procedures and on artificial neural networks, respectively, is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), was designed using neural network technology. The artificial intelligence approach of a neural network does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, requiring as input only the count rates measured with a Bonner sphere system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-to-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network.
Contrary to iterative procedures, in the neural network approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called the Neutron spectrometry and dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
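For readers unfamiliar with iterative unfolding, the flavour of such codes can be conveyed by a generic multiplicative (MLEM-style) update. This is not the SPUNIT algorithm itself: the spectrum estimate is simply rescaled until the response matrix applied to it reproduces the measured counts, and the quality of the initial guess spectrum phi0 governs how quickly this settles.

```python
def unfold(R, counts, phi0, n_iter=200):
    """MLEM-style multiplicative unfolding: R is the detector response
    matrix (detectors x energy bins), counts are the measured rates,
    phi0 is the initial guess spectrum."""
    phi = list(phi0)
    n_det, n_bins = len(counts), len(phi)
    for _ in range(n_iter):
        predicted = [sum(R[i][j] * phi[j] for j in range(n_bins))
                     for i in range(n_det)]
        for j in range(n_bins):
            num = sum(R[i][j] * counts[i] / predicted[i] for i in range(n_det))
            den = sum(R[i][j] for i in range(n_det))
            phi[j] *= num / den  # rescale bin j toward agreement with counts
    return phi
```

Because the problem is ill-conditioned (many spectra reproduce the same few count rates), the initial guess effectively selects which solution the iteration converges to, which is why NSDUAZ's automated selection from the IAEA compendium matters.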
Use of mathematical decomposition to optimize investments in gas production and distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dougherty, E.L.; Lombardino, E.; Hutchinson, P.
1986-01-01
This paper presents an analytical approach based upon the decomposition method of mathematical programming for determining the optimal investment sequence in each year of a planning horizon for a group of reservoirs that produce gas and gas liquids through a trunk-line network and a gas processing plant. The paper describes the development of the simulation and investment planning system (SIPS) to perform the required calculations. Net present value (NPV) is maximized with the requirement that the incremental present value ratio (PWPI) of any investment in any reservoir be greater than a specified minimum value. A unique feature is a gas reservoir simulation model that aids SIPS in evaluating field development investments. The optimal solution supplies specified dry gas offtake requirements through time until the remaining reserves are insufficient to meet requirements economically. The sales value of recovered liquids contributes significantly to NPV, while the required spare gas-producing capacity reduces NPV. SIPS was used successfully for 4 years to generate annual investment plans and operating budgets, and to perform many special studies for a producing complex containing over 50 reservoirs. This experience is reviewed. For this large problem, SIPS converges to the optimal solution in 10 to 20 iterations. The primary factor that determines this number is how good the starting guess is. Although SIPS can generate a starting guess, beginning with a previous optimal solution ordinarily results in faster convergence. Computing time increases in proportion to the number of reservoirs because more than 90% of computing time is spent solving the reservoir subproblems.
Noise Adaptation and Correlated Maneuver Gating of an Extended Kalman Filter
1990-03-01
(The indexed excerpt for this report is garbled OCR from its appendices: fragments of covariance derivations for the acceleration estimates ax and ay, a MATLAB diary setup, the continuous-time system matrices and initial guess for the target model, and the header of a Kalman filter routine, kfbr.m. No coherent abstract is recoverable from the excerpt.)
Flight Research: Problems Encountered and What They Should Teach Us
NASA Technical Reports Server (NTRS)
Thompson, Milton O.; Hunley, J. D.; Launius, Roger (Technical Monitor)
2000-01-01
The document by Milt Thompson that is reproduced here was an untitled rough draft found in Thompson's papers in the Dryden Historical Reference Collection. Internal evidence suggests that it was written around 1974. I have not attempted to second guess what Milt might have done in revising the paper, but I have made some minor stylistic changes to make it more readable without changing the sense of what Milt initially wrote. For the most part, I have not attempted to bring his comments up to date. For readers who may not be familiar with the history of what is today the NASA Dryden Flight Research Center and of its predecessor organizations, I have added a background section.
Exploration Opportunity Search of Near-earth Objects Based on Analytical Gradients
NASA Astrophysics Data System (ADS)
Ren, Yuan; Cui, Ping-Yuan; Luan, En-Jie
2008-07-01
The problem of searching for exploration opportunities to near-earth minor objects is investigated. For rendezvous missions, the analytical gradients of the performance index with respect to the free parameters are derived using variational calculus and the theory of the state-transition matrix. After randomly generating initial guesses in the search space, the performance index is optimized under the guidance of the analytical gradients, leading to local minimum points that represent potential launch opportunities. This method retains the global-search property of the traditional method while avoiding its blind search, thereby greatly increasing computing speed. Furthermore, with this method the search precision can be controlled effectively.
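The gradient-guided multi-start strategy described above can be illustrated with a minimal Python sketch (this is not the authors' code; the cost function, bounds, and choice of local optimizer are placeholders): random initial guesses are drawn in the search space, each is refined by a gradient-based local optimizer, and the resulting local minima are collected as candidate opportunities.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(f, grad, bounds, n_starts=20, seed=0):
    """Random multi-start local search: draw initial guesses uniformly
    in the box, refine each with gradient-guided L-BFGS-B, and collect
    all local minima found (the candidate 'launch opportunities')."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    found = []
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                      # random initial guess
        res = minimize(f, x0, jac=grad, method="L-BFGS-B",
                       bounds=list(zip(lo, hi)))      # gradient-guided refinement
        found.append((res.fun, res.x))
    found.sort(key=lambda t: t[0])                    # best candidate first
    return found

# Toy multimodal index standing in for the mission performance index.
f = lambda x: np.sin(3 * x[0]) + (x[0] - 0.5) ** 2
g = lambda x: np.array([3 * np.cos(3 * x[0]) + 2 * (x[0] - 0.5)])
minima = multistart_minimize(f, g, bounds=[(-3, 3)])
```

Each entry of `minima` pairs a local-minimum value with its location; in the paper's setting every such local minimum corresponds to a potential launch opportunity.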
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, B.; Polizzi, E.
2013-05-01
The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H({ψ})ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with respect to the occupied eigenvectors. Using a series of numerical examples and the Kohn-Sham density functional theory model, it will be shown that our approach can outperform traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of initial guess, and a significant reduction of the eigenvalue solve time in simulations.
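For contrast with the FEAST-based scheme, the traditional SCF mixing loop that the abstract benchmarks against can be sketched on a toy problem (an assumed model nonlinearity H(ψ) = H0 + α·diag(|ψ|²), not the paper's Hamiltonian):

```python
import numpy as np

def scf_lowest_state(H0, alpha=0.5, mix=0.3, tol=1e-10, max_iter=500):
    """Plain SCF with linear density mixing for a toy nonlinear
    eigenproblem H(psi) psi = E psi with H(psi) = H0 + alpha*diag(|psi|^2).
    This is the traditional mixing scheme that FEAST-based solvers
    are compared against in the abstract."""
    n = H0.shape[0]
    rho = np.full(n, 1.0 / n)                  # initial density guess
    E, psi = None, None
    for _ in range(max_iter):
        H = H0 + alpha * np.diag(rho)          # Hamiltonian at current density
        w, v = np.linalg.eigh(H)
        E, psi = w[0], v[:, 0]                 # lowest eigenpair
        rho_new = psi ** 2
        if np.linalg.norm(rho_new - rho) < tol:
            return E, psi                      # self-consistency reached
        rho = (1 - mix) * rho + mix * rho_new  # damped (mixed) update
    return E, psi

H0 = np.diag([0.0, 1.0, 2.0]) + 0.1 * (np.ones((3, 3)) - np.eye(3))
E, psi = scf_lowest_state(H0)
```

On this small example the mixing loop converges easily; the abstract's point is that for realistic Hamiltonians such loops can converge slowly or to the wrong solution depending on the initial guess, which the nonlinear FEAST generalization avoids.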
Steady axisymmetric vortex flows with swirl and shear
NASA Astrophysics Data System (ADS)
Elcrat, Alan R.; Fornberg, Bengt; Miller, Kenneth G.
A general procedure is presented for computing axisymmetric swirling vortices which are steady with respect to an inviscid flow that is either uniform at infinity or includes shear. We consider cases both with and without a spherical obstacle. Choices of numerical parameters are given which yield vortex rings with swirl, attached vortices with swirl analogous to spherical vortices found by Moffatt, tubes of vorticity extending to infinity and Beltrami flows. When there is a spherical obstacle we have found multiple solutions for each set of parameters. Flows are found by numerically solving the Bragg-Hawthorne equation using a non-Newton-based iterative procedure which is robust in its dependence on an initial guess.
Resonance transition periodic orbits in the circular restricted three-body problem
NASA Astrophysics Data System (ADS)
Lei, Hanlun; Xu, Bo
2018-04-01
This work studies a special type of cislunar periodic orbits in the circular restricted three-body problem called resonance transition periodic orbits, which switch between different resonances and revolve about the secondary with multiple loops during one period. In the practical computation, families of multiple periodic orbits are identified first, and then the invariant manifolds emanating from the unstable multiple periodic orbits are taken to generate resonant homoclinic connections, which are used to determine the initial guesses for computing the desired periodic orbits by means of a multiple-shooting scheme. The obtained periodic orbits have potential applications for missions requiring long-term continuous observation of the secondary and for tour missions in a multi-body environment.
Global Optimization of N-Maneuver, High-Thrust Trajectories Using Direct Multiple Shooting
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Ellison, Donald H.
2016-01-01
The performance of impulsive, gravity-assist trajectories often improves with the inclusion of one or more maneuvers between flybys. However, grid-based scans over the entire design space can become computationally intractable for even one deep-space maneuver, and few global search routines are capable of an arbitrary number of maneuvers. To address this difficulty a trajectory transcription allowing for any number of maneuvers is developed within a multi-objective, global optimization framework for constrained, multiple gravity-assist trajectories. The formulation exploits a robust shooting scheme and analytic derivatives for computational efficiency. The approach is applied to several complex, interplanetary problems, achieving notable performance without a user-supplied initial guess.
Chan, Alan H S; Chan, Ken W L
2013-02-01
To examine the associations between the guessing performance of 25 pharmaceutical pictograms and five sign features for naïve participants. The effect of prospective-user factors on guessing performance was also investigated. A total of 160 Hong Kong Chinese people, drawn largely from a young student population, guessed the meanings of 25 pharmaceutical pictograms that were generally not familiar to them. Participants then completed a questionnaire about their drug buying and drug label reading habits, and their demographics and medication history. Finally they rated five features (familiarity, concreteness, complexity, meaningfulness, and semantic distance) of the pharmaceutical pictograms using 0-100 scales. For all pharmaceutical pictograms, mean and standard deviation of guessability score were 64.8 and 17.1, respectively. Prospective-user factors of 'occupation', 'age' and 'education level' significantly affected guessing performance. For sign features, semantic closeness was the best predictor of guessability score, followed by simplicity, concreteness, meaningfulness and familiarity. User characteristics and sign features are critical for pharmaceutical pictograms. To be effective, pharmaceutical pictograms should have obvious and direct connections with familiar things and it is recommended that pharmaceutical pictograms should be designed with consideration of the five sign features investigated here. This study provides useful information and recommendations to assist interface designers to create and evaluate icons for pharmaceutical products and to design more user-friendly pharmaceutical pictograms. However, further work is needed to see how older people respond to such pharmaceutical pictograms. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
1991-01-30
(The indexed excerpt for this report is fragmentary: it states that continual education and training at all levels of the company is the most important element in enabling companies to gain competitive advantage, and notes decisions staked on information known to be inaccurate and on educated guesses from the same people who provided much of the original inaccurate information; the remainder is process-characterization task-order scheduling text.)
Autonomous Adaptation and Collaboration of Unmanned Vehicles for Tracking Submerged Contacts
2012-06-01
(The indexed excerpt is a configuration listing for the uFldContactRangeSensor MOOS application: a CRS RANGE REPORT message carrying name, range, target, and time fields; a ping wait parameter giving the time delay between range pulses; AppTick/CommsTick rates; reply and reach distances for the contact "jackal"; and ownship/friend/contact identities ("archie", "betty", "jackal") along with best-guess and average-guess target variables.)
Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra
NASA Astrophysics Data System (ADS)
Khamatnurova, Marina; Gribanov, Konstantin
2016-04-01
The Levenberg-Marquardt method [1] with an iteratively adjusted parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, is developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. The retrieved methane vertical profiles are then used to calculate the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and the temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval for each selected spectrum. A modified version of the software package FIRE-ARMS [3] was used for the numerical experiments. To adjust parameters and validate the method we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. The comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V.184. P.1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its Application to IMG/ADEOS data // JQSRT. 2001. V.68. № 4. P.435-451. 4. http://www.ecmwf.int
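A generic Levenberg-Marquardt iteration with an adaptively adjusted damping parameter, of the kind the abstract builds on, can be sketched in Python (a toy exponential-decay fit stands in for the radiative-transfer forward model; none of this is the authors' retrieval code, and the damping-update constants are illustrative):

```python
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1e-2, tol=1e-10, max_iter=100):
    """Levenberg-Marquardt with a simple adaptive damping rule:
    shrink lambda after a successful step, grow it after a rejected one.
    r(x) returns the residual vector, J(x) its Jacobian."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        res, Jx = r(x), J(x)
        g = Jx.T @ res                           # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        A = Jx.T @ Jx
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        if np.sum(r(x + step) ** 2) < np.sum(res ** 2):
            x, lam = x + step, lam * 0.3         # accept step, relax damping
        else:
            lam *= 2.0                           # reject step, increase damping
    return x

# Toy forward model: y = a * exp(-b * t), fit (a, b) from noiseless data.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
r = lambda x: x[0] * np.exp(-x[1] * t) - y
J = lambda x: np.stack([np.exp(-x[1] * t),
                        -x[0] * t * np.exp(-x[1] * t)], axis=1)
x_hat = levenberg_marquardt(r, J, x0=[1.0, 0.5])
```

The damping parameter interpolates between gradient descent (large lambda, robust far from the solution) and Gauss-Newton (small lambda, fast near the solution), which is why adjusting it iteratively matters for ill-conditioned retrieval problems.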
Uniscale multi-view registration using double dog-leg method
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan
2009-02-01
3D computer models of body anatomy can have many uses in medical research and clinical practice. This paper describes a robust method that uses videos of body anatomy to construct multiple, partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera motion parameters and outlier-free 2D point correspondences for DDL, we also propose a two-stage scheme in which multi-RANSAC with a normalized eight-point algorithm is first performed and then a few iterations of an over-determined five-point algorithm are used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like the iterative closest point (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.
Helping without harming: the instructor's feedback dilemma in debriefing--a case study.
Rudolph, Jenny W; Foldy, Erica Gabrielle; Robinson, Traci; Kendall, Sandy; Taylor, Steven S; Simon, Robert
2013-10-01
Simulation instructors often feel caught in a task-versus-relationship dilemma. They must offer clear feedback on learners' task performance without damaging their relationship with those learners, especially in formative simulation settings. Mastering the skills to resolve this dilemma is crucial for simulation faculty development. We conducted a case study of a debriefer stuck in this task-versus-relationship dilemma. The "2-column case" captures debriefing dialogue and instructor's thoughts and feelings or the "subjective experience." The "learning pathways grid" guides a peer group of faculty in a step-by-step, retrospective analysis of the debriefing. The method uses vivid language to highlight the debriefer's dilemmas and how to surmount them. The instructor's initial approach to managing the task-versus-relationship dilemma included (1) assuming that honest critiques will damage learners, (2) using vague descriptions of learner actions paired with guess-what-I-am-thinking questions, and (3) creating a context she worried would leave learners feeling neither safe nor clear how they could improve. This case study analysis identified things the instructor could do to be more effective including (1) making generous inferences about the learners' qualities, (2) normalizing the challenges posed by the simulation, (3) assuming there are different understandings of what it means to be a team. There are key assumptions and ways of interacting that help instructors resolve the task-versus-relationship dilemma. The instructor can then provide honest feedback in a rigorous yet empathic way to help sustain good or improve suboptimal performance in the future.
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Vesselinov, Velimir V.
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
σ-SCF: A direct energy-targeting method to mean-field excited states
NASA Astrophysics Data System (ADS)
Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D.; Van Voorhis, Troy
2017-12-01
The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry—a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states—ground or excited—are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
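On a finite-dimensional toy Hamiltonian, the energy-targeting idea behind σ-SCF can be illustrated directly: minimizing the variance-style functional ⟨ψ|(H−ω)²|ψ⟩ over normalized ψ selects the eigenstate whose energy lies closest to the guess ω, with no collapse to the ground state. The sketch below is a matrix analogue, not the mean-field implementation; the test Hamiltonian and energy guess are made up.

```python
import numpy as np

def target_state(H, omega):
    """Variance-style energy targeting on a matrix Hamiltonian:
    the minimizer of <psi|(H - omega)^2|psi> over unit psi is the
    eigenstate of H whose energy is closest to the guess omega,
    so any state (ground or excited) is reachable on equal footing."""
    M = H - omega * np.eye(len(H))
    _, V = np.linalg.eigh(M @ M)   # smallest eigenvector minimizes the functional
    psi = V[:, 0]
    E = psi @ H @ psi              # energy of the targeted state
    return E, psi

# Toy Hamiltonian with known spectrum 0, 1, ..., 5 in a random basis.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
H = Q @ np.diag(np.arange(6.0)) @ Q.T
E, psi = target_state(H, omega=2.8)   # guess near the fourth level
```

Because the functional is minimized rather than the energy itself, the optimization cannot slide down to the lowest state of a given symmetry, which is the variational-collapse problem the abstract describes.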
Medicine is not science: guessing the future, predicting the past.
Miller, Clifford
2014-12-01
Irregularity limits human ability to know, understand and predict. A better understanding of irregularity may improve the reliability of knowledge. Irregularity and its consequences for knowledge are considered. Reliable predictive empirical knowledge of the physical world has always been obtained by observation of regularities, without needing science or theory. Prediction from observational knowledge can remain reliable despite some theories based on it proving false. A naïve theory of irregularity is outlined. Reducing irregularity and/or increasing regularity can increase the reliability of knowledge. Beyond long experience and specialization, improvements include implementing supporting knowledge systems of libraries of appropriately classified prior cases and clinical histories and education about expertise, intuition and professional judgement. A consequence of irregularity and complexity is that classical reductionist science cannot provide reliable predictions of the behaviour of complex systems found in nature, including of the human body. Expertise, expert judgement and their exercise appear overarching. Diagnosis involves predicting the past will recur in the current patient applying expertise and intuition from knowledge and experience of previous cases and probabilistic medical theory. Treatment decisions are an educated guess about the future (prognosis). Benefits of the improvements suggested here are likely in fields where paucity of feedback for practitioners limits development of reliable expert diagnostic intuition. Further analysis, definition and classification of irregularity is appropriate. Observing and recording irregularities are initial steps in developing irregularity theory to improve the reliability and extent of knowledge, albeit some forms of irregularity present inherent difficulties. © 2014 John Wiley & Sons, Ltd.
Kappa Group: The initial guess. A proposal in response to a commercial air transportation study
NASA Technical Reports Server (NTRS)
1991-01-01
Kappa Aerospace presents their Aeroworld aircraft, the Initial Guess (IG). This aircraft is designed to generate profit in a market currently controlled by the train and boat industries. The main priority of the design team was to develop an extremely efficient aircraft that could be sold at a reasonable price. The IG offers a quick and safe alternative to the existing means of transportation at a competitive price. The cruise velocity of 28 ft/sec allows all flights to be between 20 and 45 minutes, a remarkable savings in time compared to travel by boat or train. The IG is propelled by a single Astro-05 engine with a Zinger 10-6 propeller. The Astro-05 is not an extremely powerful engine; however, it provides enough thrust to meet the design and safety requirements. The major advantage of the Astro-05 is that it is the most efficient engine available. The fuel efficiency of the Astro-05 is what puts the aircraft ahead of the competition. The money saved by an efficient engine can be passed on as lower ticket prices or increased revenue. The IG has a payload of 56 passengers and a wingspan of 7 ft. The 7 ft wingspan allows the aircraft to fit into the gates of all of the targeted cities. Future endeavors of Kappa Aerospace will include fitting a stretch version of the IG with a larger propulsion system. This derivative aircraft will be able to carry more passengers and will be placed on the routes with the greatest demand for travel. The fuselage and empennage are made of a wooden truss configuration, while the wing is made of a rib/spar configuration. The stress-carrying elements are made of spruce, while the non-stress-carrying elements are made of balsa. The wing is removable for easy access into the fuselage. The easy access to the batteries will keep maintenance costs down.
LPJ-GUESS Simulated North America Vegetation for 21-0 ka Using the TraCE-21ka Climate Simulation
NASA Astrophysics Data System (ADS)
Shafer, S. L.; Bartlein, P. J.
2016-12-01
Transient climate simulations that span multiple millennia (e.g., TraCE-21ka) have become more common as computing power has increased, allowing climate models to complete long simulations in relatively short periods of time (i.e., months). These climate simulations provide information on the potential rate, variability, and spatial expression of past climate changes. They also can be used as input data for other environmental models to simulate transient changes for different components of paleoenvironmental systems, such as vegetation. Long, transient paleovegetation simulations can provide information on a range of ecological processes, describe the spatial and temporal patterns of changes in species distributions, and identify the potential locations of past species refugia. Paleovegetation simulations also can be used to fill in spatial and temporal gaps in observed paleovegetation data (e.g., pollen records from lake sediments) and to test hypotheses of past vegetation change. We used the TraCE-21ka transient climate simulation for 21-0 ka from CCSM3, a coupled atmosphere-ocean general circulation model. The TraCE-21ka simulated temperature, precipitation, and cloud data were regridded onto a 10-minute grid of North America. These regridded climate data, along with soil data and atmospheric carbon dioxide concentrations, were used as input to LPJ-GUESS, a general ecosystem model, to simulate North America vegetation from 21-0 ka. LPJ-GUESS simulates many of the processes controlling the distribution of vegetation (e.g., competition), although some important processes (e.g., dispersal) are not simulated. We evaluate the LPJ-GUESS-simulated vegetation (in the form of plant functional types and biomes) for key time periods and compare the simulated vegetation with observed paleovegetation data, such as data archived in the Neotoma Paleoecology Database. 
In general, vegetation simulated by LPJ-GUESS reproduces the major North America vegetation patterns (e.g., forest, grassland) with regional areas of disagreement between simulated and observed vegetation. We describe the regions and time periods with the greatest data-model agreement and disagreement, and discuss some of the strengths and weaknesses of both the simulated climate and simulated vegetation data.
Loss of information in quantum guessing game
NASA Astrophysics Data System (ADS)
Plesch, Martin; Pivoluska, Matej
2018-02-01
Incompatibility of certain measurements, i.e., the impossibility of obtaining deterministic outcomes simultaneously, is a well known property of quantum mechanics. This feature can be utilized in many contexts, ranging from Bell inequalities to device-independent QKD protocols. Typically, in these applications the measurements are chosen from a predetermined set based on a classical random variable. One can naturally ask whether the non-determinism of the outcomes is due to an intrinsic hiding property of quantum mechanics, or rather to the fact that classical, incoherent information entered the system via the choice of the measurement. Rozpedek et al (2017 New J. Phys. 19 023038) examined this question for the specific case of two mutually unbiased measurements on systems of different dimensions. They have, somewhat surprisingly, shown that in the case of qubits, if the measurements are chosen coherently with the use of a controlled unitary, the outcomes of both measurements can be guessed deterministically. Here we extend their analysis and show that, specifically for qubits, the measurement result for any set of measurements with any a priori probability distribution can be faithfully guessed by a suitable state preparation and measurement. We also show that, up to a small set of specific cases, this is not possible for higher dimensions. This result manifests a deep difference between the properties of qubits and of higher dimensional systems and suggests that the latter might offer higher security in specific cryptographic protocols. More fundamentally, the results show that the impossibility of predicting a result of a measurement is not caused solely by a loss of coherence between the choice of the measurement and the guessing procedure.
Trending in Pc Measurements via a Bayesian Zero-Inflated Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matthew; Stamey, James
2015-01-01
Two satellites predicted to come within close proximity of one another are usually a high-value satellite and a piece of space debris. Moving the active satellite is a means of reducing collision risk, but it reduces satellite lifetime, perturbs the satellite's mission, and introduces its own risks. It is therefore important to obtain a good statement of the risk of collision in order to determine whether a maneuver is truly necessary. Two aspects of this are the calculation of the probability of collision (Pc) based on the most recent set of position, velocity, and uncertainty data for both satellites, and the examination of changes in the Pc value as the event develops. Events should follow a canonical development (Pc vs. time to closest approach (TCA)), so it is helpful to be able to estimate where the present data point fits in the canonical development in order to guide the operational response.
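A common way to compute a single-epoch Pc of the kind described above is as a 2-D integral in the encounter plane: the mass of the combined relative-position Gaussian that falls inside the hard-body circle. A minimal Monte Carlo sketch follows (the miss distance, covariance, and hard-body radius are made-up illustrative values, not from the abstract):

```python
import numpy as np

def pc_monte_carlo(mu, cov, hbr, n=200_000, seed=0):
    """Monte Carlo estimate of the 2-D encounter-plane collision
    probability: draw samples of the relative miss vector from the
    combined Gaussian and count the fraction landing inside the
    hard-body radius circle."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(mu, cov, size=n)
    return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < hbr)

# Hypothetical event: 100 m miss distance, isotropic 50 m sigma,
# 20 m combined hard-body radius.
pc = pc_monte_carlo(mu=[100.0, 0.0],
                    cov=[[2500.0, 0.0], [0.0, 2500.0]],
                    hbr=20.0)
```

Recomputing this integral as new tracking data arrive produces the time series of Pc versus TCA whose canonical development the abstract discusses.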
Communicating Uncertain News in Cancer Consultations.
Alby, Francesca; Zucchermaglio, Cristina; Fatigante, Marilena
2017-12-01
In cancer communication, most of the literature is in the realm of delivering bad news while much less attention has been given to the communication of uncertain news around the diagnosis and the possible outcomes of the illness. Drawing on video-recorded cancer consultations collected in two Italian hospitals, this article analyzes three communication practices used by oncologists to interactionally manage the uncertainty during the visit: alternating between uncertain bad news and certain good news, anticipating scenarios, and guessing test results. Both diagnostic and personal uncertainties are not hidden to the patient, yet they are reduced through these practices. Such communication practices are present in 32 % of the visits in the data set, indicating that the interactional management of uncertainty is a relevant phenomenon in oncological encounters. Further studies are needed to improve both its understanding and its teaching.
Elementary school children's cheating behavior and its cognitive correlates.
Ding, Xiao Pan; Omrin, Danielle S; Evans, Angela D; Fu, Genyue; Chen, Guopeng; Lee, Kang
2014-05-01
Elementary school children's cheating behavior and its cognitive correlates were investigated using a guessing game. Children (n=95) between 8 and 12 years of age were asked to guess which side of the screen a coin would appear on and received rewards based on their self-reported accuracy. Children's cheating behavior was measured by examining whether children failed to adhere to the game rules by falsely reporting their accuracy. Children's theory-of-mind understanding and executive functioning skills were also assessed. The majority of children cheated during the guessing game, and cheating behavior decreased with age. Children with better working memory and inhibitory control were less likely to cheat. However, among the cheaters, those with greater cognitive flexibility used more tactics while cheating. Results revealed the unique role that executive functioning plays in children's cheating behavior: like a double-edged sword, executive functioning can inhibit children's cheating behavior on the one hand, while promoting the sophistication of children's cheating tactics on the other. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Logical synchronization: how evidence and hypotheses steer atomic clocks
NASA Astrophysics Data System (ADS)
Myers, John M.; Madjid, F. Hadi
2014-05-01
A clock steps a computer through a cycle of phases. For the propagation of logical symbols from one computer to another, each computer must mesh its phases with arrivals of symbols from other computers. Even the best atomic clocks drift unforeseeably in frequency and phase; feedback steers them toward aiming points that depend on a chosen wave function and on hypotheses about signal propagation. A wave function, always under-determined by evidence, requires a guess. Guessed wave functions are coded into computers that steer atomic clocks in frequency and position—clocks that step computers through their phases of computations, as well as clocks, some on space vehicles, that supply evidence of the propagation of signals. Recognizing the dependence of the phasing of symbol arrivals on guesses about signal propagation elevates 'logical synchronization' from its practice in computer engineering to a discipline essential to physics. Within this discipline we begin to explore questions invisible under any concept of time that fails to acknowledge the unforeseeable. In particular, variation of spacetime curvature is shown to limit the bit rate of logical communication.
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
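The estimating-equation setup described above can be sketched in a few lines. The choices below (a Rasch model, a Huber-type weight function, and the logit distance θ−b as the residual measure, solved by bisection) are illustrative assumptions on our part, not the paper's exact specification of the estimator class.

```python
import math

def p_correct(theta, b):
    """Rasch model probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def huber_weight(r, k=1.0):
    """Huber-type weight: full weight inside [-k, k], downweighted outside,
    so aberrant responses far from the ability estimate count less."""
    return 1.0 if abs(r) <= k else k / abs(r)

def robust_score(theta, responses, difficulties, k=1.0):
    """Robust estimating equation: weighted sum of score residuals x - P(theta)."""
    s = 0.0
    for x, b in zip(responses, difficulties):
        r = theta - b  # residual measure: logit distance (one possible choice)
        s += huber_weight(r, k) * (x - p_correct(theta, b))
    return s

def solve_theta(responses, difficulties, lo=-6.0, hi=6.0, tol=1e-8):
    """Bisection on the estimating equation (sign change assumed on [lo, hi])."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if robust_score(mid, responses, difficulties) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

With the constant weight w ≡ 1 this reduces to the ordinary ML score equation, which is how the robust family generalizes the classical estimator.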
NASA Astrophysics Data System (ADS)
Ziaei, Vafa; Bredow, Thomas
2017-11-01
We propose a simple many-body based screening mixing strategy to considerably enhance the performance of the Bethe-Salpeter equation (BSE) approach for prediction of excitation energies of molecular systems. This strategy enables us to closely reproduce results of highly correlated equation of motion coupled cluster singles and doubles (EOM-CCSD) through optimal use of cancellation effects. We start from the Hartree-Fock (HF) reference state and take advantage of local density approximation (LDA) based random phase approximation (RPA) screening, denoted as W0-RPA@LDA with W0 as the dynamically screened interaction built upon LDA wave functions and energies. We further use this W0-RPA@LDA screening as an initial screening guess for calculation of quasiparticle energies in the framework of G0W0@HF. The W0-RPA@LDA screening is further injected into the BSE. By applying such an approach on a set of 22 molecules for which the traditional GW/BSE approaches fail, we observe good agreement with respect to EOM-CCSD references. The reason for the observed good accuracy of this mixing ansatz (scheme A) lies in an optimal damping of the HF exchange effect through the strong W0-RPA@LDA screening, leading to a substantial decrease of the typically overestimated HF electronic gap, and hence to better excitation energies. Further, we present a second multiscreening ansatz (scheme B), which is similar to scheme A with the exception that now the W0-RPA@HF screening is used in the BSE in order to further improve the overestimated excitation energies of carbonyl sulfide (COS) and disilane (Si2H6). The reason for the improvement of the excitation energies in scheme B lies in the fact that W0-RPA@HF screening is less effective (and weaker than W0-RPA@LDA), which gives rise to stronger electron-hole effects in the BSE.
NASA Technical Reports Server (NTRS)
Bailey, Harry E.; Beam, Richard M.
1991-01-01
Finite-difference approximations for steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are presently solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
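The approximate-Newton idea of freezing the Jacobian can be illustrated on a small nonlinear system; the 2x2 model problem and refresh schedule below are a minimal sketch, not the paper's airfoil solver.

```python
import math

def newton_frozen(F, J, x0, refreeze_every=5, tol=1e-12, max_iter=100):
    """Approximate Newton for a 2x2 system: reuse (freeze) the Jacobian for
    several steps, trading per-step convergence rate for fewer Jacobian
    evaluations and factorizations."""
    x, y = x0
    inv = None
    for k in range(max_iter):
        if k % refreeze_every == 0:
            a, b, c, d = J(x, y)                 # refresh the 2x2 Jacobian
            det = a * d - b * c
            inv = (d / det, -b / det, -c / det, a / det)
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        i11, i12, i21, i22 = inv
        x -= i11 * f1 + i12 * f2                 # x_{k+1} = x_k - J_frozen^{-1} F(x_k)
        y -= i21 * f1 + i22 * f2
    return x, y

# Model problem: intersect the circle x^2 + y^2 = 4 with the line x = y.
F = lambda x, y: (x * x + y * y - 4.0, x - y)
J = lambda x, y: (2 * x, 2 * y, 1.0, -1.0)       # row-major Jacobian entries
root = newton_frozen(F, J, (1.0, 1.0))
```

Between refreshes the iteration converges only linearly, but each refresh restores a Newton-like step, which is the trade-off the abstract refers to.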
Automated aberration correction of arbitrary laser modes in high numerical aperture systems.
Hering, Julian; Waller, Erik H; Von Freymann, Georg
2016-12-12
Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing to create well defined doughnut modes, bottle beams or multi-foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser modes in a high numerical aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for amplitude and phase of the pupil function, our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography, applications such as optical tweezers and microscopy might also benefit from the presented method.
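The core Gerchberg-Saxton loop alternates between the pupil and focal planes, imposing the known amplitude in each while keeping the computed phase. The 1D pure-Python sketch below (flat pupil amplitude, DFT as the propagator, random first guess) illustrates the iteration only; the actual system is 2D with the modifications the paper describes.

```python
import cmath
import random

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)) / n
            for j in range(n)]

def gerchberg_saxton(target_amp, source_amp, n_iter=200, seed=0):
    """Find a pupil phase so that |DFT(source_amp * e^{i*phase})| approaches
    target_amp, starting from a random phase guess."""
    rng = random.Random(seed)
    phase = [rng.uniform(-cmath.pi, cmath.pi) for _ in source_amp]
    for _ in range(n_iter):
        far = dft([a * cmath.exp(1j * p) for a, p in zip(source_amp, phase)])
        far = [t * cmath.exp(1j * cmath.phase(F))        # impose target amplitude
               for t, F in zip(target_amp, far)]
        near = idft(far)
        phase = [cmath.phase(v) for v in near]           # keep only the phase
    return phase
```

A useful property of the loop is that the amplitude mismatch in the focal plane is non-increasing from one iteration to the next, which is why a reasonable first guess speeds convergence but is not strictly required.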
On the Convenience of Using the Complete Linearization Method in Modelling the BLR of AGN
NASA Astrophysics Data System (ADS)
Patriarchi, P.; Perinotto, M.
The Complete Linearization Method (Mihalas, 1978) consists in determining the radiation field (at a set of frequency points), atomic level populations, temperature, electron density, etc., by solving the system of radiative transfer, thermal equilibrium and statistical equilibrium equations simultaneously and self-consistently. Since the system is not linear, it must be solved by iteration after linearization, using a perturbative method and starting from an initial guess solution. The Complete Linearization Method is, of course, more time consuming than simpler approaches. But how great can this disadvantage be in the age of supercomputers? It is possible to approximately evaluate the CPU time needed to run a model by computing the number of multiplications necessary to solve the system.
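The operation-count estimate can be sketched as follows. The cost model here is our own back-of-the-envelope assumption (dense elimination of one block of unknowns per depth point, with the unknowns being the radiation field at each frequency plus level populations, temperature and electron density), not the paper's actual accounting.

```python
def cpu_time_estimate(n_freq, n_levels, n_depth, n_iter, flops_per_sec=1e9):
    """Rough CPU-time estimate for a complete-linearization run: solving the
    linearized block at each depth point costs O(m^3) multiplications for an
    m x m dense block, repeated over depth points and iterations."""
    m = n_freq + n_levels + 2          # unknowns per depth: J_nu's, populations, T, n_e
    mults = n_iter * n_depth * m ** 3  # dominant cost: dense block elimination
    return mults / flops_per_sec
```

For example, 100 frequencies, 10 levels, 50 depth points and 20 iterations comes out to roughly a second at 1 Gflop/s, which makes the abstract's point: on modern hardware the extra cost need not be prohibitive.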
Vision based speed breaker detection for autonomous vehicle
NASA Astrophysics Data System (ADS)
C. S., Arvind; Mishra, Ritesh; Vishal, Kumar; Gundimeda, Venugopal
2018-04-01
In this paper, we present a robust, real-time, vision-based approach to detect speed breakers in urban environments for autonomous vehicles. Our method is designed to detect the speed breaker using visual inputs obtained from a camera mounted on top of a vehicle. The method performs inverse perspective mapping to generate a top view of the road and segments out the region of interest based on difference-of-Gaussian and median-filtered images. Furthermore, the algorithm performs RANSAC line fitting to identify the possible speed breaker candidate region. This initial guess region, obtained via RANSAC, is then validated using a support vector machine. Our algorithm can detect different categories of speed breakers on cement, asphalt and interlock roads under various conditions and achieves a recall of 0.98.
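The RANSAC line-fitting step that produces the candidate region can be sketched generically: repeatedly fit a line through a random pair of points and keep the candidate supported by the most inliers. This is a textbook RANSAC sketch, not the paper's implementation, and the SVM validation stage is omitted.

```python
import random

def ransac_line(points, n_iter=200, inlier_tol=1.0, seed=0):
    """Fit a 2D line a*x + b*y + c = 0 by RANSAC: sample point pairs, count
    inliers within inlier_tol of each candidate line, keep the best."""
    rng = random.Random(seed)
    best_line, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2                 # normal of the line through the pair
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue                            # degenerate sample (coincident points)
        a, b = a / norm, b / norm               # unit normal => signed distance is a*x+b*y+c
        c = -(a * x1 + b * y1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers
```

In a pipeline like the one described, the returned inlier set would delimit the candidate speed-breaker region handed to the classifier.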
Evaluative understanding and role-taking ability: a comparison of deaf and hearing children.
Kusché, C A; Greenberg, M T
1983-02-01
The purposes of this study were (1) to evaluate the growth of social-cognitive knowledge in deaf and hearing children during the early and middle school years and (2) to assess the relative importance of language in 2 domains of social cognition. This study separately examined the child's ability to (1) evaluate the concepts of good and bad and (2) take another person's perspective. Subjects consisted of 30 deaf and 30 hearing children divided into 3 developmental levels (52 months, 74 months, and 119 months old). For the good/bad evaluation test, each child was shown 12 sets of multiple-choice pictures. Each set had 4 alternatives, which included 1 good, 1 bad, or all neutral activities. Role-taking ability was evaluated through the child's choice of strategy in a binary-choice hiding/guessing game. The results showed that deaf children evidence a developmental delay in the understanding of the concepts of good and bad. With regard to role-taking ability, there appears to be a developmental delay with young deaf children, which is no longer apparent by the age of 6. The assumption of egocentrism in school-age deaf children frequently found in the literature thus appears to be misleading. It is not that these deaf children are unable to take another person's perspective, but rather that they are delayed in evaluative understanding. The results suggest that language is of varying importance in differing domains of social and personality development.
Feedback-related brain activity predicts learning from feedback in multiple-choice testing.
Ernst, Benjamin; Steinhauser, Marco
2012-06-01
Different event-related potentials (ERPs) have been shown to correlate with learning from feedback in decision-making tasks and with learning in explicit memory tasks. In the present study, we investigated which ERPs predict learning from corrective feedback in a multiple-choice test, which combines elements from both paradigms. Participants worked through sets of multiple-choice items of a Swahili-German vocabulary task. Whereas the initial presentation of an item required the participants to guess the answer, corrective feedback could be used to learn the correct response. Initial analyses revealed that corrective feedback elicited components related to reinforcement learning (FRN), as well as to explicit memory processing (P300) and attention (early frontal positivity). However, only the P300 and early frontal positivity were positively correlated with successful learning from corrective feedback, whereas the FRN was even larger when learning failed. These results suggest that learning from corrective feedback crucially relies on explicit memory processing and attentional orienting to corrective feedback, rather than on reinforcement learning.
NASA Astrophysics Data System (ADS)
Xu, Xue-song
2014-12-01
Under complex currents, the governing equations of motion of marine cables are complex and nonlinear, and the calculation of cable configuration and tension becomes difficult compared with that under uniform or simple currents. To obtain numerical results, the usual Newton-Raphson iteration is often adopted, but its stability depends on the initial guess for the solution of the governing equations. To improve the stability of the numerical calculation, this paper proposes a separated particle swarm optimization, in which the variables are separated into several groups and the dimension of the search space is reduced to facilitate the particle swarm optimization. Via the separated particle swarm optimization, these governing nonlinear equations can be solved successfully from any initial solution, and the process of numerical calculation is very stable. For the calculation of cable configuration and tension of marine cables under complex currents, the proposed separated particle swarm optimization is more effective than other particle swarm optimizations.
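The separation idea can be sketched as a block-coordinate PSO: split the variables into groups and run a low-dimensional swarm on each group in turn while the others are held fixed, so each swarm searches a much smaller space. The structure and PSO coefficients below are illustrative assumptions; the paper's grouping for cable equations is not reproduced here.

```python
import random

def separated_pso(f, groups, bounds, n_particles=20, n_outer=10, n_inner=30, seed=0):
    """Minimize f by running PSO on each variable group in turn."""
    rng = random.Random(seed)
    x = [0.5 * (lo + hi) for lo, hi in bounds]   # current full solution
    for _ in range(n_outer):
        for group in groups:
            x = _pso_on_group(f, x, group, bounds, n_particles, n_inner, rng)
    return x, f(x)

def _pso_on_group(f, x, group, bounds, n_particles, n_iter, rng):
    def eval_sub(sub):                            # evaluate f with only this group varied
        y = list(x)
        for gi, v in zip(group, sub):
            y[gi] = v
        return f(y)
    pos = [[rng.uniform(*bounds[gi]) for gi in group] for _ in range(n_particles)]
    vel = [[0.0] * len(group) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [eval_sub(p) for p in pos]
    g_i = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g_i][:], pval[g_i]
    cur = [x[gi] for gi in group]                 # seed with the incoming solution
    if eval_sub(cur) < gval:
        gbest, gval = cur[:], eval_sub(cur)
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(len(group)):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = eval_sub(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    y = list(x)
    for gi, v in zip(group, gbest):
        y[gi] = v
    return y
```

For solving a nonlinear system F(x) = 0 as in the abstract, f would be a residual norm such as sum of squared equation residuals, so the global minimum value zero corresponds to a solution.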
Bandinelli, Francesca; Milla, Monica; Genise, Stefania; Giovannini, Leonardo; Bagnoli, Siro; Candelieri, Antonio; Collaku, Ledio; Biagini, Silvia; Cerinic, Marco Matucci
2011-07-01
To investigate the presence of lower limb entheseal abnormalities in IBD patients without clinical signs and symptoms of SpA and their correlation with IBD clinical variables. A total of 81 IBD patients [55 Crohn's disease (CD) and 26 ulcerative colitis (UC), 43 females and 38 males, mean age 41.3 (12.4) years, BMI 24 (2)] with low active (12) and inactive (67) disease were consecutively studied with US (LOGIQ5 General Electric 10-MHz linear array transducer) of lower limb entheses and compared with 40 healthy controls matched for sex, age and BMI. Quadriceps, patellar, Achilles and plantar fascia entheses were scored according to the 0-36 Glasgow Ultrasound Enthesitis Scoring System (GUESS) and power Doppler (PD). Correlations of GUESS and PD with IBD features [duration, type (CD/UC) and activity (disease activity index for CD/Truelove score for UC)] were investigated. The intra- and inter-reader agreements for US were estimated in all images detected in patients and controls. Of the 81 patients, 71 (92.6%) presented at least one tendon alteration, with mean GUESS 5.1 (3.5): 81.5% thickening (higher than controls, P < 0.05), 67.9% enthesophytosis, 27.1% bursitis and 16.1% erosions. PD was positive in 13/81 (16%) patients. In controls, US showed only enthesophytes (5%) and no PD. GUESS and PD were independent of duration, activity or type (CD/UC) of IBD. The intra- and inter-reader agreements were high (intra-class correlation coefficient >0.9). US entheseal abnormalities are present in IBD patients without clinical signs and symptoms of SpA. US enthesopathy is independent of activity, duration and type of gut disease.
Durning, Steven J; Graner, John; Artino, Anthony R; Pangaro, Louis N; Beckman, Thomas; Holmboe, Eric; Oakes, Terrance; Roy, Michael; Riedy, Gerard; Capaldi, Vincent; Walter, Robert; van der Vleuten, Cees; Schuwirth, Lambert
2012-09-01
Clinical reasoning is essential to medical practice, but because it entails internal mental processes, it is difficult to assess. Functional magnetic resonance imaging (fMRI) and think-aloud protocols may improve understanding of clinical reasoning as these methods can more directly assess these processes. The objective of our study was to use a combination of fMRI and think-aloud procedures to examine fMRI correlates of a leading theoretical model in clinical reasoning based on experimental findings to date: analytic (i.e., actively comparing and contrasting diagnostic entities) and nonanalytic (i.e., pattern recognition) reasoning. We hypothesized that there would be functional neuroimaging differences between analytic and nonanalytic reasoning. Seventeen board-certified experts in internal medicine answered and reflected on validated U.S. Medical Licensing Exam and American Board of Internal Medicine multiple-choice questions (easy and difficult) during an fMRI scan. This procedure was followed by completion of a formal think-aloud procedure. fMRI findings provide some support for the presence of analytic and nonanalytic reasoning systems. Statistically significant activation of prefrontal cortex distinguished answering incorrectly versus correctly (p < 0.01), whereas activation of precuneus and midtemporal gyrus distinguished not guessing from guessing (p < 0.01). We found limited fMRI evidence to support analytic and nonanalytic reasoning theory, as our results indicate functional differences with correct vs. incorrect answers and guessing vs. not guessing. However, our findings did not suggest one consistent fMRI activation pattern of internal medicine expertise. This model of employing fMRI correlates offers opportunities to enhance our understanding of theory, as well as improve our teaching and assessment of clinical reasoning, a key outcome of medical education.
NASA Astrophysics Data System (ADS)
Morrow, Rosemary; de Mey, Pierre
1995-12-01
The flow characteristics in the region of the Azores Current are investigated by assimilating TOPEX/POSEIDON and ERS 1 altimeter data into the multilevel Harvard quasigeostrophic (QG) model with open boundaries (Miller et al., 1983) using an adjoint variational scheme (Moore, 1991). The study site lies in the path of the Azores Current, where a branch retroflects to the south in the vicinity of the Madeira Rise. The region was the site of an intensive field program in 1993, SEMAPHORE. We had two main aims in this adjoint assimilation project. The first was to see whether the adjoint method could be applied locally to optimize an initial guess field, derived from the continuous assimilation of altimetry data using optimal interpolation (OI). The second aim was to assimilate a variety of different data sets and evaluate their importance in constraining our QG model. The adjoint assimilation of surface data was effective in optimizing the initial conditions from OI. After 20 iterations the cost function was generally reduced by 50-80%, depending on the chosen data constraints. The primary adjustment process was via the barotropic mode. Altimetry proved to be a good constraint on the variable flow field, in particular, for constraining the barotropic field. The excellent data quality of the TOPEX/POSEIDON (T/P) altimeter data provided smooth and reliable forcing; but for our mesoscale study in a region of long decorrelation times O(30 days), the spatial coverage from the combined T/P and ERS 1 data sets was more important for constraining the solution and providing stable flow at all levels. Surface drifters provided an excellent constraint on both the barotropic and baroclinic model fields. More importantly, the drifters provided a reliable measure of the mean field. Hydrographic data were also applied as a constraint; in general, hydrography provided a weak but effective constraint on the vertical Rossby modes in the model.
Finally, forecasts run over a 2-month period indicate that the initial conditions optimized by the 20-day adjoint assimilation provide more stable, longer-term forecasts.
Being Sherlock Holmes: Can we sense empathy from a brief sample of behaviour?
Wu, Wenjie; Sheppard, Elizabeth; Mitchell, Peter
2016-02-01
Mentalizing (otherwise known as 'theory of mind') involves a special process that is adapted for predicting and explaining the behaviour of others (targets) based on inferences about targets' beliefs and character. This research investigated how well participants made inferences about an especially apposite aspect of character, empathy. Participants were invited to make inferences of self-rated empathy after watching or listening to an unfamiliar target for a few seconds telling a scripted joke (or answering questions about him/herself or reading aloud a paragraph of promotional material). Across three studies, participants were good at identifying targets with low and high self-rated empathy but not good at identifying those who are average. Such inferences, especially of high self-rated empathy, seemed to be based mainly on clues in the target's behaviour, presented either in a video, a still photograph or in an audio track. However, participants were not as effective in guessing which targets had low or average self-rated empathy from a still photograph showing a neutral pose or from an audio track. We conclude with discussion of the scope and the adaptive value of this inferential ability. © 2016 The British Psychological Society.
Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew
2016-01-01
Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations.
Our work is a first step toward the inclusion of fluctuations into the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach. PMID:27497546
Hansen, Scott K.; Vesselinov, Velimir Valentinov
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
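The multiple-initial-guess strategy mentioned at the end is a generic multi-start pattern: launch a local search from several random starting points and keep the best result, so a single poor starting guess cannot trap the estimate in a bad local minimum. The 1D objective and crude derivative-free descent below are toy illustrations, not the study's inversion machinery.

```python
import random

def multi_start_minimize(f, bounds, n_starts=10, seed=0):
    """Run a local search from several random starting points; keep the best."""
    rng = random.Random(seed)
    best_x, best_v = None, float("inf")
    for _ in range(n_starts):
        x = _local_descent(f, rng.uniform(*bounds), bounds)
        if f(x) < best_v:
            best_x, best_v = x, f(x)
    return best_x, best_v

def _local_descent(f, x, bounds, step=0.1, n_iter=200):
    """Crude derivative-free descent: step left or right if it improves,
    halve the step when stuck."""
    lo, hi = bounds
    for _ in range(n_iter):
        moved = False
        for d in (step, -step):
            y = min(max(x + d, lo), hi)
            if f(y) < f(x):
                x, moved = y, True
                break
        if not moved:
            step *= 0.5
    return x
```

On a two-well objective such as f(x) = (x² − 4)² + x, a single start may settle in the shallower well near x = 2, whereas the multi-start run recovers the deeper minimum near x = −2.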
Optimization of Low-Thrust Spiral Trajectories by Collocation
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Dankanich, John W.
2012-01-01
As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
SASS wind ambiguity removal by direct minimization. [Seasat-A satellite scatterometer
NASA Technical Reports Server (NTRS)
Hoffman, R. N.
1982-01-01
An objective analysis procedure is presented which combines Seasat-A satellite scatterometer (SASS) data with other available data on wind speeds by minimizing an objective function of gridded wind speed values. The objective function is defined as the sum of loss functions for the SASS velocity data, the forecast, the SASS velocity magnitude data, and conventional wind speed data. Only aliases closest to the analysis were included, and a method is introduced for improving the first guess by using a minimization technique while slowly changing the parameters of the problem. The model is employed to predict the wind field for the North Atlantic on Sept. 10, 1978. Dealiased SASS data are compared with available ship readings, showing good agreement between the SASS dealiased winds and the winds measured at the surface. Expansion of the model to incorporate low-level cloud measurements, pressure data, and convergence and cloud level data correlations is discussed.
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
Password selection has long been a difficult issue; traditionally, passwords are either assigned by the computer or chosen by the user. When the computer does the assignment, the passwords are often hard to remember; when the user makes the selection, the passwords are often easy to guess. This paper describes a technique, and a mechanism, to allow users to select passwords which to them are easy to remember but to others would be very difficult to guess. The technique is site, user, and group compatible, and allows rapid changing of constraints imposed upon the password. Although experience with this technique is limited, it appears to have much promise.
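The flavor of such a constraint-based proactive checker can be sketched as follows. The specific rules, names and word list below are our own illustrative examples and do not reproduce Bishop's actual mechanism; the point is that rules are data, so a site can change its constraints rapidly without changing the checker.

```python
import re

# Illustrative site-configurable rules: each one rejects a class of easily
# guessed passwords. Every rule and name here is a hypothetical example.
RULES = [
    (lambda pw, user: len(pw) >= 8, "too short"),
    (lambda pw, user: not pw.isdigit(), "all digits"),
    (lambda pw, user: user.lower() not in pw.lower(), "contains user name"),
    (lambda pw, user: not re.fullmatch(r"[a-z]+", pw), "single-case letters only"),
    (lambda pw, user: pw.lower() not in {"password", "qwerty123", "letmein1"},
     "in common-password list"),
]

def check_password(pw, user):
    """Return the reasons a candidate password is too guessable
    (an empty list means the candidate passes every rule)."""
    return [reason for rule, reason in RULES if not rule(pw, user)]
```

A real deployment would back the last rule with a dictionary of leaked passwords and add per-user and per-group constraints, which is the site/user/group compatibility the abstract refers to.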
Quantum gambling using two nonorthogonal states
NASA Astrophysics Data System (ADS)
Hwang, Won Young; Ahn, Doyeol; Hwang, Sung Woo
2001-12-01
We give a (remote) quantum-gambling scheme that makes use of the fact that quantum nonorthogonal states cannot be distinguished with certainty. In the proposed scheme, two participants Alice and Bob can be regarded as playing a game of making guesses on identities of quantum states that are in one of two given nonorthogonal states: if Bob makes a correct (an incorrect) guess on the identity of a quantum state that Alice has sent, he wins (loses). It is shown that the proposed scheme is secure against the nonentanglement attack. It can also be shown heuristically that the scheme is secure in the case of the entanglement attack.
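The impossibility of perfectly distinguishing two nonorthogonal states is quantitative: for two equiprobable pure states with inner-product magnitude |⟨a|b⟩|, the optimal guessing probability is the Helstrom bound, ½(1 + √(1 − |⟨a|b⟩|²)). This standard result (not derived in the abstract itself) is what caps Bob's winning odds:

```python
import math

def helstrom_success(overlap):
    """Optimal probability of correctly guessing which of two equiprobable
    pure states was sent, given the inner-product magnitude |<a|b>|."""
    return 0.5 * (1.0 + math.sqrt(1.0 - overlap ** 2))
```

Orthogonal states (overlap 0) give probability 1, identical states (overlap 1) give 1/2, i.e. pure guessing; any intermediate overlap leaves an irreducible error probability that the gambling scheme exploits.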
An adaptive procedure for defect identification problems in elasticity
NASA Astrophysics Data System (ADS)
Gutiérrez, Sergio; Mura, J.
2010-07-01
In the context of inverse problems in mechanics, it is well known that the most typical situation is that neither the interior nor all the boundary is available to obtain data to detect the presence of inclusions or defects. We propose here an adaptive method that uses loads and measures of displacements only on part of the surface of the body, to detect defects in the interior of an elastic body. The method is based on Small Amplitude Homogenization, that is, we work under the assumption that the contrast on the values of the Lamé elastic coefficients between the defect and the matrix is not very large. The idea is that given the data for one loading state and one location of the displacement sensors, we use an optimization method to obtain a guess for the location of the inclusion and then, using this guess, we adapt the position of the sensors and the loading zone, hoping to refine the current guess. Numerical results show that the method is quite efficient in some cases, using in those cases no more than three loading positions and three different positions of the sensors.
Peer norm guesses and self-reported attitudes towards performance-related pay.
Georgantzis, Nikolaos; Vasileiou, Efi; Kotzaivazoglou, Iordanis
2017-01-01
Due to a variety of reasons, people see themselves differently from how they see others. This basic asymmetry has broad consequences. It leads people to judge themselves and their own behavior differently from how they judge others and others' behavior. This research, first, studies the perceptions and attitudes of Greek Public Sector employees towards the introduction of Performance-Related Pay (PRP) systems trying to reveal whether there is a divergence between individual attitudes and guesses on peers' attitudes. Secondly, it is investigated whether divergence between own self-reported and peer norm guesses could mediate the acceptance of the aforementioned implementation once job status has been controlled for. This study uses a unique questionnaire of 520 observations which was designed to address the questions outlined in the preceding lines. Our econometric results indicate that workers have heterogeneous attitudes and hold heterogeneous beliefs on others' expectations regarding a successful implementation of PRP. Specifically, individual perceptions are less skeptical towards PRP than are beliefs on others' attitudes. Additionally, we found that managers are significantly more optimistic than lower rank employees regarding the expected success of PRP systems in their jobs. However, they both expect their peers to be more negative than they themselves are.
Initial value problem of space dynamics in universal Stumpff anomaly
NASA Astrophysics Data System (ADS)
Sharaf, M. A.; Dwidar, H. R.
2018-05-01
In this paper, the initial value problem of space dynamics in the universal Stumpff anomaly ψ is set up and developed analytically and computationally. For the analytical expansions, the linear independence of the functions U_{j}(ψ;σ), j = 0, 1, 2, 3, is proved. The differential and recurrence equations they satisfy, and their relations with the elementary functions, are given. The universal Kepler equation and its validations for different conic orbits are established, together with the Lagrangian coefficients. Efficient representations of these functions are developed in terms of continued fractions. For the computational developments we consider the following items: 1.
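The universal functions U_j are, in standard formulations, built from the classical Stumpff functions c_k. A minimal sketch of the Stumpff series and the recurrence relation alluded to in the abstract (the paper's exact U_j(ψ;σ) normalization may differ):

```python
from math import cos, factorial, sin

def stumpff(k, z, terms=30):
    """Stumpff function c_k(z) = sum_{i>=0} (-z)^i / (k + 2i)! (series form).
    For z > 0, c_0(z) = cos(sqrt(z)) and c_1(z) = sin(sqrt(z))/sqrt(z).
    Successive orders satisfy the recurrence c_k(z) = 1/k! - z * c_{k+2}(z)."""
    return sum((-z) ** i / factorial(k + 2 * i) for i in range(terms))
```

The recurrence makes higher orders cheap to evaluate once two of them are known, which is one reason these functions are convenient for a universal formulation valid across elliptic, parabolic, and hyperbolic orbits.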
Mass breakdown model of solar-photon sail shuttle: The case for Mars
NASA Astrophysics Data System (ADS)
Vulpetti, Giovanni; Circi, Christian
2016-02-01
The main aim of this paper is to set up a many-parameter model of mass breakdown to be applied to a reusable Earth-Mars-Earth solar-photon sail shuttle, and analyze the system behavior in two sub-problems: (1) the zero-payload shuttle, and (2) given the sailcraft sail loading and the gross payload mass, find the sail area of the shuttle. The solution to the subproblem-1 is of technological and programmatic importance. The general analysis of subproblem-2 is presented as a function of the sail side length, system mass, sail loading and thickness. In addition to the behaviors of the main system masses, useful information for future work on the sailcraft trajectory optimization is obtained via (a) a detailed mass model for the descent/ascent Martian Excursion Module, and (b) the fifty-fifty solution to the sailcraft sail loading breakdown equation. Of considerable importance is the evaluation of the minimum altitude for the rendezvous between the ascent rocket vehicle and the solar-photon sail propulsion module, a task performed via the Mars Climate Database 2014-2015. The analysis shows that such altitude is 300 km; below it, the atmospheric drag prevails over the solar-radiation thrust. By this value, an example of excursion module of 1500 kg in total mass is built, and the sailcraft sail loading and the return payload are calculated. Finally, the concept of launch opportunity-wide for a shuttle driven by solar-photon sail is introduced. The previous fifty-fifty solution may be a good initial guess for the trajectory optimization of this type of shuttle.
Numerical phase retrieval from beam intensity measurements in three planes
NASA Astrophysics Data System (ADS)
Bruel, Laurent
2003-05-01
A system and method have been developed at CEA to retrieve phase information from multiple intensity measurements along a laser beam. The device has been patented. Commonly used devices for beam measurement provide phase and intensity information separately or with rather poor resolution, whereas the MIROMA method provides both at the same time, allowing direct use of the results in numerical models. Usual phase retrieval algorithms use two intensity measurements, typically the image plane and the focal plane (the Gerchberg-Saxton algorithm), related by a Fourier transform, or the image plane and a slightly defocused plane (D.L. Misell). The principal drawback of such iterative algorithms is their inability to converge unambiguously in all situations: they can stagnate on bad solutions, leaving an unacceptable error between measured and calculated intensities. If three planes rather than two are used, the resulting data redundancy confers on the method good convergence and noise immunity. It provides an excellent agreement between the intensity determined from the retrieved phase data set in the image plane and intensity measurements in any diffraction plane. The method employed for MIROMA is inspired by the GS algorithm, replacing Fourier transforms by a beam-propagation kernel, with gradient-search acceleration techniques and special care for phase branch cuts. A fast one-dimensional algorithm provides an initial guess for the iterative algorithm. Application of the algorithm to synthetic data identifies the best reconstruction planes to choose. Robustness and sensitivity are evaluated. Results on collimated and distorted laser beams are presented.
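The projection idea behind such methods can be sketched in a few lines. This is not the MIROMA algorithm itself (which adds gradient-search acceleration and branch-cut handling); it is a minimal 1-D Gerchberg-Saxton-style cycle through several measurement planes, using angular-spectrum propagation in place of a plain Fourier transform:

```python
import numpy as np

def propagate(field, dz, dx, wavelength):
    """Angular-spectrum propagation of a 1-D complex field over distance dz."""
    fx = np.fft.fftfreq(field.size, d=dx)
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength ** -2 - fx ** 2))
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))

def multiplane_gs(amplitudes, zs, dx, wavelength, iters=50):
    """Cycle through the measurement planes, keeping the propagated phase
    but replacing the amplitude with the measured one in each plane.
    Returns the recovered complex field in plane zs[0]."""
    field = amplitudes[0].astype(complex)
    for _ in range(iters):
        for k in range(len(zs)):
            nxt = (k + 1) % len(zs)
            field = propagate(field, zs[nxt] - zs[k], dx, wavelength)
            field = amplitudes[nxt] * np.exp(1j * np.angle(field))
    return field
```

With three planes instead of two, each cycle enforces three intensity constraints, which is the redundancy the abstract credits for better convergence and noise immunity.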
What neuroscience can tell about intuitive processes in the context of perceptual discovery.
Volz, Kirsten G; von Cramon, D Yves
2006-12-01
According to the Oxford English Dictionary, intuition is "the ability to understand or know something immediately, without conscious reasoning." Most people would agree that intuitive responses appear as ideas or feelings that subsequently guide our thoughts and behaviors. It is proposed that people continuously, without conscious attention, recognize patterns in the stream of sensations that impinge upon them. What exactly is being recognized is not yet clear, but we assume that people detect potential content based on only a few aspects of the input (i.e., the gist). The result is a vague perception of coherence which is not explicitly describable but instead embodied in a "gut feeling" or an initial guess, which subsequently biases thought and inquiry. To approach the nature of intuitive processes, we used functional magnetic resonance imaging while participants worked on a modified version of the Waterloo Gestalt Closure Task. Starting from our conceptualization that intuition involves an informed judgment in the context of discovery, we expected activation within the median orbito-frontal cortex (OFC), as this area receives input from all sensory modalities and has been shown to be crucially involved in emotionally driven decisions. Results from a direct contrast between intuitive and nonintuitive judgments, as well as from a parametric analysis, revealed the median OFC, the lateral portion of the amygdala, anterior insula, and ventral occipito-temporal regions to be activated. Based on these findings, we suggest our definition of intuition to be promising and a good starting point for future research on intuitive processes.
Haimes, E.; Taylor, K.
2009-01-01
BACKGROUND This article reports on an investigation of the views of IVF couples asked to donate fresh embryos for research and contributes to the debates on: the acceptability of human embryonic stem cell (hESC) research, the moral status of the human embryo and embryo donation for research. METHODS A hypothesis-generating design was followed. All IVF couples in one UK clinic who were asked to donate embryos in 1 year were contacted 6 weeks after their pregnancy result. Forty-four in-depth interviews were conducted. RESULTS Interviewees were preoccupied with IVF treatment and the request to donate was a secondary consideration. They used a complex and dynamic system of embryo classification. Initially, all embryos were important but then their focus shifted to those that had most potential to produce a baby. At that point, 'other' embryos were less important, though they later realised that they did not know what happened to them. Guessing that these embryos went to research, interviewees preferred not to contemplate what that might entail. The embryos that caused interviewees most concern were good quality embryos that might have produced a baby but went to research instead. 'The' embryo, the morally laden, but abstract, entity, did not play a central role in their decision-making. CONCLUSIONS This study, despite missing those who refused to donate embryos, suggests that debates on embryo donation for hESC research should include the views of embryo donors and should consider the social, as well as the moral, status of the human embryo. PMID:19502616
NASA Astrophysics Data System (ADS)
Nandipati, K. R.; Singh, H.; Nagaprasad Reddy, S.; Kumar, K. A.; Mahapatra, S.
2014-12-01
Optimally controlled initiation of intramolecular H-transfer in malonaldehyde is accomplished by designing a sequence of ultrashort (~80 fs) down-chirped pump-dump ultraviolet (UV) laser pulses through an optically bright electronic excited [S2(ππ*)] state as a mediator. The sequence of such laser pulses is theoretically synthesized within the framework of optimal control theory (OCT), employing the well-known pump-dump scheme of Tannor and Rice [D.J. Tannor, S.A. Rice, J. Chem. Phys. 83, 5013 (1985)]. In OCT, the control task is framed as the maximization of a cost functional defined in terms of an objective function along with constraints on the field intensity and the system dynamics. The latter is monitored by solving the time-dependent Schrödinger equation. The initial guess, the laser-driven dynamics, and the optimized pulse structure (i.e., the spectral content and temporal profile), together with the mechanism involved in fulfilling the control task, are examined in detail and discussed. A comparative account of the dynamical outcomes within the Condon approximation for the transition dipole moment versus its more realistic value calculated ab initio is also presented.
Numerical solutions of 3-dimensional Navier-Stokes equations for closed bluff-bodies
NASA Technical Reports Server (NTRS)
Abolhassani, J. S.; Tiwari, S. N.
1985-01-01
The Navier-Stokes equations are solved numerically. These equations are unsteady, compressible, viscous, and three-dimensional, with no terms neglected. The time dependency of the governing equations allows the solution to progress naturally from an arbitrary initial guess to an asymptotic steady state, if one exists. The equations are transformed from physical coordinates to computational coordinates, allowing the solution of the governing equations in a rectangular parallelepiped domain. The equations are solved by the MacCormack time-split technique, which is vectorized and programmed to run on the CDC VPS 32 computer. The codes are written in 32-bit (half-word) FORTRAN, which provides an approximate factor-of-two decrease in computational time and doubles the available memory compared to the 64-bit word size.
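The paper's solver is a 3-D time-split MacCormack code for the full Navier-Stokes system; the predictor-corrector idea itself is simple enough to sketch on a 1-D stand-in, the inviscid Burgers equation with periodic boundaries (this is an illustration of the scheme, not the paper's code):

```python
import numpy as np

def maccormack_burgers(u0, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + (u^2/2)_x = 0, periodic BCs.
    Predictor uses forward differences, corrector backward differences."""
    u = u0.copy()
    for _ in range(steps):
        f = 0.5 * u ** 2
        u_pred = u - dt / dx * (np.roll(f, -1) - f)        # predictor step
        f_pred = 0.5 * u_pred ** 2
        u = 0.5 * (u + u_pred - dt / dx * (f_pred - np.roll(f_pred, 1)))  # corrector
    return u
```

Time-marching like this from an arbitrary initial guess toward an asymptotic state is exactly the strategy the abstract describes; the flux-difference form also conserves the integral of u exactly on a periodic grid.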
Global Search Capabilities of Indirect Methods for Impulsive Transfers
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong
2015-09-01
An optimization method which combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of the adjoints, unlike previous adjoint-estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach guarantees obtaining multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.
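The warm-started continuation idea can be sketched on a scalar stand-in. The example below continues Kepler's equation in eccentricity, from the trivial e = 0 problem to a hard e = 0.9 one, reusing each converged solution as the next initial guess; this is only an illustration of homotopy, not the paper's adjoint continuation:

```python
from math import sin

def newton(f, x0, tol=1e-12, max_iter=50):
    """Scalar Newton iteration with a central-difference derivative."""
    x, h = x0, 1e-7
    for _ in range(max_iter):
        step = f(x) * 2 * h / (f(x + h) - f(x - h))
        x -= step
        if abs(step) < tol:
            break
    return x

def homotopy_solve(f_easy, f_target, x0, steps=20):
    """March lam from 0 to 1 on H(x) = (1-lam)*f_easy(x) + lam*f_target(x),
    warm-starting each Newton solve from the previous solution."""
    x = x0
    for i in range(steps + 1):
        lam = i / steps
        x = newton(lambda t: (1 - lam) * f_easy(t) + lam * f_target(t), x)
    return x
```

The point of the continuation is that every intermediate solve starts close to its solution, so the locally convergent Newton iteration never needs a good global guess, mirroring how the paper avoids guessing adjoints.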
NASA Technical Reports Server (NTRS)
Jaggers, R. F.
1977-01-01
A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous-burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, target switch, etc. The self-starting, predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first-guess values, and converges rapidly if physically possible. A form of this algorithm has been chosen for onboard guidance, as well as real-time and preflight ground targeting and trajectory shaping for the NASA Space Shuttle Program.
Parallel Monotonic Basin Hopping for Low Thrust Trajectory Optimization
NASA Technical Reports Server (NTRS)
McCarty, Steven L.; McGuire, Melissa L.
2018-01-01
Monotonic Basin Hopping has been shown to be an effective method of solving low thrust trajectory optimization problems. This paper outlines an extension to the common serial implementation by parallelizing it over any number of available compute cores. The Parallel Monotonic Basin Hopping algorithm described herein is shown to be an effective way to more quickly locate feasible solutions, and improve locally optimal solutions in an automated way without requiring a feasible initial guess. The increased speed achieved through parallelization enables the algorithm to be applied to more complex problems that would otherwise be impractical for a serial implementation. Low thrust cislunar transfers and a hybrid Mars example case demonstrate the effectiveness of the algorithm. Finally, a preliminary scaling study quantifies the expected decrease in solve time compared to a serial implementation.
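The serial MBH loop is short enough to sketch: perturb the incumbent, re-optimize locally, and accept only improvements. In the toy below a crude pattern search stands in for the trajectory NLP solver, and the objective is a standard multimodal test function; note that the perturbed local solves are independent given the incumbent, which is exactly what the paper's parallelization exploits:

```python
import random
from math import cos, pi

def rastrigin(x):
    """Multimodal test objective; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * cos(2 * pi * xi) for xi in x)

def local_min(f, x, step=0.1):
    """Crude pattern search standing in for a gradient-based NLP solver."""
    x = list(x)
    while step > 1e-7:
        moved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x, moved = trial, True
        if not moved:
            step *= 0.5
    return x

def basin_hop(f, x0, hops=200, hop_size=0.8, seed=1):
    """Monotonic Basin Hopping: random hop from the incumbent, re-optimize,
    keep the result only if it improves (monotonic acceptance)."""
    rng = random.Random(seed)
    best = local_min(f, x0)
    for _ in range(hops):
        trial = local_min(f, [xi + rng.gauss(0.0, hop_size) for xi in best])
        if f(trial) < f(best):
            best = trial
    return best
```

A parallel variant would farm each batch of hops out to worker processes and let the incumbent absorb the best result per batch.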
NASA Astrophysics Data System (ADS)
Davis, A. B.; von Allmen, P. A.; Marshak, A.; Bal, G.
2010-12-01
The geometrical assumption in all operational cloud remote sensing algorithms is that clouds are plane-parallel slabs, which applies relatively well to the most uniform stratus layers. Its benefit is to justify using classic 1D radiative transfer (RT) theory, where angular details (solar, viewing, azimuthal) are fully accounted for and precise phase functions can be used, to generate the look-up tables used in the retrievals. Unsurprisingly, these algorithms catastrophically fail when applied to cumulus-type clouds, which are highly 3D. This is unfortunate for the cloud-process modeling community that may thrive on in situ airborne data, but would very much like to use satellite data for more than illustrations in their presentations and publications. So, how can we obtain quantitative information from space-based observations of finite aspect ratio clouds? Cloud base/top heights, vertically projected area, mean liquid water content (LWC), and volume-averaged droplet size would be a good start. Motivated by this science need, we present a new approach suitable for sparse cumulus fields where we turn the tables on the standard procedure in cloud remote sensing. We make no a priori assumption about cloud shape, save an approximately flat base, but use brutal approximations about the RT that is necessarily 3D. Indeed, the first order of business is to roughly determine the cloud's outer shape in one of two ways, which we will frame as competing initial guesses for the next phase of shape refinement and volume-averaged microphysical parameter estimation. Both steps use multi-pixel/multi-angle techniques amenable to MISR data, the latter adding a bi-spectral dimension using collocated MODIS data. 
One approach to rough cloud shape determination is to fit the multi-pixel/multi-angle data with a geometric primitive such as a scalene hemi-ellipsoid with 7 parameters (translation in 3D space, 3 semi-axes, 1 azimuthal orientation); for the radiometry, a simple radiosity-type model is used where the cloud surface "emits" either reflected (sunny-side) or transmitted (shady-side) light at different levels. As it turns out, the reflected/transmitted light ratio yields an approximate cloud optical thickness. Another approach is to invoke tomography techniques to define the volume occupied by the cloud using, as it were, cloud masks for each direction of observation. In the shape and opacity refinement phase, initial guesses along with solar and viewing geometry information are used to predict radiance in each pixel using a fast diffusion model for the 3D RT in MISR's non-absorbing red channel (275 m resolution). Refinement is constrained and stopped when optimal resolution is reached. Finally, multi-pixel/mono-angle MODIS data for the same cloud (at comparable 250 m resolution) reveals the desired droplet size information, hence the volume-averaged LWC. This is an ambitious remote sensing science project drawing on cross-disciplinary expertise gained in medical imaging using both X-ray and near-IR sources and detectors. It is high risk but with potentially high returns not only for the cloud modeling community but also aerosol and surface characterization in the presence of broken 3D clouds.
Using composite images to assess accuracy in personality attribution to faces.
Little, Anthony C; Perrett, David I
2007-02-01
Several studies have demonstrated some accuracy in personality attribution using only visual appearance. Using composite images of those scoring high and low on a particular trait, the current study shows that judges perform better than chance in guessing others' personality, particularly for the traits conscientiousness and extraversion. This study also shows that attractiveness, masculinity and age may all provide cues to assess personality accurately, and that accuracy is affected by the sex both of those judging and of those being judged. Individuals do perform better than chance at guessing another's personality from only facial information, providing some support for the popular belief that it is possible to accurately assess personality from faces.
Aorta modeling with the element-based zero-stress state and isogeometric discretization
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi
2017-02-01
Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments. That is because all we need for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a similar level of accuracy as with the linear basis functions, but using larger-size and much fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. 
To show how the new EBZSS estimation method performs, we first present 2D test computations with straight-tube configurations. Then we show how the method can be used in a 3D computation where the target geometry comes from a medical image of a human aorta.
Topology synthesis of planar ground structures for energy harvesting applications
NASA Astrophysics Data System (ADS)
Danzi, Francesco; Gibert, James; Cestino, Enrico; Frulla, Giacomo
2017-04-01
In this manuscript, we investigate the use of topology optimization to design planar resonators with modal frequencies that occur at 1:n ratios, for kinetic energy scavenging of ambient vibrations that exhibit at least two frequency components. We are particularly interested in excitations with a fundamental component containing a large amount of energy and a secondary component with smaller energy content. This phenomenon is often seen in rotary machines: their frequency spectrum exhibits peaks at multiple harmonics, with the energy primarily contained at the rotation frequency of the device. Several theoretical resonators are known to exhibit modal frequencies at integer ratios such as 1:2 or 1:3; however, designing manufacturable resonators with other geometries is still a daunting task. With this goal in mind, we utilize topology optimization to determine the layout of the resonator. We formulate the problem in non-dimensional form, eliminating the constraint on the allowable frequency; the frequency can be recovered a posteriori by means of linear scaling. Contrary to previous research, which uses a clamped beam as the initial guess, we synthesize the final shape starting from a ground structure (or structural universe), removing unnecessary beams from the initial guess by means of a graph-based filtering scheme. The algorithm determines the simplest structure that gives the desired frequency ratio. Within the optimization, the structural design is accomplished by a linear FE analysis. The optimization reveals several trends, the most notable being that having members connected orthogonally, as in the L-shaped resonator, is not the preferred topology for these devices. To fully explore the effect of the orientation of connected members on the modal characteristics of the device, we derive a reduced-order model that allows a bifurcation analysis of the effect of member orientation on modal frequency.
Furthermore, the reduced-order approximation is used to solve the coupled electro-mechanical equations of a vibration-based energy harvester (VEH). Finally, we present the performance of the VEH under various base excitations. These results show that an infinite number of topologies can have integer-ratio modal frequencies and, in some cases, harvest more power than a nominal L-shaped harvester operating in the linear regime.
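The design target of a 1:n modal ratio can be illustrated on a deliberately tiny stand-in: tune a single design variable (a mass ratio) of a fixed-free two-mass spring chain until its two modal frequencies hit a target ratio. This is a one-variable caricature of the paper's ground-structure search, with all parameters hypothetical; note that for this particular chain the achievable ratio is bounded below (about 2.62 at equal masses), so the example targets 1:3:

```python
import math

def freq_ratio(mu, k=1.0):
    """omega_2/omega_1 of a fixed-free two-mass spring chain with
    K = [[2k, -k], [-k, k]] and M = diag(1, mu): eigenvalues of M^-1 K."""
    tr = 2.0 * k + k / mu          # trace of M^-1 K
    det = k * k / mu               # determinant of M^-1 K
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam_lo, lam_hi = (tr - disc) / 2.0, (tr + disc) / 2.0
    return math.sqrt(lam_hi / lam_lo)

def tune_mass_ratio(target=3.0, lo=1.0, hi=10.0):
    """Bisection on mu (the ratio is increasing on [1, 10]) until
    omega_2/omega_1 reaches the target."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if freq_ratio(mid) > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The full topology optimization does the same thing over an entire ground structure of beams rather than one lumped parameter, which is why a graph-based filter is needed to prune the design space.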
Subjective measures of unconscious knowledge.
Dienes, Zoltán
2008-01-01
The chapter gives an overview of the use of subjective measures of unconscious knowledge. Unconscious knowledge is knowledge we have, and could very well be using, but we are not aware of. Hence appropriate methods for indicating unconscious knowledge must show that the person (a) has knowledge but (b) does not know that she has it. One way of determining awareness of knowing is by taking confidence ratings after making judgments. If the judgments are above baseline but the person believes they are guessing (guessing criterion) or confidence does not relate to accuracy (zero-correlation criterion) there is evidence of unconscious knowledge. The way these methods can deal with the problem of bias is discussed, as is the use of different types of confidence scales. The guessing and zero-correlation criteria show whether or not the person is aware of knowing the content of the judgment, but not whether the person is aware of what any knowledge was that enabled the judgment. Thus, a distinction is made between judgment and structural knowledge, and it is shown how the conscious status of the latter can also be assessed. Finally, the use of control over the use of knowledge as a subjective measure of judgment knowledge is illustrated. Experiments using artificial grammar learning and a serial reaction time task explore these issues.
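Both criteria reduce to simple statistics over per-trial (accuracy, confidence) pairs. The sketch below is schematic: it ignores the response-bias corrections and confidence-scale design issues the chapter discusses, and encodes "pure guess" as confidence 0:

```python
from statistics import mean

def guessing_criterion(correct, confidence, chance=0.5):
    """Accuracy above chance on trials rated as pure guesses (confidence 0).
    A positive value indicates knowledge the person denies having."""
    guesses = [c for c, conf in zip(correct, confidence) if conf == 0]
    return mean(guesses) - chance if guesses else None

def zero_correlation_criterion(correct, confidence):
    """Pearson correlation between confidence and per-trial accuracy.
    Near-zero correlation despite above-chance accuracy indicates
    unconscious knowledge."""
    n = len(correct)
    ma, mc = mean(correct), mean(confidence)
    cov = sum((a - ma) * (c - mc) for a, c in zip(correct, confidence)) / n
    va = sum((a - ma) ** 2 for a in correct) / n
    vc = sum((c - mc) ** 2 for c in confidence) / n
    return cov / (va * vc) ** 0.5 if va * vc else 0.0
```

In an artificial grammar learning experiment, `correct` would be the per-trial grammaticality judgments and `confidence` the ratings collected after each judgment.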
2017-01-01
In three experiments, we asked whether diverse scripts contain interpretable information about the speech sounds they represent. When presented with a pair of unfamiliar letters, adult readers correctly guess which is /i/ (the ‘ee’ sound in ‘feet’), and which is /u/ (the ‘oo’ sound in ‘shoe’) at rates higher than expected by chance, as shown in a large sample of Singaporean university students (Experiment 1) and replicated in a larger sample of international Internet users (Experiment 2). To uncover what properties of the letters contribute to different scripts' ‘guessability,’ we analysed the visual spatial frequencies in each letter (Experiment 3). We predicted that the lower spectral frequencies in the formants of the vowel /u/ would pattern with lower spatial frequencies in the corresponding letters. Instead, we found that across all spatial frequencies, the letter with more black/white cycles (i.e. more ink) was more likely to be guessed as /u/, and the larger the difference between the glyphs in a pair, the higher the script's guessability. We propose that diverse groups of humans across historical time and geographical space tend to employ similar iconic strategies for representing speech in visual form, and provide norms for letter pairs from 56 diverse scripts. PMID:28989784
The impact of age stereotypes on source monitoring in younger and older adults.
Kuhlmann, Beatrice G; Bayen, Ute J; Meuser, Katharina; Kornadt, Anna E
2016-12-01
In 2 experiments, we examined reliance on age stereotypes when reconstructing the sources of statements. Two sources presented statements (half typical for a young adult, half for an old adult). Afterward, the sources' ages (23 and 70 years) were revealed and participants completed a source-monitoring task requiring attribution of statements to the sources. Multinomial model-based analyses revealed no age-typicality effect on source memory; however, age-typicality biased source-guessing: When not remembering the source, participants predominantly guessed the source for whose age the statement was typical. Thereby, people retrospectively described the sources as having made more statements that fit with stereotypes about their age group than they had truly made. In Experiment 1, older (60-84 years) participants' guessing bias was stronger than younger (17-26 years) participants', but they also had poorer source memory. Furthermore, older adults with better source memory were less biased than those with poorer source memory. Similarly, younger adults' age-stereotype reliance was larger when source memory was impaired in Experiment 2. Thus, age stereotypes bias source attributions, and individuals with poor source memory are particularly prone to this bias, which may contribute to the maintenance of age stereotypes over time. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Bethe, Oppenheimer, Teller and the Fermi Award: Norris Bradbury Speaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meade, Roger Allen
In 1956 the Enrico Fermi Presidential Award was established to recognize scientists, engineers, and science policymakers who gave unstintingly over their careers to advance energy science and technology. The first recipient was John von Neumann. Among those scientists thought eligible for the award were Hans Bethe, J. Robert Oppenheimer, and Edward Teller. In 1959 Norris Bradbury was asked to comment on the relative merits of each of these three men, whom he knew well from their affiliation with Los Alamos. Below is a reproduction of the letter Bradbury sent to Dr. Warren C. Johnson of the AEC's General Advisory Committee (GAC) containing his evaluation of each man. The letter might surprise those not accustomed to Bradbury's modus operandi of providing very detailed and forthright answers to the AEC. The letter itself was found in a cache of old microfilm. Whether because of the age of the microfilm or the quality of the filming process, portions of the letter are not legible. Where empty brackets appear, the word or words could not be read or deduced. Words appearing in brackets are guesses that appear, from the image, to be what was written. These guesses, of course, are just that: guesses.
Lee, Jeannette Y; Moore, Page; Kusek, John; Barry, Michael
2014-01-01
This report assesses participant perception of treatment assignment in a randomized, double-blind, placebo-controlled trial of saw palmetto for the treatment of benign prostatic hyperplasia (BPH). Participants randomized to receive saw palmetto were instructed to take one 320 mg gelcap daily for the first 24 weeks, two 320 mg gelcaps daily for the second 24 weeks, and three 320 mg gelcaps daily for the third 24 weeks. Study participants assigned to placebo were instructed to take the same number of matching placebo gelcaps in each time period. At 24, 48, and 72 weeks postrandomization, the American Urological Association Symptom Index (AUA-SI) was administered and participants were asked to guess their treatment assignment. The study was conducted at 11 clinical centers in North America. Study participants were men, 45 years and older, with moderate to severe BPH symptoms, randomized to saw palmetto (N=151) or placebo (N=155). Treatment arms were compared with respect to the distribution of participant guesses of treatment assignment. For participants assigned to saw palmetto, 22.5%, 24.7%, and 29.8% correctly thought they were taking saw palmetto, and 37.3%, 40.0%, and 44.4% incorrectly thought they were on placebo at 24, 48, and 72 weeks, respectively. For placebo participants, 21.8%, 27.4%, and 25.2% incorrectly thought they were on saw palmetto, and 41.6%, 39.9%, and 42.6% correctly thought they were on placebo at 24, 48, and 72 weeks, respectively. The treatment arms did not vary with respect to the distributions of participants who guessed they were on saw palmetto (p=0.823) or placebo (p=0.893). Participants who experienced an improvement in AUA-SI were 2.16 times more likely to think they were on saw palmetto. Blinding of treatment assignment was successful in this study. Improvement in BPH-related symptoms was associated with the perception that participants were taking saw palmetto.
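Comparing guess distributions between arms is a contingency-table problem. A minimal Pearson chi-square statistic is sketched below; the counts in the test are hypothetical (the abstract reports percentages, and the published p-values come from the authors' own analysis):

```python
def chi2_stat(table):
    """Pearson chi-square statistic for an r x c contingency table of
    counts. Compare against a chi-square quantile with (r-1)(c-1) dof."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total   # expected count
            stat += (obs - exp) ** 2 / exp
    return stat
```

Here the rows would be the treatment arms and the columns the guessed assignments (saw palmetto / placebo / don't know); a small statistic, as in this study, is what "blinding was successful" means operationally.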
Noorbaloochi, Sharareh; Sharon, Dahlia; McClelland, James L
2015-08-05
We used electroencephalography (EEG) and behavior to examine the role of payoff bias in a difficult two-alternative perceptual decision under deadline pressure in humans. The findings suggest that a fast guess process, biased by payoff and triggered by stimulus onset, occurred on a subset of trials and raced with an evidence accumulation process informed by stimulus information. On each trial, the participant judged whether a rectangle was shifted to the right or left and responded by squeezing a right- or left-hand dynamometer. The payoff for each alternative (which could be biased or unbiased) was signaled 1.5 s before stimulus onset. The choice response was assigned to the first hand reaching a squeeze force criterion and reaction time was defined as time to criterion. Consistent with a fast guess account, fast responses were strongly biased toward the higher-paying alternative and the EEG exhibited an abrupt rise in the lateralized readiness potential (LRP) on a subset of biased payoff trials contralateral to the higher-paying alternative ∼ 150 ms after stimulus onset and 50 ms before stimulus information influenced the LRP. This rise was associated with poststimulus dynamometer activity favoring the higher-paying alternative and predicted choice and response time. Quantitative modeling supported the fast guess account over accounts of payoff effects supported in other studies. Our findings, taken with previous studies, support the idea that payoff and prior probability manipulations produce flexible adaptations to task structure and do not reflect a fixed policy for the integration of payoff and stimulus information. Humans and other animals often face situations in which they must make choices based on uncertain sensory information together with information about expected outcomes (gains or losses) about each choice. 
We investigated how differences in payoffs between available alternatives affect neural activity, overt choice, and the timing of choice responses. In our experiment, in which participants were under strong time pressure, neural and behavioral findings together with model fitting suggested that our human participants often made a fast guess toward the higher reward rather than integrating stimulus and payoff information. Our findings, taken with findings from other studies, support the idea that payoff and prior probability manipulations produce flexible adaptations to task structure and do not reflect a fixed policy. Copyright © 2015 the authors.
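The proposed mixture can be caricatured in a few lines: on each trial, with some probability a payoff-biased fast guess fires at a short, fixed latency; otherwise a noisy accumulator driven by the stimulus races to a bound. All parameter values here are hypothetical illustrations, not fitted values from the study:

```python
import random

def simulate_trial(rng, p_guess, p_guess_right, drift, threshold=30.0):
    """One trial of a fast-guess / evidence-accumulation mixture.
    Returns (choice, reaction_time_in_seconds)."""
    if rng.random() < p_guess:                     # payoff-triggered fast guess
        choice = "right" if rng.random() < p_guess_right else "left"
        return choice, 0.15
    x, t = 0.0, 0.0                                # stimulus-driven random walk
    while abs(x) < threshold:
        x += drift + rng.gauss(0.0, 1.0)
        t += 0.001
    return ("right" if x > 0 else "left"), 0.20 + t
```

The qualitative signature the paper reports falls out of this structure: fast responses are dominated by the guess process and hence track the payoff, while slower responses track the stimulus.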
Dubský, Pavel; Ördögová, Magda; Malý, Michal; Riesová, Martina
2016-05-06
We introduce CEval software (downloadable for free at echmet.natur.cuni.cz) that was developed for quicker and easier electrophoregram evaluation and further data processing in (affinity) capillary electrophoresis. This software allows for automatic peak detection and evaluation of common peak parameters, such as migration time, area, and width. Additionally, the software includes a nonlinear regression engine that performs peak fitting with the Haarhoff-van der Linde (HVL) function, including an automated initial guess of the HVL function parameters. HVL is a fundamental peak-shape function in electrophoresis, based on which the correct effective mobility of the analyte represented by the peak is evaluated. Effective mobilities of an analyte at various concentrations of a selector can be further stored and plotted in an affinity CE mode. Consequently, the mobility of the free analyte, μA, the mobility of the analyte-selector complex, μAS, and the apparent complexation constant, K('), are first guessed automatically from the linearized data plots and subsequently estimated by means of nonlinear regression. An option that allows two complexation dependencies to be fitted at once is especially convenient for enantioseparations. Statistical processing of these data is also included, which allowed us to: i) express the 95% confidence intervals for the μA, μAS and K(') least-squares estimates, and ii) perform hypothesis testing on the estimated parameters for the first time. We demonstrate the benefits of the CEval software by inspecting complexation of tryptophan methyl ester with two cyclodextrins, neutral heptakis(2,6-di-O-methyl)-β-CD and charged heptakis(6-O-sulfo)-β-CD. Copyright © 2016 Elsevier B.V. All rights reserved.
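The automated initial-guess step described above can be sketched with moment-based starting values feeding a nonlinear peak fit. A Gaussian stands in for the actual Haarhoff-van der Linde function (whose full form is not given here), and all names and guess heuristics are illustrative, not CEval's implementation:

```python
# Hedged sketch: moment-based initial guesses feeding a nonlinear peak fit.
# The Gaussian below is a stand-in for the Haarhoff-van der Linde function.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, area, mu, sigma):
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def initial_guess(t, y):
    # Area from a rectangle-rule integral, centre from the intensity-weighted
    # mean, width from the intensity-weighted standard deviation.
    area = np.sum(y) * (t[1] - t[0])
    mu = np.sum(t * y) / np.sum(y)
    sigma = np.sqrt(np.sum(y * (t - mu) ** 2) / np.sum(y))
    return area, mu, sigma

t = np.linspace(0.0, 10.0, 500)   # migration-time axis (arbitrary units)
y = gaussian(t, 2.0, 4.2, 0.3)    # synthetic, noise-free peak
popt, _ = curve_fit(gaussian, t, y, p0=initial_guess(t, y))
```

With noise-free data the fit recovers the generating parameters; on real electrophoregrams the moment-based guess mainly keeps the regression from wandering into a local minimum.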
Global terrestrial carbon and nitrogen cycling insensitive to estimates of biological N fixation
NASA Astrophysics Data System (ADS)
Steinkamp, J.; Weber, B.; Werner, C.; Hickler, T.
2015-12-01
Dinitrogen (N2) is the most abundant molecule in the atmosphere and, incorporated into other molecules, an essential nutrient for life on Earth. However, only a few natural processes can initiate a reaction of N2: fire, lightning and biological nitrogen fixation (BNF), with BNF being the largest source. In the course of the last century, humans have outperformed the natural processes of nitrogen fixation through the production of fertilizer. Industrial and other human emissions of reactive nitrogen, as well as fire and lightning, lead to a deposition of 63 Tg (N) per year. This is twice the amount of BNF estimated by the default setup of the dynamic global vegetation model LPJ-GUESS (30 Tg), which is a conservative approach. We use different methods and parameterizations for BNF in LPJ-GUESS: 1) varying the total annual amount; 2) annually evenly distributed versus daily calculated fixation rates; 3) an improved dataset of BNF by cryptogamic covers (free-living N-fixers). With this setup, BNF ranges from 30 Tg to 60 Tg. We assess the impact of BNF on carbon storage and gross primary production (GPP) of the natural vegetation. These results are compared to and evaluated against available independent datasets. We do not see major differences in productivity and carbon stocks with these BNF estimates, suggesting that natural vegetation is insensitive to BNF on a global scale and that the vegetation can compensate for the different nitrogen availabilities. Current deposition of nitrogen compounds and internal cycling through mineralization and uptake are sufficient for natural vegetation productivity. However, due to the coarse model grid and spatial heterogeneity in the real world, this conclusion does not exclude the existence of habitats constrained by BNF.
Global Emissions of Terpenoid VOCs from Terrestrial Vegetation in the Last Millennium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acosta Navarro, J. C.; Smolander, S.; Struthers, H.
2014-06-16
We investigated the millennial variability of global BVOC emissions by using two independent numerical models: the Model of Emissions of Gases and Aerosols from Nature (MEGAN), for isoprene, monoterpenes and sesquiterpenes, and the Lund-Potsdam-Jena General Ecosystem Simulator (LPJ-GUESS), for isoprene and monoterpenes. We found the millennial trends of global isoprene emissions to be mostly affected by land cover and atmospheric carbon dioxide changes, whereas monoterpene and sesquiterpene emissions were dominated by temperature change. Isoprene emissions declined substantially in regions with large and rapid land cover change. In addition, isoprene emission sensitivity to drought proved to have significant short-term global effects. By the end of the past millennium, MEGAN isoprene emissions were 634 TgC yr-1 (13% and 19% less than during 1750-1850 and 1000-1200, respectively) and LPJ-GUESS emissions were 323 TgC yr-1 (15% and 20% less than during 1750-1850 and 1000-1200, respectively). Monoterpene emissions were 89 TgC yr-1 (10% and 6% higher than during 1750-1850 and 1000-1200, respectively) in MEGAN, and 24 TgC yr-1 (2% higher and 5% less than during 1750-1850 and 1000-1200, respectively) in LPJ-GUESS. MEGAN sesquiterpene emissions were 36 TgC yr-1 (10% and 4% higher than during 1750-1850 and 1000-1200, respectively). Although both models capture similar emission trends, the magnitudes of the emissions are different. This highlights the importance of building better constraints on VOC emissions from terrestrial vegetation.
Numerical solutions of Navier-Stokes equations for a Butler wing
NASA Technical Reports Server (NTRS)
Abolhassani, J. S.; Tiwari, S. N.
1985-01-01
The flow field is simulated on the surface of a given delta wing (Butler wing) at zero incidence in a uniform stream. The simulation is done by integrating a set of flow field equations. This set of equations governs the unsteady, viscous, compressible, heat-conducting flow of an ideal gas. The equations are written in curvilinear coordinates so that the wing surface is represented accurately. These equations are solved by the finite difference method, and results obtained for high-speed freestream conditions are compared with theoretical and experimental results. In this study, the Navier-Stokes equations are solved numerically. These equations are unsteady, compressible, viscous, and three-dimensional without neglecting any terms. The time dependency of the governing equations allows the solution to progress naturally from an arbitrary initial guess to an asymptotic steady state, if one exists. The equations are transformed from physical coordinates to computational coordinates, allowing the solution of the governing equations in a rectangular parallelepiped domain. The equations are solved by the MacCormack time-split technique, which is vectorized and programmed to run on the CDC VPS 32 computer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando
2015-07-28
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
Optimal coherent control of dissipative N -level systems
NASA Astrophysics Data System (ADS)
Jirari, H.; Pötz, W.
2005-07-01
General optimal coherent control of dissipative N -level systems in the Markovian time regime is formulated within Pontryagin's principle and the Lindblad equation. In the present paper, we study feasibility and limitations of steering of dissipative two-, three-, and four-level systems from a given initial pure or mixed state into a desired final state under the influence of an external electric field. The time evolution of the system is computed within the Lindblad equation and a conjugate gradient method is used to identify optimal control fields. The influence of both field-independent population and polarization decay on achieving the objective is investigated in systematic fashion. It is shown that, for realistic dephasing times, optimum control fields can be identified which drive the system into the target state with very high success rate and in economical fashion, even when starting from a poor initial guess. Furthermore, the optimal fields obtained give insight into the system dynamics. However, if decay rates of the system cannot be subjected to electromagnetic control, the dissipative system cannot be maintained in a specific pure or mixed state, in general.
Capture of near-Earth objects with low-thrust propulsion and invariant manifolds
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, Fanghua
2016-01-01
In this paper, a mission incorporating low-thrust propulsion and invariant manifolds to capture near-Earth objects (NEOs) is investigated. The initial condition has the spacecraft rendezvousing with the NEO. The mission terminates once it is inserted into a libration point orbit (LPO). The spacecraft takes advantage of stable invariant manifolds for low-energy ballistic capture. Low-thrust propulsion is employed to retrieve the joint spacecraft-asteroid system. Global optimization methods are proposed for the preliminary design. Local direct and indirect methods are applied to optimize the two-impulse transfers. Indirect methods are implemented to optimize the low-thrust trajectory and estimate the largest retrievable mass. To overcome the difficulty that arises from bang-bang control, a homotopic approach is applied to find an approximate solution. By detecting the switching moments of the bang-bang control, the efficiency and accuracy of numerical integration are guaranteed. With the homotopic solution as the initial guess, the shooting function is easy to solve. The relationship between the maximum thrust and the retrieval mass is investigated. We find, both numerically and theoretically, that a larger thrust is preferred.
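The homotopic treatment of bang-bang control can be illustrated by the smoothed throttle law such an approach induces. The perturbed cost integrand u − εu(1−u) is one standard choice from the low-thrust literature, assumed here for illustration and not necessarily the authors' exact formulation:

```python
import numpy as np

def smoothed_throttle(S, eps):
    # Minimizing the u-dependent Hamiltonian term S*u - eps*u*(1 - u) over
    # u in [0, 1] gives u = (eps - S) / (2*eps), clipped to [0, 1].
    # For eps = 1 the throttle ramps smoothly with the switching function S;
    # as eps -> 0 it tends to the bang-bang law u = 1 for S < 0, u = 0 for S > 0.
    return np.clip((eps - S) / (2.0 * eps), 0.0, 1.0)

S = np.linspace(-1.0, 1.0, 201)
ramp = smoothed_throttle(S, 1.0)    # smooth profile, easy for the integrator
step = smoothed_throttle(S, 1e-4)   # nearly bang-bang, recovered in the limit
```

Solving the smoothed problem for decreasing ε and feeding each solution to the next shooting attempt is what makes the final bang-bang shooting function tractable.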
All-electron density functional calculation on insulin with quasi-canonical localized orbitals.
Inaba, Toru; Tahara, Saisei; Nisikawa, Nobutaka; Kashiwagi, Hiroshi; Sato, Fumitoshi
2005-07-30
An all-electron density functional (DF) calculation on insulin was performed by the Gaussian-based DF program, ProteinDF. Quasi-canonical localized orbitals (QCLOs) were used to improve the initial guess for the self-consistent field (SCF) calculation. All calculations were carried out by parallel computing on eight processors of an Itanium2 cluster (SGI Altix3700) with a theoretical peak performance of 41.6 GFlops. It took 35 h for the whole calculation. Insulin is a protein hormone consisting of two peptide chains linked by three disulfide bonds. The numbers of residues, atoms, electrons, orbitals, and auxiliary functions are 51, 790, 3078, 4439, and 8060, respectively. An all-electron DF calculation on insulin was successfully carried out, starting from connected QCLOs. Despite insulin being a large molecule with complicated topology, the differences in the total energy and the Mulliken atomic charge between initial and converged wavefunctions were very small. The calculation proceeded smoothly without any trial and error, suggesting that this is a promising method to obtain SCF convergence on large molecules such as proteins.
Land Surface Albedo from MERIS Reflectances Using MODIS Directional Factors
NASA Technical Reports Server (NTRS)
Schaaf, Crystal L. B.; Gao, Feng; Strahler, Alan H.
2004-01-01
MERIS Level 2 surface reflectance products are now available to the scientific community. This paper demonstrates the production of MERIS-derived surface albedo and Nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectances by coupling the MERIS data with MODIS BRDF products. Initial efforts rely on the specification of surface anisotropy as provided by the global MODIS BRDF product for a first guess of the shape of the BRDF and then make use of all the coincidentally available, partially atmospherically corrected, cloud-cleared MERIS observations to generate MERIS-derived BRDF and surface albedo quantities for each location. Comparisons between MODIS (aerosol-corrected) and MERIS (not-yet aerosol-corrected) surface values from April and May 2003 are also presented for case studies in Spain and California as well as preliminary comparisons with field data from the Devil's Rock Surfrad/BSRN site.
Deriving Albedo from Coupled MERIS and MODIS Surface Products
NASA Technical Reports Server (NTRS)
Gao, Feng; Schaaf, Crystal; Jin, Yu-Fang; Lucht, Wolfgang; Strahler, Alan
2004-01-01
MERIS Level 2 surface reflectance products are now available to the scientific community. This paper demonstrates the production of MERIS-derived surface albedo and Nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectances by coupling the MERIS data with MODIS BRDF products. Initial efforts rely on the specification of surface anisotropy as provided by the global MODIS BRDF product for a first guess of the shape of the BRDF and then make use of all the coincidentally available, partially atmospherically corrected, cloud-cleared MERIS observations to generate MERIS-derived BRDF and surface albedo quantities for each location. Comparisons between MODIS (aerosol-corrected) and MERIS (not-yet aerosol-corrected) surface values from April and May 2003 are also presented for case studies in Spain and California as well as preliminary comparisons with field data from the Devil's Rock Surfrad/BSRN site.
Solving free-plasma-boundary problems with the SIESTA MHD code
NASA Astrophysics Data System (ADS)
Sanchez, R.; Peraza-Rodriguez, H.; Reynolds-Barredo, J. M.; Tribaldos, V.; Geiger, J.; Hirshman, S. P.; Cianciosa, M.
2017-10-01
SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for 3D magnetic configurations. It is an iterative code that uses the solution obtained by the VMEC code to provide a background coordinate system and an initial guess of the solution. The final solution that SIESTA finds can exhibit magnetic islands and stochastic regions. In its original implementation, SIESTA addressed only fixed-boundary problems. This fixed boundary condition somewhat restricts its possible applications. In this contribution we describe a recent extension of SIESTA that enables it to address free-plasma-boundary situations, opening up the possibility of investigating problems with SIESTA in which the plasma boundary is perturbed either externally or internally. As an illustration, the extended version of SIESTA is applied to a configuration of the W7-X stellarator.
Calibrating the orientation between a microlens array and a sensor based on projective geometry
NASA Astrophysics Data System (ADS)
Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan
2016-07-01
We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. This calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. The simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate that the calibration process can be performed with a commercial prime lens and that the proposed method can be used to quantitatively evaluate whether an MLA and a sensor are assembled properly for plenoptic systems.
Navigation systems. [for interplanetary flight
NASA Technical Reports Server (NTRS)
Jordan, J. F.
1985-01-01
The elements of the measurement and communications network comprising the global deep space navigation system (DSN) for NASA missions are described. Among the measurement systems discussed are: VLBI, two-way Doppler and range measurements, and optical measurements carried out on board the spacecraft. Processing of navigation measurements is carried out using two modules: an N-body numerical integration of the trajectory (and state transition partial derivatives) based on pre-guessed initial conditions, and partial derivatives of simulated observables corresponding to each actual observation. Calculation of velocity correction parameters is performed by precise modelling of all physical phenomena influencing the observational measurements, including planetary motions, tracking station locations, gravity field structure, and transmission media effects. Some of the contributions to earth-relative orbit estimate errors for the Doppler/range system on board Voyager are discussed in detail. A line drawing of the DSN navigation system is provided.
Reflections on the evidence for a vulnerability locus for Schizophrenia on chromosome 6p24-22
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kendler, K.S.; Straub, R.E.; MacLean, C.J.
A recent series of studies has attempted to replicate evidence for a vulnerability locus for schizophrenia on chromosome 6p initially detected in the Irish Study of High-Density Schizophrenia Families (ISHDSF). Here, we want to comment briefly on these findings and respond to some of the issues raised in the preceding article by Baron. We disclaim, however, any pretensions to a definitive interpretation of the available evidence. Our level of ignorance in the interpretation of linkage evidence for complex psychiatric syndromes is too profound. Rather, we seek to make educated guesses on the basis of our understanding of the principles of linkage analysis, on our knowledge of the problems of statistical inference and on our intuition of how genes might influence vulnerability to complex human behavioral traits. 27 refs.
Analysis and design of friction stir welding tool
NASA Astrophysics Data System (ADS)
Jagadeesha, C. B.
2016-12-01
Since its inception, no analysis and design of the FSW tool had been done; initial dimensions of an FSW tool are decided by educated guess. Optimum stresses on the tool pin have been determined at optimized parameters for bead-on-plate welding on an AZ31B-O Mg alloy plate. Fatigue analysis showed that the FSW tool chosen for the welding experiment does not have infinite life; its life was determined to be 2.66×10⁵ cycles (revolutions). One can therefore conclude that an arbitrarily dimensioned FSW tool generally has finite life and cannot be assumed to last indefinitely. In general, the suitability of a tool and its material for FSW of given workpiece materials can be determined in advance by this analysis, in terms of the fatigue life of the tool.
NASA Astrophysics Data System (ADS)
Chen, Shiyu; Li, Haiyang; Baoyin, Hexi
2018-06-01
This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are one main challenge in optimizing multi-leg transfers via shooting algorithms. Such a difficulty is reduced by first optimizing each trajectory leg individually. The results are then utilized as an initial guess in the simultaneous optimization of multiple trajectory legs. In this paper, the limitations of similar techniques in previous research are surpassed, and a homotopic approach is employed to improve the convergence efficiency of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.
Low-Thrust Transfers from Distant Retrograde Orbits to L2 Halo Orbits in the Earth-Moon System
NASA Technical Reports Server (NTRS)
Parrish, Nathan L.; Parker, Jeffrey S.; Hughes, Steven P.; Heiligers, Jeannette
2016-01-01
This paper presents a study of transfers between distant retrograde orbits (DROs) and L2 halo orbits in the Earth-Moon system that could be flown by a spacecraft with solar electric propulsion (SEP). Two collocation-based optimal control methods are used to optimize these highly-nonlinear transfers: Legendre pseudospectral and Hermite-Simpson. Transfers between DROs and halo orbits using low-thrust propulsion have not been studied previously. This paper offers a study of several families of trajectories, parameterized by the number of orbital revolutions in a synodic frame. Even with a poor initial guess, a method is described to reliably generate families of solutions. The circular restricted 3-body problem (CRTBP) is used throughout the paper so that the results are autonomous and simpler to understand.
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
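The monotonic basin hopping idea (perturb the best-known solution, re-optimize locally, accept only improvements) can be sketched in a few lines. The objective, step size, and hop count below are illustrative placeholders, and a generic local optimizer stands in for the HDDP inner solver:

```python
import numpy as np
from scipy.optimize import minimize

def monotonic_basin_hopping(f, x0, n_hops=200, step=0.5, seed=0):
    # Gradient-based local search (scipy's minimize standing in for the DDP
    # solver) wrapped in a monotonic accept-if-better outer loop.
    rng = np.random.default_rng(seed)
    best = minimize(f, x0).x
    best_val = f(best)
    for _ in range(n_hops):
        trial = minimize(f, best + rng.normal(scale=step, size=best.size)).x
        if f(trial) < best_val:
            best, best_val = trial, f(trial)
    return best, best_val

# Multimodal toy objective: global minimum 0 at the origin, many local basins.
f = lambda x: np.sum(x ** 2) + 2.0 * np.sum(np.sin(5.0 * x) ** 2)
x_best, f_best = monotonic_basin_hopping(f, np.array([3.0, -2.5]))
```

Plain DDP (or any gradient method) started at [3.0, -2.5] would stall in a nearby local basin; the monotonic hops let the search drift toward the global minimum while never accepting a worse incumbent.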
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Qayyum, Sumaira; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
Flow of a second grade fluid due to a rotating disk with heat and mass transfer is discussed. Additional effects of heat generation/absorption are also analyzed. The flow is also subjected to homogeneous-heterogeneous reactions. The convergence of the computed solution is assured through appropriate choices of initial guesses and auxiliary parameters. The effects of the involved parameters on the velocities (radial, axial, tangential), temperature and concentration are investigated. Skin friction and Nusselt number are also analyzed. Graphical results depict that an increase in the viscoelastic parameter enhances the axial, radial and tangential velocities. Opposite behavior of temperature is observed for larger values of the viscoelastic and heat generation/absorption parameters. The concentration profile is an increasing function of Schmidt number, viscoelastic parameter and heterogeneous reaction parameter. Magnitudes of skin friction and Nusselt number are enhanced for larger viscoelastic parameter.
Testing the limits of optimality: the effect of base rates in the Monty Hall dilemma.
Herbranson, Walter T; Wang, Shanglun
2014-03-01
The Monty Hall dilemma is a probability puzzle in which a player tries to guess which of three doors conceals a desirable prize. After an initial selection, one of the nonchosen doors is opened, revealing that it is not a winner, and the player is given the choice of staying with the initial selection or switching to the other remaining door. Pigeons and humans were tested on two variants of the Monty Hall dilemma, in which one of the three doors had either a higher or a lower chance of containing the prize than did the other two options. The optimal strategy in both cases was to initially choose the lowest-probability door available and then switch away from it. Whereas pigeons learned to approximate the optimal strategy, humans failed to do so on both accounts: They did not show a preference for low-probability options, and they did not consistently switch. An analysis of performance over the course of training indicated that pigeons learned to perform a sequence of responses on each trial, and that sequence was one that yielded the highest possible rate of reinforcement. Humans, in contrast, continued to vary their responses throughout the experiment, possibly in search of a more complex strategy that would exceed the maximum possible win rate.
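The optimal strategy in the base-rate variant (initially choose the lowest-probability door, then switch) is easy to verify by simulation. This sketch assumes the host always opens a non-chosen, non-winning door; the probabilities and door labels are illustrative:

```python
import random

DOORS = (0, 1, 2)

def play(base_rates, first_choice, switch, rng):
    # One trial: the prize location is drawn from the (possibly unequal) base
    # rates; the host opens a door that hides no prize and was not chosen.
    prize = rng.choices(DOORS, weights=base_rates)[0]
    opened = next(d for d in DOORS if d != first_choice and d != prize)
    final = (next(d for d in DOORS if d != first_choice and d != opened)
             if switch else first_choice)
    return final == prize

def win_rate(base_rates, first_choice, switch, n=20000, seed=1):
    rng = random.Random(seed)
    return sum(play(base_rates, first_choice, switch, rng) for _ in range(n)) / n

rates = [0.2, 0.4, 0.4]                                # door 0 is low-probability
stay = win_rate(rates, first_choice=0, switch=False)   # ~0.2
swap = win_rate(rates, first_choice=0, switch=True)    # ~0.8
```

Switching after choosing the low-probability door wins whenever the prize was not behind it, hence the ~0.8 win rate that the pigeons' learned sequence approximates.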
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
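The two-step structure (a lookup table supplying the initial guess, followed by iterative fitting) can be sketched with a toy forward model. The model, grid, and parameter names here are invented for illustration and bear no relation to the actual two-layered tissue model:

```python
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.linspace(500.0, 900.0, 50)   # nm, illustrative

def model(params):
    # Toy stand-in for the reflectance forward model.
    a, b = params
    return a * np.exp(-wavelengths / 1000.0) + b * (wavelengths / 1000.0)

# Step 1: precompute spectra on a coarse parameter grid (the lookup table).
grid = [(a, b) for a in np.linspace(0.1, 2.0, 20)
               for b in np.linspace(0.1, 2.0, 20)]
table = np.array([model(p) for p in grid])

def estimate(measured):
    # The nearest table entry gives the initial guess...
    p0 = grid[int(np.argmin(np.sum((table - measured) ** 2, axis=1)))]
    # ...which the iterative fitting step then refines.
    return least_squares(lambda p: model(p) - measured, p0).x

estimated = estimate(model((0.73, 1.21)))
```

The table lookup replaces repeated forward-model evaluations during the search, which is where the bulk of the reported speedup comes from, and it starts the fit near the global minimum.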
3D/2D image registration using weighted histogram of gradient directions
NASA Astrophysics Data System (ADS)
Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang
2015-03-01
Three-dimensional (3D) to two-dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to ±90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
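A magnitude-weighted histogram of gradient directions, the image signature underlying the method above, can be computed in a few lines; the bin count and normalization are illustrative choices, not the paper's exact recipe:

```python
import numpy as np

def gradient_direction_histogram(img, bins=36):
    # Histogram of gradient directions with each pixel weighted by its
    # gradient magnitude; strong edges dominate, flat regions contribute little.
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, 2.0 * np.pi),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A horizontal intensity ramp has gradients pointing purely along +x,
# so all of the weight falls in the first direction bin.
ramp = np.tile(np.arange(32.0), (32, 1))
h = gradient_direction_histogram(ramp)
```

Because an in-plane rotation of the image circularly shifts this histogram, comparing shifted histograms gives a cheap rotation estimate without generating a DRR per candidate pose.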
NASA Astrophysics Data System (ADS)
Langenbrunner, B.
2017-12-01
To learn how the world will change because of human-caused warming, we use computer-made worlds that couple land, water, and air to study their responses to the causes of warming over many years. For changes to rain and falling ice-water, these computer worlds are great at answering questions about very large places, like big areas of land or water, but they are not as good when thinking about more focused areas, like cities or states. This is especially true in the state where this meeting happens most years; will it be wetter or drier by the year 2100, and by how much? I will talk about the work being done to learn why these computer worlds do not always agree, as well as the work that finds changes on which they do agree. One big reason they don't agree is because these computer worlds arrive at different guesses on how winds will shift high up in the air in cooler months. These winds will push rain and falling ice-water to different places up and down the state over time, making it hard to know what we can expect, though our best guess is that it will be ever-so-slightly wetter. Computer worlds do agree, however, on two important things across most of the state: that the very largest bursts of rain will happen more often as the world warms, and that more often, very wet years will follow very dry years immediately before them. Taken together, these changes are important to those in the state who plan for up-coming water needs. Knowing how normal rain and ice-water will change is part of the story, but perhaps more important is understanding how the very biggest showers are shifting, which will help the state plan for and handle these more sudden (and serious) bursts of water.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values.
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
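A toy sketch of the scatter-search pattern the abstract describes (diversify, combine reference solutions, improve locally); the combination rule, reference-set size, and objective are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def rugged(x):
    """Mildly multimodal test objective with its global minimum at the origin."""
    x = np.asarray(x)
    return np.sum(x * x) / 10.0 + np.sum(1.0 - np.cos(x))

def scatter_search(f, bounds, n_ref=6, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    # Diversification: sample widely, keep the n_ref best points as the reference set
    pop = rng.uniform(lo, hi, size=(10 * n_ref, len(lo)))
    ref = sorted(pop, key=f)[:n_ref]
    for _ in range(n_iter):
        children = []
        for i in range(n_ref):
            for j in range(i + 1, n_ref):
                lam = rng.uniform(-0.5, 1.5)   # combine two reference solutions
                child = np.clip(ref[i] + lam * (ref[j] - ref[i]), lo, hi)
                # Improvement: short local search from the combined point
                res = minimize(f, child, method="Nelder-Mead",
                               options={"maxiter": 50, "xatol": 1e-8})
                children.append(res.x)
        # Reference-set update: keep the best of parents and children
        ref = sorted(list(ref) + children, key=f)[:n_ref]
    return ref[0]

best = scatter_search(rugged, bounds=[(-10, 10), (-10, 10)])
```

The interplay shown here, global recombination supplying starting points for cheap local polishing, is the mechanism by which such hybrids avoid needing a single very good initial guess.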
Lin, Yi-Chung; Pandy, Marcus G
2017-07-05
The aim of this study was to perform full-body three-dimensional (3D) dynamic optimization simulations of human locomotion by driving a neuromusculoskeletal model toward in vivo measurements of body-segmental kinematics and ground reaction forces. Gait data were recorded from 5 healthy participants who walked at their preferred speeds and ran at 2 m/s. Participant-specific data-tracking dynamic optimization solutions were generated for one stride cycle using direct collocation in tandem with an OpenSim-MATLAB interface. The body was represented as a 12-segment, 21-degree-of-freedom skeleton actuated by 66 muscle-tendon units. Foot-ground interaction was simulated using six contact spheres under each foot. The dynamic optimization problem was to find the set of muscle excitations needed to reproduce 3D measurements of body-segmental motions and ground reaction forces while minimizing the time integral of muscle activations squared. Direct collocation took on average 2.7±1.0 h and 2.2±1.6 h of CPU time, respectively, to solve the optimization problems for walking and running. Model-computed kinematics and foot-ground forces were in good agreement with corresponding experimental data while the calculated muscle excitation patterns were consistent with measured EMG activity. The results demonstrate the feasibility of implementing direct collocation on a detailed neuromusculoskeletal model with foot-ground contact to accurately and efficiently generate 3D data-tracking dynamic optimization simulations of human locomotion. The proposed method offers a viable tool for creating feasible initial guesses needed to perform predictive simulations of movement using dynamic optimization theory. The source code for implementing the model and computational algorithm may be downloaded at http://simtk.org/home/datatracking. Copyright © 2017 Elsevier Ltd. All rights reserved.
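The transcription idea behind direct collocation can be shown on a far smaller problem than the 66-muscle model above: states and controls at all nodes become decision variables, and the dynamics become equality ("defect") constraints. The double-integrator system, horizon, and cost below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

N = 11                # collocation nodes
h = 1.0 / (N - 1)     # uniform step over a 1 s horizon

def unpack(w):
    return w[:N], w[N:2*N], w[2*N:]          # position, velocity, control

def cost(w):
    _, _, u = unpack(w)
    return h * np.sum((u[:-1] ** 2 + u[1:] ** 2) / 2)   # trapezoidal integral of u^2

def defects(w):
    x, v, u = unpack(w)
    dx = x[1:] - x[:-1] - h * (v[:-1] + v[1:]) / 2      # x' = v
    dv = v[1:] - v[:-1] - h * (u[:-1] + u[1:]) / 2      # v' = u
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]               # rest-to-rest, unit displacement
    return np.concatenate([dx, dv, bc])

# Initial guess: linearly interpolated position, zero velocity and control
w0 = np.concatenate([np.linspace(0.0, 1.0, N), np.zeros(N), np.zeros(N)])
sol = minimize(cost, w0, method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 200, "ftol": 1e-10})
```

Note how naturally an initial guess enters: here a crude linear interpolation of the states suffices, which is exactly the property that makes collocation attractive for seeding larger predictive simulations.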
Ziaei, Vafa; Bredow, Thomas
2016-11-07
In this work, we apply many-body perturbation theory (MBPT) to large critical charge transfer (CT) complexes to assess its performance on the S1 excitation energy. Since the S1 energy of CT compounds is heavily dependent on the Hartree-Fock (HF) exchange fraction in the reference density functional, MBPT opens a new way for reliable prediction of the CT S1 energy without explicit knowledge of the suitable amount of HF-exchange, in contrast to time-dependent density functional theory (TD-DFT), where, depending on the functional, large errors can arise. Thus, simply by starting from a (semi-)local reference functional and performing an update of the Kohn-Sham (KS) energies in the Green's function G while keeping the dynamical screened interaction W(ω) frozen at the mean-field level, we obtain impressively accurate S1 energies at only slightly higher computational cost than TD-DFT. However, this energy-only updating mechanism in G fails to work if the initial guess contains a fraction of, or 100%, HF-exchange, and hence a considerably inaccurate S1 energy is predicted. Furthermore, eigenvalue updating both in G and W(ω) overshoots the S1 energy due to enhanced underscreening of W(ω), independent of the (hybrid-)DFT starting orbitals. A full energy-update on top of HF orbitals overestimates the S1 energy even further. An additional update of the KS wave functions within the Quasi-Particle Self-Consistent GW (QSGW) scheme deteriorates the results, in stark contrast to the good results obtained from QSGW for periodic systems. For the sake of transferability, we further present data for small critical non-charge-transfer systems, confirming the outcomes of the CT systems.
Particle Swarm Optimization of Low-Thrust, Geocentric-to-Halo-Orbit Transfers
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.
Missions to Lagrange points are becoming increasingly popular amongst spacecraft mission planners. Lagrange points are locations in space where the gravity force from two bodies, and the centrifugal force acting on a third body, cancel. To date, all spacecraft that have visited a Lagrange point have done so using high-thrust, chemical propulsion. Due to the increasing availability of low-thrust (high efficiency) propulsive devices, and their increasing capability in terms of fuel efficiency and instantaneous thrust, it has now become possible for a spacecraft to reach a Lagrange point orbit without the aid of chemical propellant. While at any given time there are many paths for a low-thrust trajectory to take, only one is optimal. The traditional approach to spacecraft trajectory optimization utilizes some form of gradient-based algorithm. While these algorithms offer numerous advantages, they also have a few significant shortcomings. The three most significant shortcomings are: (1) the fact that an initial guess solution is required to initialize the algorithm, (2) the radius of convergence can be quite small and can allow the algorithm to become trapped in local minima, and (3) gradient information is not always accessible nor always trustworthy for a given problem. To avoid these problems, this dissertation is focused on optimizing a low-thrust transfer trajectory from a geocentric orbit to an Earth-Moon, L1, Lagrange point orbit using the method of Particle Swarm Optimization (PSO). The PSO method is an evolutionary heuristic that was originally written to model birds swarming to locate hidden food sources. This PSO method will enable the exploration of the invariant stable manifold of the target Lagrange point orbit in an effort to optimize the spacecraft's low-thrust trajectory. Examples of these optimized trajectories are presented and contrasted with those found using traditional, gradient-based approaches.
In summary, the results of this dissertation find that the PSO method does, indeed, successfully optimize the low-thrust trajectory transfer problem without the need for initial guessing. Furthermore, a two-degree-of-freedom PSO problem formulation significantly outperformed a one-degree-of-freedom formulation by at least an order of magnitude, in terms of CPU time. Finally, the PSO method is also used to solve a traditional, two-burn, impulsive transfer to a Lagrange point orbit using a hybrid optimization algorithm that incorporates a gradient-based shooting algorithm as a pre-optimizer. Surprisingly, the results of this study show that "fast" transfers outperform "slow" transfers in terms of both Δv and time of flight.
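A bare-bones PSO loop illustrating the guess-free character claimed above: the swarm is seeded over the whole search box rather than from one initial guess. Coefficients are typical textbook values, not the dissertation's, and the quadratic objective is a stand-in for a trajectory cost:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer over a box; no gradients needed."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions fill the box
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()               # swarm best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

best, val = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2, [(-10, 10)] * 2)
```
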
Defectors Cannot Be Detected during “Small Talk” with Strangers
Manson, Joseph H.; Gervais, Matthew M.; Kline, Michelle A.
2013-01-01
To account for the widespread human tendency to cooperate in one-shot social dilemmas, some theorists have proposed that cooperators can be reliably detected based on ethological displays that are difficult to fake. Experimental findings have supported the view that cooperators can be distinguished from defectors based on “thin slices” of behavior, but the relevant cues have remained elusive, and the role of the judge's perspective remains unclear. In this study, we followed triadic conversations among unacquainted same-sex college students with unannounced dyadic one-shot prisoner's dilemmas, and asked participants to guess the PD decisions made toward them and among the other two participants. Two other sets of participants guessed the PD decisions after viewing videotape of the conversations, either with foreknowledge (informed), or without foreknowledge (naïve), of the post-conversation PD. Only naïve video viewers approached better-than-chance prediction accuracy, and they were significantly accurate at predicting the PD decisions of only opposite-sexed conversation participants. Four ethological displays recently proposed to cue defection in one-shot social dilemmas (arms crossed, lean back, hand touch, and face touch) failed to predict either actual defection or guesses of defection by any category of observer. Our results cast doubt on the role of “greenbeard” signals in the evolution of human prosociality, although they suggest that eavesdropping may be more informative about others' cooperative propensities than direct interaction. PMID:24358201
Gamma activity modulated by naming of ambiguous and unambiguous images: intracranial recording
Cho-Hisamoto, Yoshimi; Kojima, Katsuaki; Brown, Erik C; Matsuzaki, Naoyuki; Asano, Eishi
2014-01-01
OBJECTIVE Humans sometimes need to recognize objects based on vague and ambiguous silhouettes. Recognition of such images may require an intuitive guess. We determined the spatial-temporal characteristics of intracranially-recorded gamma activity (at 50–120 Hz) augmented differentially by naming of ambiguous and unambiguous images. METHODS We studied ten patients who underwent epilepsy surgery. Ambiguous and unambiguous images were presented during extraoperative electrocorticography recording, and patients were instructed to overtly name the object as it was first perceived. RESULTS Both naming tasks were commonly associated with gamma-augmentation sequentially involving the occipital and occipital-temporal regions, bilaterally, within 200 ms after the onset of image presentation. Naming of ambiguous images elicited gamma-augmentation specifically involving portions of the inferior-frontal, orbitofrontal, and inferior-parietal regions at 400 ms and after. Unambiguous images were associated with more intense gamma-augmentation in portions of the occipital and occipital-temporal regions. CONCLUSIONS Frontal-parietal gamma-augmentation specific to ambiguous images may reflect the additional cortical processing involved in making an intuitive guess. Occipital gamma-augmentation enhanced during naming of unambiguous images can be explained by visual processing of stimuli with richer detail. SIGNIFICANCE Our results support the theoretical model that guessing processes in the visual domain occur following the accumulation of sensory evidence resulting from bottom-up processing in the occipital-temporal visual pathways. PMID:24815577
RVB signatures in the spin dynamics of the square-lattice Heisenberg antiferromagnet
NASA Astrophysics Data System (ADS)
Ghioldi, E. A.; Gonzalez, M. G.; Manuel, L. O.; Trumper, A. E.
2016-03-01
We investigate the spin dynamics of the square-lattice spin-1/2 Heisenberg antiferromagnet by means of an improved mean-field Schwinger boson calculation. By identifying both the long-range Néel and the RVB-like components of the ground state, we propose an educated guess for the mean-field magnetic excitation, consisting of a linear combination of local and bond spin flips, to compute the dynamical structure factor. Our main result is that when this magnetic excitation is optimized in such a way that the corresponding sum rule is fulfilled, we recover the low- and high-energy spectral weight features of the experimental spectrum. In particular, the anomalous spectral weight depletion at (π,0) found in recent inelastic neutron scattering experiments can be attributed to the interference of the triplet bond excitations of the RVB component of the ground state. We conclude that the Schwinger boson theory seems to be a good candidate to adequately interpret the dynamic properties of the square-lattice Heisenberg antiferromagnet.
Constable Receives 2013 William Gilbert Award: Response
NASA Astrophysics Data System (ADS)
Constable, Catherine
2014-07-01
Thank you for the generous citation. I am truly honored that the GP section considers me a worthy recipient of this award. As you might guess, I don't deserve exclusive credit for everything in Rick's citation, and if you look at the coauthors on all my publications, you can get a pretty good idea of who really did the work. I have been fortunate to work in a highly collegial environment at the University of California, San Diego, where my mentors, colleagues, postdocs, and students at Scripps Institution of Oceanography's Institute of Geophysics and Planetary Physics have been tremendously important to me. Additionally, this year (2013) marks 30 years since I first attended the AGU Fall Meeting, and the GP section has really been my scientific home within this organization, connecting me to a broader range of collaborators from all over the world. I owe thanks to a large number of people who have contributed to the fun of doing science and provided so many opportunities for lively discussion.
Xie, Qi; Liu, Wenhao; Wang, Shengbao; Han, Lidong; Hu, Bin; Wu, Ting
2014-09-01
Patient privacy preservation, security, and mutual authentication between the patient and the medical server are important mechanisms in connected health care applications, such as telecare medical information systems and personally controlled health records systems. In 2013, Wen showed that Das et al.'s scheme is vulnerable to the replay attack, user impersonation attacks and off-line guessing attacks, and then proposed an improved scheme using biometrics, password and smart card to overcome these weaknesses. However, we show that Wen's scheme is still vulnerable to off-line password guessing attacks, and does not provide user anonymity or perfect forward secrecy. Further, we propose an improved scheme to fix these weaknesses, and use ProVerif, a formal verification tool based on the applied pi calculus, to prove its security and authentication properties.
Major Upgrades to the AIRS Version-6 Water Vapor Profile Methodology
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Lee, Jae N.
2015-01-01
Additional changes in Version-6.19 include all previous updates made to the q(p) retrieval since Version-6: a modified Neural-Net q0(p) guess above the tropopause; a linear taper of the neural-net guess to match climatology at 70 mb, rather than at the top of the atmosphere; and a change of the 11 trapezoid q(p) perturbation functions used in Version-6 so as to match the 24 functions used in the T(p) retrieval step. These modifications resulted in improved water vapor profiles in Version-6.19 compared to Version-6. Version-6.19 has been tested for all of August 2013 and August 2014, as well as for select other days. Before it is finalized and becomes operational in 2016, Version-6.19 can be acquired upon request for limited time intervals.
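The linear taper mentioned above can be sketched as a pressure-dependent blend. The 70 mb endpoint follows the text; the tropopause pressure and the exact ramp are illustrative assumptions:

```python
import numpy as np

def tapered_guess(p, q_nn, q_clim, p_top=70.0, p_tropopause=200.0):
    """Blend a neural-net q(p) guess into climatology so the first guess
    matches climatology exactly at p_top (70 mb) instead of at the top of
    the atmosphere. Pressures in mb; p_tropopause is a placeholder value."""
    w = np.clip((p - p_top) / (p_tropopause - p_top), 0.0, 1.0)
    return w * q_nn + (1.0 - w) * q_clim
```

At the tropopause the guess is purely the neural-net value, at 70 mb and above it is purely climatology, and in between it ramps linearly.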
NASA Astrophysics Data System (ADS)
Zamora Ramos, Ernesto
Artificial Intelligence is a big part of automation, and with today's technological advances, artificial intelligence has taken great strides toward positioning itself as the technology of the future to control, enhance and perfect automation. Computer vision, which includes pattern recognition, classification, and machine learning, is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we present novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images based on modifications to the standard histogram equalization for an image. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many others. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot for capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, it identifies the optimal moment in which to perform maintenance on marked solar panels so as to minimize the energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates.
We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons and convolutional neural networks. Our research with neural networks encountered a great deal of difficulty regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameters (or weights) initialization, and so on, are chosen via a trial and error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate among a group of candidate strategies which would make the network converge to the highest classification accuracy faster with high probability. Our method provides a quick, objective measure to compare initialization strategies to select the best possible among them beforehand, without having to complete multiple training sessions for each candidate strategy to compare final results.
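One simple, quantitative way to compare weight-initialization strategies without full training runs is to measure how activation magnitudes survive a deep forward pass. This is a crude illustrative proxy, not the thesis's (unspecified) method; the depth, width, and candidate scales are assumptions:

```python
import numpy as np

def activation_std_after(depth, init_std, width=256, seed=0):
    """Propagate a random input through `depth` tanh layers whose weights are
    drawn with standard deviation `init_std`, and report the spread of the
    final activations. A vanishing spread signals a poor initialization."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * init_std
        a = np.tanh(W @ a)
    return a.std()

too_small = activation_std_after(depth=20, init_std=0.01)                # signal dies out
xavier_like = activation_std_after(depth=20, init_std=1 / np.sqrt(256))  # scale preserved
```

A strategy whose activations collapse toward zero (or saturate) before training even starts is unlikely to converge quickly, which is the intuition behind comparing candidates beforehand.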
A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Edwards, Jack R.; McRae, D. S.
1992-01-01
A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.
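The relaxation strategy above (cheap sweeps seeding a stronger iteration) can be shown in miniature on a linear model problem. This is not the paper's compressible Navier-Stokes scheme; it only contrasts Jacobi sweeps with Gauss-Seidel sweeps, which reuse freshly updated values and converge in fewer passes:

```python
import numpy as np

def relax(n=50, tol=1e-8, max_sweeps=20000, gauss_seidel=True):
    """Solve the 1D Poisson system -u'' = 1, u(0)=u(1)=0 by relaxation
    sweeps, starting from a zero initial guess."""
    h = 1.0 / (n + 1)
    f = np.full(n, h * h)                    # scaled right-hand side
    u = np.zeros(n)
    for sweep in range(1, max_sweeps + 1):
        u_old = u.copy()
        src = u if gauss_seidel else u_old   # GS reads values updated this sweep
        for i in range(n):
            left = src[i - 1] if i > 0 else 0.0
            right = src[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (left + right + f[i])
        if np.max(np.abs(u - u_old)) < tol:
            return u, sweep
    return u, max_sweeps

u_gs, sweeps_gs = relax(gauss_seidel=True)
u_jac, sweeps_jac = relax(gauss_seidel=False)
```
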
NASA Technical Reports Server (NTRS)
Poosti, Sassaneh; Akopyan, Sirvard; Sakurai, Regina; Yun, Hyejung; Saha, Pranjit; Strickland, Irina; Croft, Kevin; Smith, Weldon; Hoffman, Rodney; Koffend, John;
2006-01-01
TES Level 2 Subsystem is a set of computer programs that performs functions complementary to those of the program summarized in the immediately preceding article. TES Level-2 data pertain to retrieved species (or temperature) profiles, and errors thereof. Geolocation, quality, and other data (e.g., surface characteristics for nadir observations) are also included. The subsystem processes gridded meteorological information and extracts parameters that can be interpolated to the appropriate latitude, longitude, and pressure level based on the date and time. Radiances are simulated using the aforementioned meteorological information for initial guesses, and spectroscopic-parameter tables are generated. At each step of the retrieval, a nonlinear-least-squares- solving routine is run over multiple iterations, retrieving a subset of atmospheric constituents, and error analysis is performed. Scientific TES Level-2 data products are written in a format known as Hierarchical Data Format Earth Observing System 5 (HDF-EOS 5) for public distribution.
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob A.
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
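The MBH idea in miniature: a gradient-based local solve converges to the basin of its initial guess, while random "hops" followed by re-solves escape it. The two-minimum objective below is a toy stand-in for a trajectory cost, and scipy's general-purpose `basinhopping` stands in for MBH itself:

```python
import numpy as np
from scipy.optimize import basinhopping, minimize

def cost(p):
    x, y = p
    return (x * x - 1.0) ** 2 + 0.2 * x + y * y   # global minimum near x = -1

local = minimize(cost, [2.0, 1.0])                # gradient descent: trapped near x = +1
hops = basinhopping(cost, x0=[2.0, 1.0], niter=100, stepsize=2.0)
```

The local solver stops at the nearer, shallower minimum (positive cost), whereas the hopping wrapper reaches the deeper basin on the other side of the barrier.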
Dynamic nuclear polarization and optimal control spatial-selective 13C MRI and MRS
NASA Astrophysics Data System (ADS)
Vinding, Mads S.; Laustsen, Christoffer; Maximov, Ivan I.; Søgaard, Lise Vejby; Ardenkjær-Larsen, Jan H.; Nielsen, Niels Chr.
2013-02-01
Aimed at 13C metabolic magnetic resonance imaging (MRI) and spectroscopy (MRS) applications, we demonstrate that dynamic nuclear polarization (DNP) may be combined with optimal control 2D spatial selection to simultaneously obtain high sensitivity and well-defined spatial restriction. This is achieved through the development of spatial-selective single-shot spiral-readout MRI and MRS experiments combined with dynamic nuclear polarization hyperpolarized [1-13C]pyruvate on a 4.7 T pre-clinical MR scanner. The method stands out from related techniques by providing anatomically shaped region-of-interest (ROI) single-metabolite signals that can be used for higher image resolution or single-peak spectra. The 2D spatial-selective rf pulses were designed using a novel Krotov-based optimal control approach capable of quickly and iteratively providing successful pulse sequences in the absence of qualified initial guesses. The technique may be important for early detection of abnormal metabolism, monitoring disease progression, and drug research.
Dutton, D L
1988-04-15
For many people, belief in the paranormal derives from personal experience of face-to-face interviews with astrologers, palm readers, aura and Tarot readers, and spirit mediums. These encounters typically involve cold reading, a process in which a reader makes calculated guesses about a client's background and problems and, depending on the reaction, elaborates a reading which seems to the client so uniquely appropriate that it carries with it the illusion of having been produced by paranormal means. The cold reading process is shown to depend initially on the Barnum effect, the tendency for people to embrace generalized personality descriptions as idiosyncratically their own. Psychological research into the Barnum effect is critically reviewed, and uses of the effect by a professional magician are described. This is followed by detailed analysis of the cold reading performances of a spirit medium. Future research should investigate the degree to which cold readers may have convinced themselves that they actually possess psychic or paranormal abilities.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixture. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
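An EM sketch for the simplest case discussed above: a two-subpopulation mixed-Weibull fit to complete (uncensored, nonpostmortem) data. The E-step computes membership probabilities and the M-step refits each Weibull by weighted maximum likelihood; the synthetic data and starting guesses are illustrative. The key property noted in the abstract, a non-decreasing log-likelihood, is checked below:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
data = np.concatenate([weibull_min.rvs(1.2, scale=1.0, size=300, random_state=1),
                       weibull_min.rvs(4.0, scale=8.0, size=300, random_state=2)])

def em_weibull_mixture(t, n_iter=25):
    # Initial guesses: two deliberately rough (shape, scale) pairs
    w, params = 0.5, [(1.0, np.median(t) / 2), (2.0, np.median(t) * 2)]
    ll_trace = []
    for _ in range(n_iter):
        pdf1 = weibull_min.pdf(t, params[0][0], scale=params[0][1])
        pdf2 = weibull_min.pdf(t, params[1][0], scale=params[1][1])
        mix = w * pdf1 + (1 - w) * pdf2
        ll_trace.append(np.log(mix).sum())
        r = w * pdf1 / mix                                  # E-step responsibilities
        w = r.mean()
        new_params = []
        for resp, (k0, lam0) in zip((r, 1 - r), params):    # M-step: weighted MLE
            nll = lambda p, resp=resp: -(resp * weibull_min.logpdf(t, p[0], scale=p[1])).sum()
            res = minimize(nll, [k0, lam0], method="L-BFGS-B",
                           bounds=[(0.05, 50), (1e-6, None)])
            new_params.append(tuple(res.x))
        params = new_params
    return w, params, ll_trace

w, params, ll = em_weibull_mixture(data)
```

Running the same loop from several different starting parameter sets, as the abstract recommends, guards against the multiple local maxima of the mixture likelihood.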
Computer simulations of phase field drops on super-hydrophobic surfaces
NASA Astrophysics Data System (ADS)
Fedeli, Livio
2017-09-01
We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Discretization is performed through cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code produces realistic three-dimensional computer experiments comparable to those performed in laboratory settings. This code not only offers new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.
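Continuation in miniature: march a parameter upward and pass each converged solution to the nonlinear solver as the initial guess for the next step. The 1D Bratu problem and the solver choice below are illustrative stand-ins; the paper's phase-field model and quasi-Newton scheme are far richer:

```python
import numpy as np
from scipy.optimize import root

# 1D Bratu problem: u'' + lam * exp(u) = 0, u(0) = u(1) = 0
n = 50
h = 1.0 / (n + 1)

def residual(u, lam):
    upad = np.concatenate([[0.0], u, [0.0]])          # Dirichlet boundaries
    return (upad[:-2] - 2 * u + upad[2:]) / h**2 + lam * np.exp(u)

u = np.zeros(n)                                       # trivial guess, valid at lam = 0
for lam in np.linspace(0.5, 3.0, 6):                  # march up the solution branch
    sol = root(residual, u, args=(lam,), method="hybr")
    u = sol.x                                         # warm-start the next step
```

Solving directly at lam = 3 from the zero guess is riskier; warm-starting each step keeps the solver inside its convergence basin, which is exactly the role of the continuation procedure in the abstract.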
NASA Technical Reports Server (NTRS)
Sohn, Byung-Ju; Smith, Eric A.
1993-01-01
The maximum entropy production principle suggested by Paltridge (1975) is applied to separating the satellite-determined required total transports into atmospheric and oceanic components. Instead of using the excessively restrictive equal energy dissipation hypothesis as a deterministic tool for separating transports between the atmosphere and ocean fluids, the satellite-inferred required 2D energy transports are imposed on Paltridge's energy balance model, which is then solved as a variational problem using the equal energy dissipation hypothesis only to provide an initial guess field. It is suggested that Southern Ocean transports are weaker than previously reported. It is argued that a maximum entropy production principle can serve as a governing rule on macroscale global climate, and, in conjunction with conventional satellite measurements of the net radiation balance, provides a means to decompose atmosphere and ocean transports from the total transport field.
Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.
Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron
2017-10-21
The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
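A small numerical illustration on the Mueller(-Brown) test surface named above: from a chosen starting point, repeatedly step along the negative gradient to trace a steepest-descent segment into a minimum. The surface constants are the standard Mueller-Brown parameters; the starting point and step size are illustrative choices, and this is only a fragment of a pathway calculation, not the paper's local-global algorithm:

```python
import numpy as np

# Standard Mueller-Brown parameters
A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])
y0 = np.array([0.0, 0.5, 1.5, 1.0])

def V_and_grad(p):
    x, y = p
    dx, dy = x - x0, y - y0
    e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
    V = e.sum()
    gx = (e * (2 * a * dx + b * dy)).sum()
    gy = (e * (b * dx + 2 * c * dy)).sum()
    return V, np.array([gx, gy])

def descend(p, step=1e-4, n_steps=30000, gtol=1e-4):
    """Trace a steepest-descent path with small fixed steps along -grad V."""
    path = [np.asarray(p, dtype=float)]
    for _ in range(n_steps):
        _, g = V_and_grad(path[-1])
        if np.linalg.norm(g) < gtol:
            break
        path.append(path[-1] - step * g)
    return np.array(path)

path = descend([0.0, 0.5])   # start displaced from the shallow central minimum
```
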
Model based LV-reconstruction in bi-plane x-ray angiography
NASA Astrophysics Data System (ADS)
Backfrieder, Werner; Carpella, Martin; Swoboda, Roland; Steinwender, Clemens; Gabriel, Christian; Leisch, Franz
2005-04-01
Interventional x-ray angiography is state of the art in diagnosis and therapy of severe diseases of the cardiovascular system. Diagnosis is based on contrast enhanced dynamic projection images of the left ventricle. A new model-based algorithm for three-dimensional reconstruction of the left ventricle from bi-planar angiograms was developed. Parametric super ellipses are deformed until their projection profiles optimally fit measured ventricular projections. Deformation is controlled by a simplex optimization procedure. A resulting optimized parameter set builds the initial guess for neighboring slices. A three-dimensional surface model of the ventricle is built from stacked contours. The accuracy of the algorithm has been tested with mathematical phantom data and clinical data. Results show conformance with the provided projection data, and the high convergence speed makes the algorithm useful for clinical application. Fully three-dimensional reconstruction of the left ventricle has a high potential for improvement of clinical findings in interventional cardiology.
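A toy sketch of the fitting idea, with a made-up superellipse projection profile and hypothetical parameter values (not the clinical pipeline): a simplex (Nelder-Mead) search adjusts the shape parameters until the model profile matches a synthetic "measured" one.

```python
import numpy as np
from scipy.optimize import minimize

def projection_profile(x, a, b, n):
    """Chord length of the superellipse |x/a|^n + |y/b|^n = 1 at position x."""
    t = np.clip(1.0 - np.abs(x / a) ** n, 0.0, None)
    return 2.0 * b * t ** (1.0 / n)

# Synthetic "measured" profile from hypothetical true parameters.
xs = np.linspace(-0.9, 0.9, 50)
measured = projection_profile(xs, a=1.0, b=0.8, n=2.5)

def misfit(params):
    a, b, n = params
    return np.sum((projection_profile(xs, a, b, n) - measured) ** 2)

# Simplex (Nelder-Mead) optimization from a rough initial guess; in the
# slice-by-slice scheme the fitted result would seed the neighboring slice.
fit = minimize(misfit, x0=[1.1, 0.7, 2.2], method="Nelder-Mead")
a_fit, b_fit, n_fit = fit.x
```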
An interior-point method-based solver for simulation of aircraft parts riveting
NASA Astrophysics Data System (ADS)
Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael
2018-05-01
The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations because the associated matrix is ill conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with those obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
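The role of a preconditioner on an ill-conditioned system can be illustrated with a generic Jacobi-preconditioned conjugate gradient solve on a synthetic SPD system (a sketch of the general idea, not the authors' solver; the test matrix below is made up):

```python
import numpy as np

def pcg(A, rhs, M_inv_diag, x0, tol=1e-10, max_iter=500):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner.
    M_inv_diag holds 1/A_ii; x0 is the initial guess."""
    x = x0.copy()
    r = rhs - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(rhs):
            return x, k
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Synthetic ill-conditioned SPD system: a well-conditioned core B scaled by
# widely varying row/column factors, mimicking bad matrix scaling.
rng = np.random.default_rng(0)
n = 120
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = (Q * rng.uniform(1.0, 10.0, n)) @ Q.T
s = np.logspace(0.0, 3.0, n)
A = s[:, None] * B * s[None, :]
rhs = rng.standard_normal(n)

x_pcg, it_pcg = pcg(A, rhs, 1.0 / np.diag(A), np.zeros(n))
x_cg, it_cg = pcg(A, rhs, np.ones(n), np.zeros(n))   # no preconditioning
```

With the diagonal preconditioner the iteration count drops sharply relative to the unpreconditioned run, which stalls on this conditioning.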
Chimpanzee choice rates in competitive games match equilibrium game theory predictions.
Martin, Christopher Flynn; Bhui, Rahul; Bossaerts, Peter; Matsuzawa, Tetsuro; Camerer, Colin
2014-06-05
The capacity for strategic thinking about the payoff-relevant actions of conspecifics is not well understood across species. We use game theory to make predictions about choices and temporal dynamics in three abstract competitive situations with chimpanzee participants. Frequencies of chimpanzee choices are extremely close to equilibrium (accurate-guessing) predictions, and shift as payoffs change, just as equilibrium theory predicts. The chimpanzee choices are also closer to the equilibrium prediction, and more responsive to past history and payoff changes, than two samples of human choices from experiments in which humans were also initially uninformed about opponent payoffs and could not communicate verbally. The results are consistent with a tentative interpretation of game theory as explaining evolved behavior, with the additional hypothesis that chimpanzees may retain or practice a specialized capacity to adjust strategy choice during competition to perform at least as well as, or better than, humans have.
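The equilibrium (mutual best-response) choice frequencies in a 2x2 competitive game follow from standard indifference conditions; a minimal sketch with made-up asymmetric matching-pennies payoffs (not the study's exact games):

```python
import numpy as np

def mixed_equilibrium_2x2(A, B):
    """Mixed-strategy equilibrium of a 2x2 game from indifference conditions.
    A[i, j], B[i, j]: payoffs of the row/column player for actions (i, j).
    Returns (p, q): probabilities that row/column play their first action."""
    # Row's mix p is chosen to make the column player indifferent.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    # Column's mix q is chosen to make the row player indifferent.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    return p, q

# Made-up asymmetric payoffs: matching on the first action pays the row
# player more, which shifts the equilibrium frequencies away from 50/50.
A = np.array([[2.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
B = -A                                      # zero-sum opponent
p, q = mixed_equilibrium_2x2(A, B)
```

Shifting a payoff entry shifts the predicted frequencies, which is the comparative-static pattern the chimpanzee choices tracked.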
NASA Astrophysics Data System (ADS)
Shafer, S. L.; Bartlein, P. J.
2017-12-01
The period from 15-10 ka was a time of rapid vegetation changes in North America. Continental ice sheets in northern North America were receding, exposing new habitat for vegetation, and regions distant from the ice sheets experienced equally large environmental changes. Northern hemisphere temperatures during this period were increasing, promoting transitions from cold-adapted to temperate plant taxa at mid-latitudes. Long, transient paleovegetation simulations can provide important information on vegetation responses to climate changes, including both the spatial dynamics and rates of species distribution changes over time. Paleovegetation simulations also can fill the spatial and temporal gaps in observed paleovegetation records (e.g., pollen data from lake sediments), allowing us to test hypotheses about past vegetation changes (e.g., the location of past refugia). We used the CCSM3 TraCE transient climate simulation as input for LPJ-GUESS, a general ecosystem model, to simulate vegetation changes from 15-10 ka for parts of western North America at mid-latitudes (~35-55° N). For these simulations, LPJ-GUESS was parameterized to simulate key tree taxa for western North America (e.g., Pseudotsuga, Tsuga, Quercus, etc.). The CCSM3 TraCE transient climate simulation data were regridded onto a 10-minute grid of the study area. We analyzed the simulated spatial and temporal dynamics of these taxa and compared the simulated changes with observed paleovegetation changes recorded in pollen and plant macrofossil data (e.g., data from the Neotoma Paleoecology Database). In general, the LPJ-GUESS simulations reproduce the general patterns of paleovegetation responses to climate change, although the timing of some simulated vegetation changes does not match the observed paleovegetation record.
We describe the areas and time periods with the greatest data-model agreement and disagreement, and discuss some of the strengths and weaknesses of the simulated climate and vegetation data. The magnitude and rate of the simulated past vegetation changes are compared with projected future vegetation changes for the region.
Moore, Page; Kusek, John; Barry, Michael
2014-01-01
Abstract Objectives: This report assesses participant perception of treatment assignment in a randomized, double-blind, placebo-controlled trial of saw palmetto for the treatment of benign prostatic hyperplasia (BPH). Design: Participants randomized to receive saw palmetto were instructed to take one 320 mg gelcap daily for the first 24 weeks, two 320 mg gelcaps daily for the second 24 weeks, and three 320 mg gelcaps daily for the third 24 weeks. Study participants assigned to placebo were instructed to take the same number of matching placebo gelcaps in each time period. At 24, 48, and 72 weeks postrandomization, the American Urological Association Symptom Index (AUA-SI) was administered and participants were asked to guess their treatment assignment. Settings: The study was conducted at 11 clinical centers in North America. Participants: Study participants were men, 45 years and older, with moderate to severe BPH symptoms, randomized to saw palmetto (N=151) or placebo (N=155). Outcome measures: Treatment arms were compared with respect to the distribution of participant guesses of treatment assignment. Results: For participants assigned to saw palmetto, 22.5%, 24.7%, and 29.8% correctly thought they were taking saw palmetto, and 37.3%, 40.0%, and 44.4% incorrectly thought they were on placebo at 24, 48, and 72 weeks, respectively. For placebo participants, 21.8%, 27.4%, and 25.2% incorrectly thought they were on saw palmetto, and 41.6%, 39.9%, and 42.6% correctly thought they were on placebo at 24, 48, and 72 weeks, respectively. The treatment arms did not vary with respect to the distributions of participants who guessed they were on saw palmetto (p=0.823) or placebo (p=0.893). Participants who experienced an improvement in AUA-SI were 2.16 times more likely to think they were on saw palmetto. Conclusions: Blinding of treatment assignment was successful in this study.
Improvement in BPH-related symptoms was associated with the perception that participants were taking saw palmetto. PMID:23383975
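Comparing guess distributions between arms is a standard contingency-table test; a sketch with hypothetical counts (not the trial's raw data):

```python
from scipy.stats import chi2_contingency

# Hypothetical guess counts at one visit (illustration only);
# rows: actual arm, columns: guessed saw palmetto / guessed placebo / unsure.
table = [
    [34, 56, 61],   # randomized to saw palmetto
    [33, 63, 59],   # randomized to placebo
]
chi2, p_value, dof, expected = chi2_contingency(table)
# A large p-value (similar guess distributions in both arms) is the pattern
# consistent with successful blinding.
```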
NASA Astrophysics Data System (ADS)
Tang, J.
2015-12-01
Permafrost thawing in high latitudes allows more soil organic carbon (SOC) to become hydrologically accessible. This can increase dissolved organic carbon (DOC) exports and carbon release to the atmosphere as CO2 and CH4, with a positive feedback to regional and global climate warming. However, this portion of carbon loss through DOC export is often neglected in ecosystem models. In this paper, we incorporate a set of DOC-related processes (DOC production, mineralization, diffusion, sorption-desorption and leaching) into an Arctic-enabled version of the dynamic ecosystem model LPJ-GUESS (LPJ-GUESS WHyMe) to mechanistically model the DOC export, and to link this flux to other ecosystem processes. The extended LPJ-GUESS WHyMe with these DOC processes is applied to the Stordalen catchment in northern Sweden. The relative importance of different DOC-related processes for mineral and peatland soils for this region has been explored at both monthly and annual scales based on a detailed variance-based Sobol sensitivity analysis. For mineral soils, the annual DOC export is dominated by DOC fluxes in snowmelt seasons and the peak in spring is related to the runoff passing through top organic rich layers. Two processes, DOC sorption-desorption and production, are found to contribute most to the annual variance in DOC export. For peatland soils, the DOC export during snowmelt seasons is constrained by frozen soils, and the processes of DOC production and mineralization, which determine the magnitudes of DOC desorption in snowmelt seasons as well as DOC sorption in the remaining months, play the most important role in annual variances of DOC export. Generally, the seasonality of DOC fluxes is closely correlated with runoff seasonality in this region. The current implementation has demonstrated that DOC-related processes in the framework of LPJ-GUESS WHyMe are at an appropriate level of complexity to represent the main mechanism of DOC dynamics in soils.
The quantified contributions from different processes on DOC export dynamics could be further linked to the climate change, vegetation composition change and permafrost thawing in this region.
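A variance-based first-order Sobol index can be estimated with the standard pick-freeze (Saltelli) scheme; a minimal sketch on a toy model with independent uniform inputs (not the LPJ-GUESS WHyMe analysis itself):

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=20000, seed=0):
    """Pick-freeze (Saltelli) estimate of first-order Sobol indices,
    assuming independent U(0, 1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        AB = A.copy()
        AB[:, i] = B[:, i]               # A with input i taken from B
        S[i] = np.mean(yB * (model(AB) - yA)) / var
    return S

# Toy stand-in for "DOC export as a function of process parameters":
# Y = 2*X1 + X2, for which S1 = 0.8 and S2 = 0.2 analytically.
S = first_order_sobol(lambda X: 2.0 * X[:, 0] + X[:, 1], n_inputs=2)
```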
2011-03-10
This image shows NASA Dawn scientists' best guess to date of what the surface of the protoplanet Vesta might look like; it incorporates the best data on dimples and bulges from ground-based telescopes and NASA's Hubble Space Telescope.
Close binding of identity and location in visual feature perception
NASA Technical Reports Server (NTRS)
Johnston, J. C.; Pashler, H.
1990-01-01
The binding of identity and location information in disjunctive feature search was studied. Ss searched a heterogeneous display for a color or a form target, and reported both target identity and location. To avoid better than chance guessing of target identity (by choosing the target less likely to have been seen), the difficulty of the two targets was equalized adaptively; a mathematical model was used to quantify residual effects. A spatial layout was used that minimized postperceptual errors in reporting location. Results showed strong binding of identity and location perception. After correction for guessing, no perception of identity without location was found. A weak trend was found for accurate perception of target location without identity. We propose that activated features generate attention-calling "interrupt" signals, specifying only location; attention then retrieves the properties at that location.
Cognitive Load Does Not Affect the Behavioral and Cognitive Foundations of Social Cooperation.
Mieth, Laura; Bell, Raoul; Buchner, Axel
2016-01-01
The present study serves to test whether the cognitive mechanisms underlying social cooperation are affected by cognitive load. Participants interacted with trustworthy-looking and untrustworthy-looking partners in a sequential Prisoner's Dilemma Game. Facial trustworthiness was manipulated to stimulate expectations about the future behavior of the partners which were either violated or confirmed by the partners' cheating or cooperation during the game. In a source memory test, participants were required to recognize the partners and to classify them as cheaters or cooperators. A multinomial model was used to disentangle item memory, source memory and guessing processes. We found an expectancy-congruent bias toward guessing that trustworthy-looking partners were more likely to be associated with cooperation than untrustworthy-looking partners. Source memory was enhanced for cheating that violated the participants' positive expectations about trustworthy-looking partners. We were interested in whether or not this expectancy-violation effect-that helps to revise unjustified expectations about trustworthy-looking partners-depends on cognitive load induced via a secondary continuous reaction time task. Although this secondary task interfered with working memory processes in a validation study, both the expectancy-congruent guessing bias as well as the expectancy-violation effect were obtained with and without cognitive load. These findings support the hypothesis that the expectancy-violation effect is due to a simple mechanism that does not rely on demanding elaborative processes. We conclude that most cognitive mechanisms underlying social cooperation presumably operate automatically so that they remain unaffected by cognitive load.
Integrating planning perception and action for informed object search.
Manso, Luis J; Gutierrez, Marco A; Bustos, Pablo; Bachiller, Pilar
2018-05-01
This paper presents a method to reduce the time spent by a robot with cognitive abilities when looking for objects in unknown locations. It describes how machine learning techniques can be used to decide which places should be inspected first, based on images that the robot acquires passively. The proposal is composed of two concurrent processes. The first one uses the aforementioned images to generate a description of the types of objects found in each object container seen by the robot. This is done passively, regardless of the task being performed. The containers can be tables, boxes, shelves or any other kind of container of known shape whose contents can be seen from a distance. The second process uses the previously computed estimation of the contents of the containers to decide which is the most likely container having the object to be found. This second process is deliberative and takes place only when the robot needs to find an object, whether because it is explicitly asked to locate one or because it is needed as a step to fulfil the mission of the robot. Upon failure to guess the right container, the robot can continue making guesses until the object is found. Guesses are made based on the semantic distance between the object to find and the description of the types of the objects found in each object container. The paper provides quantitative results comparing the efficiency of the proposed method and two base approaches.
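A minimal sketch of the semantic-distance ranking idea, with made-up embeddings and container contents (the paper's actual features and distance measure may differ):

```python
import numpy as np

# Made-up object embeddings (a real system would use learned vectors).
emb = {
    "mug":      np.array([0.90, 0.10, 0.00]),
    "kettle":   np.array([0.80, 0.20, 0.10]),
    "book":     np.array([0.10, 0.90, 0.10]),
    "notebook": np.array([0.20, 0.80, 0.00]),
    "cup":      np.array([0.85, 0.15, 0.05]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_containers(target, containers):
    """Order containers by mean semantic similarity between the target and
    the objects previously observed in each container."""
    scores = {name: np.mean([cosine(emb[target], emb[o]) for o in contents])
              for name, contents in containers.items()}
    return sorted(scores, key=scores.get, reverse=True)

containers = {"kitchen_shelf": ["mug", "kettle"], "desk": ["book", "notebook"]}
order = rank_containers("cup", containers)
```

On failure to find the object in the top-ranked container, the robot would simply proceed down this ranking.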
Reliable Transition State Searches Integrated with the Growing String Method.
Zimmerman, Paul
2013-07-09
The growing string method (GSM) is highly useful for locating reaction paths connecting two molecular intermediates. GSM has often been used in a two-step procedure to locate exact transition states (TS), where GSM creates a quality initial structure for a local TS search. This procedure and others like it, however, do not always converge to the desired transition state because the local search is sensitive to the quality of the initial guess. This article describes an integrated technique for simultaneous reaction path and exact transition state search. This is achieved by implementing an eigenvector following optimization algorithm in internal coordinates with Hessian update techniques. After partial convergence of the string, an exact saddle point search begins under the constraint that the maximized eigenmode of the TS node Hessian has significant overlap with the string tangent near the TS. Subsequent optimization maintains connectivity of the string to the TS as well as locks in the TS direction, all but eliminating the possibility that the local search leads to the wrong TS. To verify the robustness of this approach, reaction paths and TSs are found for a benchmark set of more than 100 elementary reactions.
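A bare Newton search for a first-order saddle on a toy surface illustrates why the local step is sensitive to the initial guess (this is a simplification, not GSM's internal-coordinate eigenvector following): Newton iteration on grad V = 0 converges to whatever stationary point lies nearest the start, so a poor guess can land on the wrong one.

```python
import numpy as np

def grad(p):
    x, y = p
    # Model surface V(x, y) = (x^2 - 1)^2 + y^2:
    # minima at (+-1, 0), first-order saddle at (0, 0).
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def hess(p):
    x, _ = p
    return np.array([[12.0 * x**2 - 4.0, 0.0], [0.0, 2.0]])

def newton_stationary(p0, tol=1e-10, max_iter=50):
    """Newton iteration on grad V = 0: converges to the stationary point
    nearest a good starting guess, which is why the quality of the
    string-derived structure matters."""
    p = np.array(p0, dtype=float)
    for _ in range(max_iter):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        p = p - np.linalg.solve(hess(p), g)
    return p

ts = newton_stationary((0.3, 0.4))             # guess near the saddle
eigvals = np.linalg.eigvalsh(hess(ts))
is_first_order_saddle = int((eigvals < 0).sum()) == 1
```

The eigenvalue check at the end is the usual verification that the converged point is a true TS (exactly one negative Hessian eigenvalue).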
Remote sensing of atmospheric aerosols with the SPEX spectropolarimeter
NASA Astrophysics Data System (ADS)
van Harten, G.; Rietjens, J.; Smit, M.; Snik, F.; Keller, C. U.; di Noia, A.; Hasekamp, O.; Vonk, J.; Volten, H.
2013-12-01
Characterizing atmospheric aerosols is key to understanding their influence on climate through their direct and indirect radiative forcing. This requires long-term global coverage, at high spatial (~km) and temporal (~days) resolution, which can only be provided by satellite remote sensing. Aerosol load and properties such as particle size, shape and chemical composition can be derived from multi-wavelength radiance and polarization measurements of sunlight that is scattered by the Earth's atmosphere at different angles. The required polarimetric accuracy of ~10^(-3) is very challenging, particularly since the instrument is located on a rapidly moving platform. Our Spectropolarimeter for Planetary EXploration (SPEX) is based on a novel, snapshot spectral modulator, with the intrinsic ability to measure polarization at high accuracy. It exhibits minimal instrumental polarization and is completely solid-state and passive. An athermal set of birefringent crystals in front of an analyzer encodes the incoming linear polarization into a sinusoidal modulation in the intensity spectrum. Moreover, a dual beam implementation yields redundancy that allows for a mutual correction in both the spectrally and spatially modulated data to increase the measurement accuracy. A partially polarized calibration stimulus has been developed, consisting of a carefully depolarized source followed by tilted glass plates to induce polarization in a controlled way. Preliminary calibration measurements show a SPEX accuracy well below 10^(-3), with a sensitivity limit of 2×10^(-4). We demonstrate the potential of the SPEX concept by presenting retrievals of aerosol properties based on clear sky measurements using a prototype satellite instrument and a dedicated ground-based SPEX. The retrieval algorithm, originally designed for POLDER data, performs iterative fitting of aerosol properties and surface albedo, where the initial guess is provided by a look-up table.
The retrieved aerosol properties, including aerosol optical thickness, single scattering albedo, size distribution and complex refractive index, will be compared with the on-site AERONET sun-photometer, lidar, particle counter and sizer, and PM10 and PM2.5 monitoring instruments. Retrievals of the aerosol layer height based on polarization measurements in the O2A absorption band will be compared with lidar profiles. Furthermore, the possibility of enhancing the retrieval accuracy by replacing the look-up table with a neural network based initial guess will be discussed, using retrievals from simulated ground-based data.
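A schematic of the demodulation step, using a simplified modulation formula with a made-up retardance value (the actual SPEX model and retrieval are more involved); the iterative least-squares fit needs a workable initial guess, echoing the look-up-table role described above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified modulation: I(wl) = 0.5 * (1 + P * cos(2*pi*delta/wl + phi)).
# delta (retardance, nm) and all other values are illustrative assumptions.
delta = 2.0e4
wl = np.linspace(400.0, 800.0, 2000)          # wavelength grid, nm

def modulated(wl, P, phi):
    return 0.5 * (1.0 + P * np.cos(2.0 * np.pi * delta / wl + phi))

P_true, phi_true = 0.30, 0.20
spectrum = modulated(wl, P_true, phi_true)     # synthetic measurement

# Iterative least-squares demodulation of the degree of linear polarization
# P; p0 plays the role of the look-up-table initial guess.
(P_fit, phi_fit), _ = curve_fit(modulated, wl, spectrum, p0=[0.5, 0.0])
```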
Impact of modellers' decisions on hydrological a priori predictions
NASA Astrophysics Data System (ADS)
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2014-06-01
In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions in three steps, with additional information provided prior to each step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) to use models that were developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, nor groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information.
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of added information. In this qualitative analysis of a statistically small number of predictions we learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan
1997-08-01
One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibits sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because the linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions of the identified harmonic transfer function were then formulated using the spectral density functions both with and without additive noise processes at input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results. 
Also, the effects of noise processes and of the initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected and system parameters were successfully identified, but at some expense of computational storage and time. In conclusion, the linear time periodic technique substantially improved the identified parameter accuracy compared to the linear time invariant technique. The linear time periodic technique was also robust to noise and to the initial parameter guess. However, an elastic mode of higher frequency relative to the system pumping frequency tends to increase the computer storage requirement and computing time.
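The spectral-density route to a transfer function can be sketched for an ordinary linear time invariant test filter (the thesis's harmonic transfer functions for time periodic systems generalize this using modulated signals); the filter coefficients below are arbitrary:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 256.0
x = rng.standard_normal(1 << 15)               # broadband excitation

# Known single-pole test system standing in for the identified dynamics.
b_coef, a_coef = [0.2], [1.0, -0.8]
y = signal.lfilter(b_coef, a_coef, x)

# Non-parametric H1 estimate from spectral densities: H(f) = S_xy / S_xx.
f, Pxx = signal.welch(x, fs=fs, nperseg=1024)
_, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)
H_est = Pxy / Pxx

# Reference frequency response of the known filter for comparison.
_, H_ref = signal.freqz(b_coef, a_coef, worN=f, fs=fs)
```

A parametric model would then be fitted to H_est by minimizing the frequency response error, which is where the initial parameter guess enters.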
2011-03-10
This image incorporates the best data on dimples and bulges of the protoplanet Vesta from ground-based telescopes and NASA's Hubble Space Telescope. This model of Vesta uses scientists' best guess to date of what the surface might look like.
Children's Use of Context in Word Recognition: A Psycholinguistic Guessing Game.
ERIC Educational Resources Information Center
Schwantes, Frederick M.; And Others
1980-01-01
Two experiments were conducted to investigate the effect of varying the amount of preceding-sentence context upon the lexical decision speed of third- and sixth-grade and college-level students. (Author/MP)
Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.
2015-01-01
Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.
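Global, initial-guess-free translation estimation can be illustrated with FFT-based phase correlation (an illustration of the robustness theme, not the shearlet/wavelet algorithm itself):

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer translation taking `ref` to `moving` by phase
    correlation; unlike local iterative registration, no initial guess is
    needed and large shifts are handled."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak position to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, shift=(7, -12), axis=(0, 1))
est = phase_correlation_shift(img, shifted)    # recovers (7, -12)
```

A coarse global estimate of this kind could then be refined by a local feature-based least-squares stage, mirroring the two-stage shearlet-then-wavelet design.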
Mineralogy and petrology of cretaceous subsurface lamproite sills, southeastern Kansas, USA
Cullers, R.L.; Dorais, M.J.; Berendsen, P.; Chaudhuri, Sambhudas
1996-01-01
Cores and cuttings of lamproite sills and host sedimentary country rocks in southeastern Kansas from up to 312 m depth were analyzed for major elements in whole rocks and minerals, certain trace elements in whole rocks (including the REE) and Sr isotopic composition of the whole rocks. The lamproites are ultrapotassic (K2O/Na2O = 2.0-19.9), alkalic [molecular (K2O/Na2O)/Al2O3 = 1.3-2.8], enriched in mantle-incompatible elements (light REE, Ba, Rb, Sr, Th, Hf, Ta) and have nearly homogeneous initial Sr isotopic compositions (0.707764-0.708114). These lamproites could have formed by variable degrees of partial melting of harzburgite country rock and cross-cutting veins composed of phlogopite, K-Ti richterite, titanite, diopside, K-Ti silicates, or K-Ba-phosphate under high H2O/CO2 ratios and reducing conditions. Variability in melting of veins and wall rock and variable composition of the metasomatized veins could explain the significantly different composition of the Kansas lamproites. Least squares fractionation models preclude the derivation of the Kansas lamproites by fractional crystallization from magmas similar in composition to higher silica phlogopite-sanidine lamproites some believe to be primary lamproite melts found elsewhere. In all but one case, least squares fractionation models also preclude the derivation of magmas similar in composition to any of the Kansas lamproites from one another. A magma similar in composition to the average composition of the higher SiO2 Ecco Ranch lamproite (237.5-247.5 m depth) could, however, have marginally crystallized about 12% richterite, 12% sanidine, 7% diopside and 6% phlogopite to produce the average composition of the Guess lamproite (305-312 m depth). Lamproite from the Ecco Ranch core is internally fractionated in K2O, Al2O3, Ba, MgO, Fe2O3, Co and Cr most likely by crystal accumulation-removal of ferromagnesian minerals and sanidine. 
In contrast, the Guess core (305-312 m depth) has little fractionation throughout most of the sill except in several narrow zones. Lamproite in the Guess core has large enrichments in TiO2, Ba, REE, Th, Ta and Sc and depletions in MgO, Cr, Co and Rb possibly concentrated in these narrow zones during the last dregs of crystallization of this magma. The Ecco Ranch sill did not show any evidence of loss of volatiles or soluble elements into the country rock. This contrasts to the previously studied, shallow Silver City lamproite which did apparently lose H2O-rich fluid to the country rock. Perhaps a greater confining pressure and lesser amount of H2O-rich fluid prevented it from escaping.
Siegel, E; Groleau, G; Reiner, B; Stair, T
1998-08-01
Radiographs are ordered and interpreted for immediate clinical decisions 24 hours a day by emergency physicians (EPs). The Joint Commission for Accreditation of Health Care Organizations requires that all these images be reviewed by radiologists and that there be some mechanism of quality improvement (QI) for discrepant readings. There must be a log of discrepancies and documentation of follow-up activities, but this alone does not guarantee effective QI. Radiologists reviewing images from the previous day and night often must guess at the preliminary interpretation of the EP and whether follow-up action is necessary. EPs may remain ignorant of the final reading and falsely assume the initial diagnosis and treatment were correct. Some hospitals use a paper system in which the EP writes a preliminary interpretation on the requisition slip, which is available when the radiologist dictates the final reading. Some hospitals use a classification of discrepancies based on clinical import and urgency, communicated to the EP on duty at the time of the official reading, but they may not communicate discrepancies to the EPs who initially read the images. Our computerized radiology department and picture archiving and communication system (PACS) have increased technologist and radiologist productivity and decreased retakes and lost films. There are fewer face-to-face consultations between radiologists and clinicians, but more communication by telephone and electronic annotation of PACS images. We have integrated the QI process for emergency department (ED) images into the PACS and gained advantages over the traditional discrepancy log. Requisitions, including clinical indications, are entered into the Hospital Information System and then appear on the PACS along with the images at reading. The initial impression, time of review, and the initials of the EP are available to the radiologist dictating the official report. 
The radiologist decides whether there is a discrepancy, and whether it is category I (potentially serious, needs immediate follow-up), category II (moderate risk, follow-up in one day), or category III (low risk, follow-up in several days). During the working day, the radiologist calls immediately for category I discrepancies. Those noted from the evening, night, or weekend before are called to the EP the next morning. All discrepancies with the preliminary interpretation are communicated to the EP and are kept in a computerized log for review by a radiologist at a weekly ED teaching conference. This system has reduced the need for the radiologist to ask or guess what the impression was in the ED the night before. It has reduced the variability in recording of impressions by EPs, in communication back from radiologists, in the clinical follow-up made, and in the documentation of the whole QI process. This system ensures that EPs receive notification of their discrepant readings, and provides continuing education to all the EPs on interpreting images of their patients.
A Comparative Study of Different Deblurring Methods Using Filters
NASA Astrophysics Data System (ADS)
Srimani, P. K.; Kavitha, S.
2011-12-01
This paper undertakes a study of restored Gaussian-blurred images using four deblurring techniques: the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm, and the blind deconvolution algorithm, given information about the Point Spread Function (PSF) of the corrupted blurred image. The techniques are applied to a scanned image of a seven-month fetus in the womb and compared with one another, so as to choose the best technique for restoring or deblurring the image. The paper also studies restoration of the blurred image using a Regularized Filter (RF) with no information about the PSF, applying the same four techniques after making a guess at the PSF. The number of iterations and the weight threshold used to choose the best guesses for the restored or deblurred image are determined for each technique.
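Of the four techniques named in this abstract, Wiener deconvolution is the simplest to state. Below is a minimal, self-contained sketch of frequency-domain Wiener restoration with a known Gaussian PSF; the test image, PSF width, and noise-to-signal constant `k` are illustrative choices, not values from the paper:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered 2-D Gaussian point spread function, normalized to sum 1."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Classical Wiener deconvolution in the frequency domain.

    k is the noise-to-signal power ratio; larger k damps noise harder
    at the price of a less sharp restoration.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

# Blur a simple test image with a known PSF, then restore it.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0                       # bright square
psf = gaussian_psf(image.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf, k=1e-4)
```

With a perfectly known PSF and little noise, a small `k` recovers the image almost exactly; with a guessed PSF, a larger `k` trades sharpness for robustness, which is the trade-off the blind-deconvolution experiments explore.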
RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT.
Carlis, John; Bruso, Kelsey
2012-03-01
Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT best-predicted K and the Bayesian information criterion (BIC) predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n²) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing.
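The abstract does not spell out the RSQRT formula. Its name and the O(log log n) cost claim suggest an iterated square root of the count n; the sketch below is purely our hypothetical reading of that name, not the authors' definition:

```python
import math

def rsqrt_k(n):
    """Hypothetical reading of RSQRT: predict the number of clusters K
    as the square root of the square root of the item count n. The cost
    is a handful of flops, consistent with the abstract's O(log log n)
    claim versus O(n^2) for a BIC sweep over candidate K values."""
    return max(2, round(math.sqrt(math.sqrt(n))))

# e.g. a dataset with 10,000 items would be reported with K = 10
print(rsqrt_k(10_000))
```

Whatever the exact functional form, the point of the abstract stands: a closed-form prediction of K costs essentially nothing, while a BIC comparison requires fitting a model for every candidate K.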
Estimating the cost of compensating victims of medical negligence.
Fenn, P.; Hermans, D.; Dingwall, R.
1994-01-01
The current system in Britain for compensating victims of medical injury depends on an assessment of negligence. Despite sporadic pressure on the government to adopt a "no fault" approach, such as exists in Sweden, the negligence system will probably remain for the immediate future. The cost of this system was estimated at £52.3m for England in 1990-1. The problem for the future, however, is one of forecasting accuracy at provider level: too high a guess and current patient care will suffer; too low a guess and future patient care will suffer. The introduction of a mutual insurance scheme may not resolve these difficulties, as someone will have to set the rates. Moreover, the figures indicate that if a no-fault scheme were introduced the cost might be four times that of the current system, depending on the type of scheme adopted. PMID:8081145
Event-related potential evidence suggesting voters remember political events that never happened
Federmeier, Kara D.; Gonsalves, Brian D.
2014-01-01
Voters tend to misattribute issue positions to political candidates that are consistent with their partisan affiliation, even though these candidates have never explicitly stated or endorsed such stances. The prevailing explanation in political science is that voters misattribute candidates’ issue positions because they use their political knowledge to make educated but incorrect guesses. We suggest that voter errors can also stem from a different source: false memories. The current study examined event-related potential (ERP) responses to misattributed and accurately remembered candidate issue information. We report here that ERP responses to misattributed information can elicit memory signals similar to those of correctly remembered old information—a pattern consistent with a false memory rather than educated guessing interpretation of these misattributions. These results suggest that some types of voter misinformation about candidates may be harder to correct than previously thought. PMID:23202775
RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT
Bruso, Kelsey
2012-01-01
Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT best-predicted K and the Bayesian information criterion (BIC) predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n²) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing. PMID:22773923
Analyzing force concept inventory with item response theory
NASA Astrophysics Data System (ADS)
Wang, Jing; Bao, Lei
2010-10-01
Item response theory is a popular assessment method used in education. It rests on the assumption of a probability framework that relates students' innate ability and their performance on test questions. Item response theory transforms students' raw test scores into a scaled proficiency score, which can be used to compare results obtained with different test questions. The scaled score also addresses the issues of ceiling effects and guessing, which commonly exist in quantitative assessment. We used item response theory to analyze the force concept inventory (FCI). Our results show that item response theory can be useful for analyzing physics concept surveys such as the FCI and produces results about the individual questions and student performance that are beyond the capability of classical statistics. The theory yields detailed measurement parameters regarding the difficulty, discrimination features, and probability of correct guess for each of the FCI questions.
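The guessing correction that item response theory provides is usually captured by the three-parameter logistic (3PL) model, in which each question has its own difficulty, discrimination, and guessing floor. A minimal sketch (the parameter values are illustrative, not FCI estimates):

```python
import math

def p_correct(theta, a, b, c):
    """3PL item response model: probability that a student of ability
    theta answers an item correctly, given discrimination a, difficulty
    b, and guessing floor c (the chance that a very low-ability student
    still guesses the right answer)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A five-option multiple-choice item: even very weak students succeed
# roughly 20% of the time, which raw scores cannot distinguish from
# genuine partial knowledge.
p_weak = p_correct(theta=-3.0, a=1.5, b=0.0, c=0.2)
p_strong = p_correct(theta=3.0, a=1.5, b=0.0, c=0.2)
```

Fitting a, b, and c per question is what yields the "detailed measurement parameters regarding the difficulty, discrimination features, and probability of correct guess" that the abstract refers to.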
Recollective experience in odor recognition: influences of adult age and familiarity.
Larsson, Maria; Oberg, Christina; Bäckman, Lars
2006-01-01
We examined recollective experience in odor memory as a function of age, intention to learn, and familiarity. Young and older adults studied a set of familiar and unfamiliar odors with incidental or intentional encoding instructions. At recognition, participants indicated whether their response was based on explicit recollection (remembering), a feeling of familiarity (knowing), or guessing. The results indicated no age-related differences in the distribution of experiential responses for unfamiliar odors. By contrast, for familiar odors the young demonstrated more explicit recollection than the older adults, who produced more "know" and "guess" responses. Intention to learn was unrelated to recollective experience. In addition, the observed age differences in "remember" responses for familiar odors were eliminated when odor naming was statistically controlled. This suggests that age-related deficits in activating specific odor knowledge (i.e., odor names) play an important role for age differences in recollective experience of olfactory information.
An Integrated Approach to Parameter Learning in Infinite-Dimensional Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, Zachary M.; Wendelberger, Joanne Roth
The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive in terms of both time and money. Existing solutions include batch-parallel simulations, high-dimensional derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling all three of these techniques by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way, as well as viewing the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. 
Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the desired parameter set.
Rademaker, Rosanne L; van de Ven, Vincent G; Tong, Frank; Sack, Alexander T
2017-01-01
Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10 Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.
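The mixture-model analysis mentioned above, in the style of standard working-memory analyses, treats each report error as coming either from a noisy memory (a von Mises distribution centered on the true orientation) or from a random guess (a uniform distribution). A minimal sketch of the model density, with illustrative parameters:

```python
import numpy as np

def mixture_pdf(err, guess_rate, kappa):
    """Density of a report error (radians, in [-pi, pi]) under a
    two-component mixture: a von Mises 'memory' component whose
    concentration kappa indexes precision, plus a uniform 'guess'
    component weighted by the guess rate."""
    memory = np.exp(kappa * np.cos(err)) / (2.0 * np.pi * np.i0(kappa))
    guess = 1.0 / (2.0 * np.pi)
    return (1.0 - guess_rate) * memory + guess_rate * guess

# A higher guess rate fattens the tails and lowers the central peak.
err = np.linspace(-np.pi, np.pi, 181)
low_g = mixture_pdf(err, guess_rate=0.1, kappa=8.0)
high_g = mixture_pdf(err, guess_rate=0.5, kappa=8.0)
```

Fitting `guess_rate` and `kappa` to the observed error distribution (e.g. by maximum likelihood) is what lets the study separate "more guesses with early pulses" from "more precise memory near the targeted location".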
van de Ven, Vincent G.; Tong, Frank; Sack, Alexander T.
2017-01-01
Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10 Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise. PMID:28384347
Global emissions of terpenoid VOCs from terrestrial vegetation in the last millennium.
Acosta Navarro, J C; Smolander, S; Struthers, H; Zorita, E; Ekman, A M L; Kaplan, J O; Guenther, A; Arneth, A; Riipinen, I
2014-06-16
We investigated the millennial variability (1000 A.D.-2000 A.D.) of global biogenic volatile organic compound (BVOC) emissions by using two independent numerical models: the Model of Emissions of Gases and Aerosols from Nature (MEGAN), for isoprene, monoterpenes, and sesquiterpenes, and the Lund-Potsdam-Jena-General Ecosystem Simulator (LPJ-GUESS), for isoprene and monoterpenes. We found the millennial trends of global isoprene emissions to be mostly affected by land cover and atmospheric carbon dioxide changes, whereas monoterpene and sesquiterpene emission trends were dominated by temperature change. Isoprene emissions declined substantially in regions with large and rapid land cover change. In addition, isoprene emission sensitivity to drought proved to have significant short-term global effects. By the end of the past millennium MEGAN isoprene emissions were 634 TgC yr⁻¹ (13% and 19% less than during 1750-1850 and 1000-1200, respectively), and LPJ-GUESS emissions were 323 TgC yr⁻¹ (15% and 20% less than during 1750-1850 and 1000-1200, respectively). Monoterpene emissions were 89 TgC yr⁻¹ (10% and 6% higher than during 1750-1850 and 1000-1200, respectively) in MEGAN, and 24 TgC yr⁻¹ (2% higher and 5% less than during 1750-1850 and 1000-1200, respectively) in LPJ-GUESS. MEGAN sesquiterpene emissions were 36 TgC yr⁻¹ (10% and 4% higher than during 1750-1850 and 1000-1200, respectively). Although both models capture similar emission trends, the magnitudes of the emissions are different. This highlights the importance of building better constraints on VOC emissions from terrestrial vegetation.
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-02-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50x faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. 
With GAPPARD applied to LPJ-GUESS, results differed insignificantly from the output of the original model run with 100 replicate patches, while simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited to rapidly approximating LPJ-GUESS results. It provides the opportunity for future studies over large spatial domains, allows easier parameterization of tree species, enables faster identification of areas with interesting simulation results, and facilitates comparisons with large-scale datasets and forest models.
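The core of the GAPPARD post-processing step described above can be sketched as follows: given an annual stand-replacing disturbance probability, patch ages are (approximately) geometrically distributed, so any landscape-level output variable is the probability-weighted average of the deterministic, undisturbed run over patch ages. The biomass saturation curve below is purely hypothetical:

```python
import numpy as np

def gappard_expectation(undisturbed, p):
    """Post-process a deterministic, undisturbed time series.

    undisturbed[a] is the model output for a patch of age a. With an
    annual stand-replacing disturbance probability p, patch age is
    approximately geometrically distributed, and the landscape-level
    expectation is the probability-weighted average over ages.
    """
    ages = np.arange(len(undisturbed))
    weights = p * (1.0 - p) ** ages        # P(patch age == a)
    weights /= weights.sum()               # renormalize the truncated tail
    return float(np.dot(weights, undisturbed))

# Hypothetical biomass curve of an undisturbed patch, saturating with age.
age = np.arange(300)
biomass = 200.0 * (1.0 - np.exp(-age / 60.0))

# Landscape expectation sits below the old-growth asymptote, and a higher
# disturbance probability (younger landscape) lowers it further.
e_rare = gappard_expectation(biomass, p=0.01)
e_freq = gappard_expectation(biomass, p=0.05)
```

This is why one deterministic run (plus a cheap weighted average) can stand in for many stochastic replicate patches: the replicates only serve to sample the same age distribution.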
Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
Willis, Matt; Sun, Peiyuan; Wang, Jun
2013-01-01
Background Consumer and patient participation proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive approach that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, who are usually referred to as the “turkers”. Objective To answer two research questions: (1) Is the turkers’ collective effort effective for identifying design problems in medical pictograms? and (2) Do the turkers’ demographic characteristics affect their performance in medical pictogram comprehension? Methods We designed a Web-based survey (open-ended tests) to ask 100 US turkers to type in their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the turkers’ guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured by the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis on the turkers’ interpretations to identify misunderstandings and assess whether the misunderstandings were common. We also conducted a statistical analysis to examine the relationship between turkers’ demographic characteristics and their pictogram comprehension performance. Results The survey was completed within 3 days of our posting the task to the MTurk, and the collected data are publicly available in the multimedia appendix for download. The comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. The comprehensibility scores of 10 pictograms were strongly correlated to the scores of the same pictograms reported in another study that used oral response–based open-ended testing with local people. The turkers’ misinterpretations shared common errors that exposed design problems in the pictograms. 
Participant performance was positively correlated with their educational level. Conclusions The results confirmed that crowdsourcing can be used as an effective and inexpensive approach for participatory evaluation of medical pictograms. Through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Since low-literate people are underrepresented in the turker population, further investigation is needed to examine to what extent turkers’ misunderstandings overlap with those elicited from low-literate people. PMID:23732572
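The comprehensibility measure described above (percentage of correct guesses, with each partially correct guess counted as 0.5 correct) is straightforward to reproduce; a minimal sketch with made-up category counts:

```python
def comprehensibility(guesses):
    """Comprehensibility score for one pictogram: each coded guess is
    'correct', 'partial', 'wrong', or 'completely_wrong'; a partially
    correct guess counts as half a correct one, and the score is the
    resulting percentage of correct guesses."""
    score = sum(1.0 if g == "correct" else 0.5 if g == "partial" else 0.0
                for g in guesses)
    return 100.0 * score / len(guesses)

# e.g. 60 correct, 25 partially correct, 15 wrong out of 100 turkers:
codes = ["correct"] * 60 + ["partial"] * 25 + ["wrong"] * 15
print(comprehensibility(codes))   # 72.5
```

Applied per pictogram, this yields scores on the 45%-98% scale reported in the abstract; the example above happens to land on the reported 72.5% average.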
Crowdsourcing participatory evaluation of medical pictograms using Amazon Mechanical Turk.
Yu, Bei; Willis, Matt; Sun, Peiyuan; Wang, Jun
2013-06-03
Consumer and patient participation proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive approach that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, who are usually referred to as the "turkers". To answer two research questions: (1) Is the turkers' collective effort effective for identifying design problems in medical pictograms? and (2) Do the turkers' demographic characteristics affect their performance in medical pictogram comprehension? We designed a Web-based survey (open-ended tests) to ask 100 US turkers to type in their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the turkers' guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured by the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis on the turkers' interpretations to identify misunderstandings and assess whether the misunderstandings were common. We also conducted a statistical analysis to examine the relationship between turkers' demographic characteristics and their pictogram comprehension performance. The survey was completed within 3 days of our posting the task to the MTurk, and the collected data are publicly available in the multimedia appendix for download. The comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. The comprehensibility scores of 10 pictograms were strongly correlated to the scores of the same pictograms reported in another study that used oral response-based open-ended testing with local people. The turkers' misinterpretations shared common errors that exposed design problems in the pictograms. Participant performance was positively correlated with their educational level. 
The results confirmed that crowdsourcing can be used as an effective and inexpensive approach for participatory evaluation of medical pictograms. Through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Since low-literate people are underrepresented in the turker population, further investigation is needed to examine to what extent turkers' misunderstandings overlap with those elicited from low-literate people.
Environmental aspects of health care in the Grampian NHS region and the place of telehealth
Wootton, Richard; Tait, Alex; Croft, Amanda
2010-01-01
Detailed information about the composition of the carbon footprint of the NHS in the Grampian health region, and in Scotland generally, is not available at present. Based on the limited information available, our best guess is that travel emissions in Grampian are substantial, perhaps 49,000 tonnes CO2 per year. This is equivalent to 233 million km of car travel per year. A well-established telemedicine network in the Grampian region, which saves over 2000 patient journeys a year from community hospitals, avoids about 260,000 km travel per year, or about 59 tonnes CO2 per year. Therefore using telehealth as it has been used historically (primarily to facilitate hospital-to-hospital interactions) seems unlikely to have a major environmental impact – although of course there may be other good reasons for persevering with conventional telehealth. On the other hand, telehealth might be useful in reducing staff travel and to a lesser extent, visitor travel. It looks particularly promising for reducing outpatient travel, where substantial carbon savings might be made by reconfiguring the way that certain services are provided. PMID:20511579
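The figures in this abstract can be checked with back-of-envelope arithmetic: the stated 49,000 tonnes of CO2 and 233 million car-km imply a per-kilometre emission factor, which applied to the 260,000 avoided km gives a saving of the same order as the ~59 tonnes reported (the small gap presumably reflects a different emission factor for the avoided journeys):

```python
# Implied emission factor from the abstract's own numbers.
travel_emissions_t = 49_000          # tonnes CO2 per year (best guess)
travel_km = 233_000_000              # equivalent car km per year
factor_kg_per_km = travel_emissions_t * 1000 / travel_km   # ~0.21 kg/km

# Saving from the ~2000 patient journeys avoided by telemedicine.
avoided_km = 260_000                 # avoided travel per year
saving_t = avoided_km * factor_kg_per_km / 1000            # tonnes CO2/yr
```

The conclusion follows directly: 260,000 km is barely 0.1% of 233 million km, so hospital-to-hospital telehealth alone cannot move the region's travel footprint much.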
Improving precipitation measurement
NASA Astrophysics Data System (ADS)
Strangeways, Ian
2004-09-01
Although rainfall has been measured scientifically for centuries, and in isolated brief episodes over millennia for agriculture, it is still not measured adequately even today for climatology, water resources, and other precise applications. This paper outlines the history of raingauges and their errors, and describes 3 years of field testing of a first-guess design for an aerodynamic rain collector proposed by Folland in 1988. Although shown to have an aerodynamic advantage over a standard 5-inch gauge, the new rain collector was found to suffer from outsplash in heavy rain. To study this problem, and to derive general basic design rules for aerodynamic gauges, its performance was investigated in turbulent, real-world conditions rather than in the controlled and simplified environment of a wind tunnel or mathematical model, as in the past. To do this, video records were made using thread tracers to indicate the path of the wind, giving new insight into the complex flow of natural wind around and within raingauges. A new design resulted, and 2 years of field testing have shown that the new gauge has good aerodynamic and evaporative characteristics and minimal outsplash, offering the potential for improved precipitation measurement.
Modelling Holocene peatland and permafrost dynamics with the LPJ-GUESS dynamic vegetation model
NASA Astrophysics Data System (ADS)
Chaudhary, Nitin; Miller, Paul A.; Smith, Benjamin
2016-04-01
Dynamic global vegetation models (DGVMs) are an important platform to study past, present and future vegetation patterns together with associated biogeochemical cycles and climate feedbacks (e.g. Sitch et al. 2008, Smith et al. 2001). However, very few attempts have been made to simulate peatlands using DGVMs (Kleinen et al. 2012, Tang et al. 2015, Wania et al. 2009a). In the present study, we have improved the peatland dynamics in the state-of-the-art dynamic vegetation model (LPJ-GUESS) in order to understand the long-term evolution of northern peatland ecosystems and to assess the effect of changing climate on peatland carbon balance. We combined a dynamic multi-layer approach (Frolking et al. 2010, Hilbert et al. 2000) with soil freezing-thawing functionality (Ekici et al. 2015, Wania et al. 2009a) in LPJ-GUESS. The new model is named LPJ-GUESS Peatland (LPJ-GUESS-P) (Chaudhary et al. in prep). The model was calibrated and tested at the sub-arctic mire in Stordalen, Sweden, and the model was able to capture the reported long-term vegetation dynamics and peat accumulation patterns in the mire (Kokfelt et al. 2010). For evaluation, the model was run at 13 grid points across a north to south transect in Europe. The modelled peat accumulation values were found to be consistent with the published data for each grid point (Loisel et al. 2014). Finally, a series of additional experiments were carried out to investigate the vulnerability of high-latitude peatlands to climate change. We find that the Stordalen mire will sequester more carbon in the future due to milder and wetter climate conditions, longer growing seasons, and the carbon fertilization effect. References: - Chaudhary et al. (in prep.). Modelling Holocene peatland and permafrost dynamics with the LPJ-GUESS dynamic vegetation model - Ekici A, et al. 2015. Site-level model intercomparison of high latitude and high altitude soil thermal dynamics in tundra and barren landscapes. The Cryosphere 9: 1343-1361. 
- Frolking S, Roulet NT, Tuittila E, Bubier JL, Quillet A, Talbot J, Richard PJH. 2010. A new model of Holocene peatland net primary production, decomposition, water balance, and peat accumulation. Earth Syst. Dynam., 1, 1-21, doi:10.5194/esd-1-1-2010, 2010. - Hilbert DW, Roulet N, Moore T. 2000. Modelling and analysis of peatlands as dynamical systems. Journal of Ecology 88: 230-242. - Kleinen T, Brovkin V, Schuldt RJ. 2012. A dynamic model of wetland extent and peat accumulation: results for the Holocene. Biogeosciences 9: 235-248. - Kokfelt U, Reuss N, Struyf E, Sonesson M, Rundgren M, Skog G, Rosen P, Hammarlund D. 2010. Wetland development, permafrost history and nutrient cycling inferred from late Holocene peat and lake sediment records in subarctic Sweden. Journal of Paleolimnology 44: 327-342. - Loisel J, et al. 2014. A database and synthesis of northern peatland soil properties and Holocene carbon and nitrogen accumulation. Holocene 24: 1028-1042. - Sitch S, et al. 2008. Evaluation of the terrestrial carbon cycle, future plant geography and climate-carbon cycle feedbacks using five Dynamic Global Vegetation Models (DGVMs). Global Change Biology 14: 2015-2039. - Smith B, Prentice IC, Sykes MT. 2001. Representation of vegetation dynamics in the modelling of terrestrial ecosystems: comparing two contrasting approaches within European climate space. Global Ecology and Biogeography 10: 621-637. - Tang J, et al. 2015. Carbon budget estimation of a subarctic catchment using a dynamic ecosystem model at high spatial resolution. Biogeosciences 12: 2791-2808. - Wania R, Ross I, Prentice IC. 2009a. Integrating peatlands and permafrost into a dynamic global vegetation model: 1. Evaluation and sensitivity of physical land surface processes. Global Biogeochemical Cycles 23.
Simulated Vesta from the South Pole
2011-03-10
This image shows the scientists' best guess to date of what the surface of the protoplanet Vesta might look like from the south pole, and it incorporates the best data on dimples and bulges on Vesta from ground-based telescopes and NASA's Hubble Space Telescope.
ERIC Educational Resources Information Center
Brown, Dorothy F.
1988-01-01
A discussion of vocabulary development for intermediate and advanced students preparing for the Australian certification test for Teaching English as a Foreign Language focuses on nine areas: collocations, clines, clusters, cloze procedures, context, consultation or checking, cards, creativity, and guessing. (seven references) (LB)
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...
2018-04-30
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
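The payoff of a better starting source can be illustrated on a toy fission-matrix analogue: power iteration on an operator with a dominance ratio near one (the regime where inactive cycles are expensive) converges in far fewer iterations when started from an informed guess than from a flat source. This is only a sketch of the idea, not the Shift/Denovo implementation; the matrix, tolerance, and "deterministic" guess below are invented for illustration.

```python
import numpy as np

def power_iteration(A, x0, tol=1e-8, max_iter=20000):
    """Return (iteration count, eigenvector) for the dominant mode of A."""
    x = x0 / np.linalg.norm(x0)
    for k in range(1, max_iter + 1):
        y = A @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            return k, y
        x = y
    return max_iter, x

# Toy "fission matrix" with a dominance ratio of 0.98, mimicking the slow
# source convergence of loosely coupled problems.
rng = np.random.default_rng(0)
n = 60
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * 0.98 ** np.arange(n)) @ Q.T        # eigenvalues 1, 0.98, 0.98^2, ...

flat_guess = np.ones(n)                     # uninformed flat source
k_flat, _ = power_iteration(A, flat_guess)

# Stand-in for a cheap deterministic solution: dominant mode plus 1% error.
informed_guess = Q[:, 0] + 0.01 * rng.standard_normal(n)
k_informed, _ = power_iteration(A, informed_guess)

print(k_informed, "<", k_flat)   # the informed guess needs far fewer cycles
```

The gap between the two iteration counts grows as the dominance ratio approaches one, which is why the payoff is largest for the hardest (most loosely coupled) problems.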
Origins and Destinations: Tracking Planet Composition through Planet Formation Simulations
NASA Astrophysics Data System (ADS)
Chance, Quadry; Ballard, Sarah
2018-01-01
There are now several thousand confirmed exoplanets, a number which far exceeds our resources to study them all in detail. In particular, planets around M dwarfs provide the best opportunity for in-depth study of their atmospheres by telescopes in the near future. The question of which M dwarf planets most merit follow-up resources is a pressing one, given that NASA's TESS mission will soon find hundreds of such planets orbiting stars bright enough for both ground- and space-based follow-up. Our work aims to predict the approximate composition of planets around these stars through n-body simulations of the last stage of planet formation. With a variety of initial disk conditions, we investigate how the relative abundances of both refractory and volatile compounds in the primordial planetesimals are mapped to the final planet outcomes. These predictions can serve to provide a basis for making an educated guess about (a) which planets to observe with precious resources like JWST and (b) how to identify them based on dynamical clues.
A General Approach to the Geostationary Transfer Orbit Mission Recovery
NASA Technical Reports Server (NTRS)
Faber, Nicolas; Aresini, Andrea; Wauthier, Pascal; Francken, Philippe
2007-01-01
This paper discusses recovery scenarios for geosynchronous satellites injected into a non-nominal orbit due to a launcher underperformance. The theory of minimum-fuel orbital transfers is applied to develop an operational tool capable of designing a recovery mission. To obtain promising initial guesses for the recovery, three complementary techniques are used: p-optimized impulse function contouring, numerical impulse function minimization, and the solutions to the switching equations. The tool evaluates the feasibility of a recovery with the on-board propellant of the spacecraft and performs the complete mission design. This design takes into account various mission operational constraints, such as the requirement of multiple finite-duration burns, third-body orbital perturbations, spacecraft attitude constraints, and ground station visibility. In a final case study, we analyze the consequences of a premature breakdown of an upper rocket stage engine during injection into a geostationary transfer orbit, as well as a possible recovery solution using the satellite's on-board propellant.
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chalise, Roshan, E-mail: plasma.roshan@gmail.com; Khanal, Raju
2015-11-15
We have developed a self-consistent 1d3v (one dimension in space, three dimensions in velocity) Kinetic Trajectory Simulation (KTS) model, which can be used for modeling various situations of interest and yields results of high accuracy. Exact ion trajectories are followed in order to calculate the ion distribution function along them, assuming an arbitrary injection ion distribution. The electrons, on the other hand, are assumed to have a cut-off Maxwellian velocity distribution at injection, and their density distribution is obtained analytically. Starting from an initial guess, the potential profile is iterated towards the final time-independent self-consistent state. We have used the model to study the plasma sheath region formed in the presence of an oblique magnetic field. Our results agree well with previous work from other models, and hence we expect our 1d3v KTS model to provide a basis for the study of all types of magnetized plasmas, yielding more accurate results.
Larøi, Frank; D'Argembeau, Arnaud; Van der Linden, Martial
2006-12-01
Numerous studies suggest a cognitive bias for threat-related material in delusional ideation. However, few studies have examined this bias using a memory task. We investigated the influence of delusion-proneness on identity and expression memory for angry and happy faces. Participants high and low in delusion-proneness were presented with happy and angry faces and were later asked to recognise the same faces displaying a neutral expression. They also had to remember what the initial expressions of the faces had been. Remember/know/guess judgments were asked for both identity and expression memory. Results showed that delusion-prone participants better recognised the identity of angry faces compared to non-delusional participants. Also, this difference between the two groups was mainly due to a greater number of remember responses in delusion-prone participants. These findings extend previous studies by showing that delusions are associated with a memory bias for threat-related stimuli.
Applications of Monte Carlo method to nonlinear regression of rheological data
NASA Astrophysics Data System (ADS)
Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo
2018-02-01
In rheological studies, it is often necessary to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on a logarithmic scale, and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. A gradient-based method such as LM is apt to be caught in local minima, which give unphysical values of the parameters whenever the initial guess is far from the global optimum. Although this problem could be solved by simulated annealing (SA), such Monte Carlo (MC) methods need adjustable parameters which must be determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective values of the parameters of most complicated rheological models, such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and zero-shear viscosity as a function of concentration and molecular weight.
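The flavor of a log-scale stochastic search can be sketched with a toy fit of the Carreau-Yasuda curve (with zero infinite-shear viscosity) to synthetic data. The greedy shrinking-step random walk below is only a crude stand-in for the authors' simplified SA, and all parameter values and schedules are invented for illustration.

```python
import numpy as np

def eta_cy(gdot, eta0, lam, a, n):
    """Carreau-Yasuda steady shear viscosity (eta_inf = 0 for simplicity)."""
    return eta0 * (1.0 + (lam * gdot) ** a) ** ((n - 1.0) / a)

rng = np.random.default_rng(1)
gdot = np.logspace(-3, 3, 40)                            # shear rates [1/s]
data = eta_cy(gdot, eta0=1.0e3, lam=10.0, a=2.0, n=0.4)  # synthetic "experiment"

def cost(logp):
    """Log-log least-squares cost; eta0, lam, a live in log space."""
    eta0, lam, a = np.exp(logp[:3])
    n = logp[3]
    if not (0.0 < n < 1.0):
        return np.inf
    with np.errstate(over="ignore", under="ignore", invalid="ignore", divide="ignore"):
        c = np.mean((np.log(eta_cy(gdot, eta0, lam, a, n)) - np.log(data)) ** 2)
    return c if np.isfinite(c) else np.inf

# Greedy Monte Carlo search with a geometrically shrinking step size
# (a zero-temperature caricature of simulated annealing).
p = np.array([0.0, 0.0, 0.0, 0.5])   # poor initial guess: eta0 = lam = a = 1
c0 = c = cost(p)
step = 1.0
for _ in range(20000):
    trial = p + step * rng.standard_normal(4) * np.array([1.0, 1.0, 0.3, 0.1])
    c_t = cost(trial)
    if c_t < c:                      # accept only improvements
        p, c = trial, c_t
    step *= 0.9997                   # "cooling" of the step size

print(f"cost: {c0:.3g} -> {c:.3g}")
```

Working in log-parameter space means a single step size is meaningful across parameters that differ by orders of magnitude, which is the point the abstract makes about rheological data.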
Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Qin; Jiang, Xiao-Guo
2012-09-01
To study the spectrum reconstruction of the 20 MV X-ray generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuations of the X-ray in the attenuators of different thicknesses and thus provide the transmission data. As is known, the spectrum estimation from transmission data is an ill-conditioned problem. The method based on iterative perturbations is employed to derive the X-ray spectra, where initial guesses are used to start the process. This algorithm takes into account not only the minimization of the differences between the measured and the calculated transmissions but also the smoothness feature of the spectrum function. In this work, various filter materials are put to use as the attenuator, and the condition for an accurate and robust solution of the X-ray spectrum calculation is demonstrated. The influences of the scattering photons within different intervals of emergence angle on the X-ray spectrum reconstruction are also analyzed.
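The transmission-to-spectrum inversion can be illustrated with a multiplicative iterative scheme (an MLEM-style update standing in for the paper's perturbation method, which additionally enforces smoothness). The attenuation coefficients, thicknesses, and spectrum below are invented for illustration.

```python
import numpy as np

# Toy forward model: transmission through attenuators of thickness t_j,
# T_j = sum_i s_i * exp(-mu_i * t_j), with the spectrum s normalized to 1.
mu = np.linspace(0.2, 2.0, 30)          # per-energy-bin attenuation [1/cm]
t = np.linspace(0.0, 10.0, 25)          # attenuator thicknesses [cm]
A = np.exp(-np.outer(t, mu))            # A[j, i] = exp(-mu_i * t_j)

true_s = np.exp(-0.5 * ((np.arange(30) - 18) / 5.0) ** 2)
true_s /= true_s.sum()
T_meas = A @ true_s                     # noiseless "measured" transmissions

# Multiplicative update: every factor is positive, so the spectrum estimate
# stays non-negative; start from a flat initial guess.
s = np.full(30, 1.0 / 30.0)
res0 = np.linalg.norm(A @ s - T_meas)
for _ in range(500):
    ratio = T_meas / (A @ s)
    s *= (A.T @ ratio) / A.sum(axis=0)
res = np.linalg.norm(A @ s - T_meas)

print(f"transmission residual: {res0:.2e} -> {res:.2e}")
```

Because the problem is ill-conditioned, many spectra reproduce the transmissions almost equally well; this is why the paper's algorithm also penalizes non-smooth solutions rather than fitting the data alone.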
NASA Astrophysics Data System (ADS)
Lynam, Alfred E.
2015-04-01
Multiple-satellite-aided capture is a ΔV-efficient technique for capturing a spacecraft into orbit at Jupiter. However, finding the times when the Galilean moons of Jupiter align such that three or four of them can be encountered in a single pass is difficult using standard astrodynamics algorithms such as Lambert's problem. In this paper, we present simple but powerful techniques that simplify the dynamics and geometry of the Galilean satellites so that many of these triple- and quadruple-satellite-aided capture sequences can be found quickly over an extended 60-year time period from 2020 to 2080. The techniques find many low-fidelity trajectories that could be used as initial guesses for future high-fidelity optimization. Results indicate the existence of approximately 3,100 unique triple-satellite-aided capture trajectories and 6 unique quadruple-satellite-aided capture trajectories during the 60-year time period. The entire search takes less than one minute of computational time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch
2015-06-28
We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling's iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
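Hotelling's iteration, X_{k+1} = X_k (2I - A X_k), roughly doubles the number of correct digits per step once it is in its convergence region. The dense sketch below shows both a cold start with the classical norm-scaled guess and an MD-style warm start from the inverse of a slightly perturbed matrix; the random SPD matrices are stand-ins, not Kohn-Sham preconditioners, and the sparse filtering discussed in the abstract is omitted.

```python
import numpy as np

def hotelling_inverse(A, X0, iters):
    """Hotelling (Newton-Schulz) iteration for A^-1: X <- X (2I - A X)."""
    I = np.eye(A.shape[0])
    X = X0
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)
    return X

rng = np.random.default_rng(2)
n = 50
B = rng.standard_normal((n, n))
A = B.T @ B + n * np.eye(n)            # well-conditioned SPD test matrix

# Cold start: the classical scaling X0 = A^T / (||A||_1 ||A||_inf) guarantees
# convergence, but needs many iterations.
X0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
X = hotelling_inverse(A, X0, 30)
print(np.linalg.norm(A @ X - np.eye(n)))    # small residual

# Warm start, as in MD: invert a slightly changed matrix starting from the
# previous inverse; a handful of iterations suffices.
A2 = A + 1e-3 * rng.standard_normal((n, n))
X2 = hotelling_inverse(A2, X, 6)
print(np.linalg.norm(A2 @ X2 - np.eye(n)))
```

The warm-start behavior is the key point of the abstract: since consecutive MD steps change the matrix only slightly, the previous inverse is already deep inside the quadratic convergence region.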
Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].
Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N
2014-09-20
Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
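The same trick applies to any pixel-wise forward model. The sketch below fits a 2-D Gaussian spot (a deliberately simple stand-in for a scattering model) to a random 5% subset of the pixels of a synthetic image using SciPy's least-squares solver; the image size, noise level, and parameters are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
H = W = 200
yy, xx = np.mgrid[0:H, 0:W].astype(float)

def model(p, x, y):
    """2-D Gaussian spot: p = (x0, y0, sigma, amplitude)."""
    x0, y0, sigma, amp = p
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

true_p = np.array([101.3, 87.6, 12.0, 1.0])
image = model(true_p, xx, yy) + 0.01 * rng.standard_normal((H, W))

# Fit using only a random 5% subset of the pixels.
idx = rng.choice(H * W, size=H * W // 20, replace=False)
xs, ys = xx.ravel()[idx], yy.ravel()[idx]
vals = image.ravel()[idx]

fit = least_squares(lambda p: model(p, xs, ys) - vals,
                    x0=[90.0, 95.0, 8.0, 0.5])
print(fit.x)   # close to true_p despite using 1/20 of the data
```

Since each residual evaluation costs time proportional to the number of pixels used, the subset fit is roughly 20x cheaper per iteration here, mirroring the speedups reported in the abstract.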
NASA Astrophysics Data System (ADS)
Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.
1999-04-01
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun., 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshmann et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed.
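Blending is the standard cure when a Picard-style fixed-point map is locally an expansion: x_{k+1} = (1 - b) x_k + b F(x_k). A linear caricature shows why the raw iteration diverges while a damped, or periodically varied, blending parameter converges; the map and parameter values are invented and have nothing to do with W7-X itself.

```python
def iterate(F, x0, blends, steps):
    """Blended fixed-point iteration, cycling through the blending parameters."""
    x = x0
    for k in range(steps):
        b = blends[k % len(blends)]
        x = (1.0 - b) * x + b * F(x)
    return x

F = lambda x: 4.0 - 2.0 * x      # fixed point x* = 4/3, but F'(x*) = -2
x_star = 4.0 / 3.0

raw = iterate(F, 1.0, [1.0], 40)           # undamped: error doubles each step
damped = iterate(F, 1.0, [0.3], 40)        # contraction factor |1 - 3b| = 0.1
cycled = iterate(F, 1.0, [0.2, 0.6], 40)   # periodic sequence: factors 0.4 and
                                           # -0.8, i.e. 0.32 per pair of steps
print(abs(raw - x_star), abs(damped - x_star), abs(cycled - x_star))
```

For this linear map the error multiplier per step is 1 - 3b, so the blended iterations contract while the raw one diverges; a periodic sequence can combine an aggressive step with a stabilizing one, which is the spirit of the blending-parameter sequences tested in the paper.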
Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio;
2016-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.
2014-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
NASA Astrophysics Data System (ADS)
Li, Shuang; Zhu, Yongsheng; Wang, Yukai
2014-02-01
Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered as one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and genetic algorithm (GA) are adopted to perform preliminary design, which provides a reasonable initial guess for subsequent accurate optimization. Secondly, Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.
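The three-step pipeline (cheap global search for a guess, transcription, gradient-based solve) can be caricatured on a multimodal test function: a uniform random search plays the role of the shape-based/GA preliminary design, and SciPy's SLSQP stands in for the SQP solver. The function, bounds, and sample count are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(4)

# Stage 1: crude global search (stand-in for the shape-based/GA design),
# producing a reasonable initial guess.
samples = rng.uniform(-5.0, 5.0, size=(2000, 2))
guess = samples[np.argmin([rastrigin(s) for s in samples])]

# Stages 2-3: gradient-based refinement (SLSQP as a stand-in for SQP applied
# to the transcribed NLP).
refined = minimize(rastrigin, guess, method="SLSQP")
naive = minimize(rastrigin, [3.5, 3.5], method="SLSQP")

print("refined:", refined.fun, " naive start:", naive.fun)
```

A gradient-based solver started blind is captured by whatever local basin it lands in; seeded with the global-search guess, it reliably polishes a near-global solution, which is exactly the division of labor the abstract describes.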
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values.
Damage identification via asymmetric active magnetic bearing acceleration feedback control
NASA Astrophysics Data System (ADS)
Zhao, Jie; DeSmidt, Hans; Yao, Wei
2015-04-01
A Floquet-based damage detection methodology for cracked rotor systems is developed and demonstrated on a shaft-disk system. This approach utilizes measured changes in the system natural frequencies to estimate the severity and location of shaft structural cracks during operation. The damage detection algorithm obtains its initial guess by a least-squares method and then refines the damage parameter vector iteratively through eigenvector updating. An active magnetic bearing is introduced to break the symmetry of the rotor system, and the tuning range of suitable stiffness/virtual mass gains is studied. The system model is built using an energy method, and the equations of motion are derived by applying the assumed-modes method and Lagrange's principle. In addition, the crack model is based on the Strain Energy Release Rate (SERR) concept in fracture mechanics. Finally, the method is synthesized via harmonic balance, and numerical examples for a shaft/disk system demonstrate its effectiveness in detecting both the location and severity of structural damage.
NASA Astrophysics Data System (ADS)
McElroy, Kenneth L., Jr.
1992-12-01
A method is presented for the determination of neutral gas densities in the ionosphere from rocket-borne measurements of UV atmospheric emissions. Computer models were used to calculate an initial guess for the neutral atmosphere. Using this neutral atmosphere, intensity profiles for the N2 (0,5) Vegard-Kaplan band, the N2 Lyman-Birge-Hopfield band system, and the OI 2972 Å line were calculated and compared with the March 1990 NPS MUSTANG data. The neutral atmospheric model was modified and the intensity profiles recalculated until a fit with the data was obtained. The neutral atmosphere corresponding to the intensity profile that fit the data was assumed to be the atmospheric composition prevailing at the time of the observation. The ion densities were then calculated from the neutral atmosphere using a photochemical model. The electron density profile calculated by this model was compared with the electron density profile measured by the U.S. Air Force Geophysics Laboratory at a nearby site.
Why the MDGs need good governance in pharmaceutical systems to promote global health.
Kohler, Jillian Clare; Mackey, Tim Ken; Ovtcharenko, Natalia
2014-01-21
Corruption in the health sector can hurt health outcomes. Improving good governance can in turn help prevent health-related corruption. We understand good governance as having the following characteristics: it is consensus-oriented, accountable, transparent, responsive, equitable and inclusive, effective and efficient, follows the rule of law, is participatory and should in theory be less vulnerable to corruption. By focusing on the pharmaceutical system, we explore some of the key lessons learned from existing initiatives in good governance. As the development community begins to identify post-2015 Millennium Development Goals targets, it is essential to evaluate programs in good governance in order to build on these results and establish sustainable strategies. This discussion on the pharmaceutical system illuminates why. Considering pharmaceutical governance initiatives such as those launched by the World Bank, World Health Organization, and the Global Fund, we argue that country ownership of good governance initiatives is essential but also any initiative must include the participation of impartial stakeholders. Understanding the political context of any initiative is also vital so that potential obstacles are identified and the design of any initiative is flexible enough to make adjustments in programming as needed. Finally, the inherent challenge which all initiatives face is adequately measuring outcomes from any effort. However in fairness, determining the precise relationship between good governance and health outcomes is rarely straightforward. Challenges identified in pharmaceutical governance initiatives manifest in different forms depending on the nature and structure of the initiative, but their regular occurrence and impact on population-based health demonstrates growing importance of addressing pharmaceutical governance as a key component of the post-2015 Millennium Development Goals. 
Specifically, these challenges need to be acknowledged and responded to with global cooperation and innovation to establish localized and evidence-based metrics for good governance to promote global pharmaceutical safety.
Spectral edge: gradient-preserving spectral mapping for image fusion.
Connah, David; Drew, Mark S; Finlayson, Graham D
2015-12-01
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance diffusion-tensor imaging.
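The core operation, mapping the 2x2 structure tensor of an N-channel gradient field exactly onto a lower-dimensional gradient field, can be sketched per pixel via a matrix square root. Reintegration of the resulting gradients and the RGB color constraints are omitted; the random image below is a stand-in for real multispectral data.

```python
import numpy as np

rng = np.random.default_rng(5)
H, W, N = 16, 16, 7
img = rng.random((H, W, N))                 # toy N-channel image

# Per-channel spatial gradients: J[y, x] is the 2 x N Jacobian at each pixel.
gy, gx = np.gradient(img, axis=(0, 1))
J = np.stack([gy, gx], axis=2)              # shape (H, W, 2, N)

out = np.zeros((H, W, 2, 3))                # 2 x 3 output gradient per pixel
for i in range(H):
    for j in range(W):
        T = J[i, j] @ J[i, j].T             # 2 x 2 structure tensor
        w, V = np.linalg.eigh(T)
        T_half = V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.T
        out[i, j, :, :2] = T_half           # carry the contrast in two channels;
        # third channel left zero here; the real method distributes contrast
        # across channels according to constraints from an initial RGB rendering.

# The low-dimensional gradients reproduce the high-dimensional contrast exactly:
err = max(np.linalg.norm(out[i, j] @ out[i, j].T - J[i, j] @ J[i, j].T)
          for i in range(H) for j in range(W))
print(err)   # zero up to floating-point roundoff
```

Since out @ out.T equals the square of the symmetric square root, the 3-channel gradient field has the same structure tensor, and hence the same local contrast magnitude and orientation, as the N-channel input at every pixel.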
Setting the scene for SWOT: global maps of river reach hydrodynamic variables
NASA Astrophysics Data System (ADS)
Schumann, Guy J.-P.; Durand, Michael; Pavelsky, Tamlin; Lion, Christine; Allen, George
2017-04-01
Credible and reliable characterization of discharge from the Surface Water and Ocean Topography (SWOT) mission using the Manning-based algorithms needs prior estimates constraining reach-scale channel roughness, base flow and river bathymetry. In some places, any one of those variables may exist locally or even regionally as a measurement, which is often only at a station, or sometimes as a basin-wide model estimate. However, to date none of them exists at the scale required for SWOT, and they thus need to be mapped at a continental scale. The prior estimates will be employed for producing initial discharge estimates, which will be used as starting guesses for the various Manning-based algorithms, to be refined using the SWOT measurements themselves. A multitude of reach-scale variables were derived, including Landsat-based width, SRTM slope and accumulation area. As a possible starting point for building the prior database of low flow, river bathymetry and channel roughness estimates, we employed a variety of sources, including data from all GRDC records, simulations from long-time runs of the global water balance model (WBM), and reach-based calculations from hydraulic geometry relationships as well as Manning's equation. Here, we present the first global maps of this prior database with some initial validation, caveats and prospective uses.
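For a rectangular reach, a prior discharge estimate follows directly from Manning's equation, Q = (1/n) A R^(2/3) S^(1/2). The sketch below shows the arithmetic for one hypothetical reach; all numbers are invented for illustration, not taken from the prior database described above.

```python
import math

def manning_discharge(width_m, depth_m, slope, n_manning):
    """Manning's equation for a rectangular channel cross-section."""
    area = width_m * depth_m                 # flow area A [m^2]
    perimeter = width_m + 2.0 * depth_m      # wetted perimeter P [m]
    radius = area / perimeter                # hydraulic radius R = A / P [m]
    return area * radius ** (2.0 / 3.0) * math.sqrt(slope) / n_manning

# Hypothetical reach: 50 m wide, 2 m deep at low flow, slope 1e-4, n = 0.035.
q = manning_discharge(50.0, 2.0, 1.0e-4, 0.035)
print(f"prior discharge ~ {q:.1f} m^3/s")
```

Width and slope come from remote sensing (Landsat, SRTM), while depth and roughness are exactly the unobserved quantities the prior database must supply, which is why errors in those two terms dominate the uncertainty of such first-guess discharges.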
A neutron spectrometer based on temperature variations in superheated drop compositions
NASA Astrophysics Data System (ADS)
Apfel, Robert E.; d'Errico, Francesco
2002-01-01
The response of superheated drop detectors (SDDs) to neutron radiation varies in a self-consistent manner with variations in temperature and pressure, making such compositions suitable for neutron spectrometry. The advantage of this approach is that the response functions of candidate materials versus energy as the temperature or pressure is varied are nested and have distinct thresholds, with no thermal neutron response. These characteristics permit unfolding without the uncertainties associated with other spectrometry techniques, where multiple solutions are possible, thus requiring an initial guess of the spectrum. A spectrometer was developed based on the well-established technology for acoustic sensing of bubble events interfaced with a proportional-integral-derivative temperature controller. The active monitor for neutrons, called REMbrandt™, was used as the platform for controlling temperature on a SDD probe and for data acquisition, thereby automating the process of measuring the neutron energy spectrum. The new instrument, called REM-SPEC™, implements and automates the original BINS approach: it adjusts the temperature of the SDD vial in increasing steps and measures the bubble event rate at each step. By using two distinct SDD materials with overlapping responses, the 0.1-20 MeV range of energies relevant to practical spectrometry is readily covered. Initial experiments with an Am-Be source validate the operational protocols of this device.
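Nested threshold responses make the unfolding matrix (approximately) triangular, so the spectrum can be recovered by back-substitution from the highest threshold down, with no initial guess of the spectrum required. The response values and fluences below are invented for illustration.

```python
import numpy as np

# 5 detector settings (temperature steps) x 5 energy groups. Setting k responds
# only to groups at or above its threshold, so R is upper-triangular.
R = np.array([
    [1.0, 0.9, 0.8, 0.7, 0.6],
    [0.0, 1.0, 0.9, 0.8, 0.7],
    [0.0, 0.0, 1.0, 0.9, 0.8],
    [0.0, 0.0, 0.0, 1.0, 0.9],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
phi_true = np.array([3.0, 5.0, 2.0, 1.0, 0.5])   # group fluences
counts = R @ phi_true                            # bubble counts per setting

# Back-substitution: the highest-threshold setting pins down the top group,
# then each lower setting adds exactly one more group.
phi = np.zeros(5)
for k in range(4, -1, -1):
    phi[k] = (counts[k] - R[k, k + 1:] @ phi[k + 1:]) / R[k, k]

print(phi)   # recovers phi_true exactly for this noiseless toy
```

This unique, guess-free solvability is what the abstract means by unfolding "without the uncertainties associated with other spectrometry techniques, where multiple solutions are possible."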
ERIC Educational Resources Information Center
DeRosa, Bill
1988-01-01
Provides a game to help develop the skill of estimating and making educated guesses. Uses facts about cows to explain some problems associated with the dairy industry. Includes cards and rules for playing, class adaptation procedures, follow-up activities, and availability of background information on humane concerns. (RT)
The Scaling of Sociometric Nominations.
ERIC Educational Resources Information Center
Veldman, Donald J.; Sheffield, John R.
1979-01-01
A sociometric nominations instrument called Guess Who was administered to 13,045 elementary school children and then subjected to an image analysis. Four factors were extracted--disruptive, bright, dull, and quiet/well-behaved--and related to teacher ratings, self-reports and other measures. (Author/JKS)
Catalyzing Genetic Thinking in Undergraduate Mathematics Education
ERIC Educational Resources Information Center
King, Samuel Olugbenga
2016-01-01
In undergraduate mathematics education, atypical problem solving approaches are usually discouraged because they are not adaptive to systematic deduction on which undergraduate instructional systems are predicated. I present preliminary qualitative research evidence that indicates that these atypical approaches, such as genetic guessing, which…
ERIC Educational Resources Information Center
Chilcote, Elinor; And Others
1975-01-01
Games and activities, which are fun, practical, and related to the child's world, are presented. Suggestions are given for building skills in estimating lengths, guessing how many, recognizing patterns in counting, multiplying with "waffles," classifying by attributes, and adding and subtracting with special cards, relays, and play…
Direct and indirect capture of near-Earth asteroids in the Earth-Moon system
NASA Astrophysics Data System (ADS)
Tan, Minghu; McInnes, Colin; Ceriotti, Matteo
2017-09-01
Near-Earth asteroids have attracted attention for both scientific and commercial mission applications. Due to the fact that the Earth-Moon L1 and L2 points are candidates for gateway stations for lunar exploration, and an ideal location for space science, capturing asteroids and inserting them into periodic orbits around these points is of significant interest for the future. In this paper, we define a new type of lunar asteroid capture, termed direct capture. In this capture strategy, the candidate asteroid leaves its heliocentric orbit after an initial impulse, with its dynamics modeled using the Sun-Earth-Moon restricted four-body problem until its insertion, with a second impulse, onto the L2 stable manifold in the Earth-Moon circular restricted three-body problem. A Lambert arc in the Sun-asteroid two-body problem is used as an initial guess and a differential corrector used to generate the transfer trajectory from the asteroid's initial obit to the stable manifold associated with Earth-Moon L2 point. Results show that the direct asteroid capture strategy needs a shorter flight time compared to an indirect asteroid capture, which couples capture in the Sun-Earth circular restricted three-body problem and subsequent transfer to the Earth-Moon circular restricted three-body problem. Finally, the direct and indirect asteroid capture strategies are also applied to consider capture of asteroids at the triangular libration points in the Earth-Moon system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bieler, Noah S.; Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch
2014-11-28
In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006-3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method involves "filling up" all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy is proposed to this problem, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow growth calculation, and its negative is used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that applying SGMG in λ-LEUS reduces the preoptimization time by about a factor of four.
NASA Astrophysics Data System (ADS)
Bieler, Noah S.; Hünenberger, Philippe H.
2014-11-01
In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006-3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method involves "filling up" all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy is proposed to this problem, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow growth calculation, and its negative is used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that applying SGMG in λ-LEUS reduces the preoptimization time by about a factor of four.
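A toy numerical sketch of the SGMG idea, with a hypothetical one-dimensional PMF and an analytic mean force standing in for the actual λ-LEUS simulation: integrate a short slow-growth estimate of the PMF, then seed the memory with its negative so the biased landscape starts out nearly flat.

```python
import numpy as np

lam = np.linspace(0.0, 1.0, 101)

def dG_dlam(l):
    # hypothetical mean generalized force <dH/dlambda> along lambda
    return 10.0 * np.cos(2 * np.pi * l)

# "Slow growth" PMF estimate: cumulative trapezoidal integral of the force.
G_est = np.concatenate(([0.0], np.cumsum(
    0.5 * (dG_dlam(lam[1:]) + dG_dlam(lam[:-1])) * np.diff(lam))))

bias = -G_est                      # initial memory = negative PMF estimate
G_true = 10.0 / (2 * np.pi) * np.sin(2 * np.pi * lam)
flattened = G_true + bias          # effective landscape after seeding

# The seeded landscape is far flatter than the unbiased PMF, so the
# remaining "filling up" work is small.
print(np.ptp(flattened) < 0.05 * np.ptp(G_true))
```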
Global emissions of terpenoid VOCs from terrestrial vegetation in the last millennium
Acosta Navarro, J C; Smolander, S; Struthers, H; Zorita, E; Ekman, A M L; Kaplan, J O; Guenther, A; Arneth, A; Riipinen, I
2014-01-01
We investigated the millennial variability (1000 A.D.–2000 A.D.) of global biogenic volatile organic compound (BVOC) emissions by using two independent numerical models: the Model of Emissions of Gases and Aerosols from Nature (MEGAN), for isoprene, monoterpenes, and sesquiterpenes, and the Lund-Potsdam-Jena-General Ecosystem Simulator (LPJ-GUESS), for isoprene and monoterpenes. We found the millennial trends of global isoprene emissions to be mostly affected by land cover and atmospheric carbon dioxide changes, whereas monoterpene and sesquiterpene emission trends were dominated by temperature change. Isoprene emissions declined substantially in regions with large and rapid land cover change. In addition, isoprene emission sensitivity to drought proved to have significant short-term global effects. By the end of the past millennium, MEGAN isoprene emissions were 634 TgC yr−1 (13% and 19% less than during 1750–1850 and 1000–1200, respectively), and LPJ-GUESS emissions were 323 TgC yr−1 (15% and 20% less than during 1750–1850 and 1000–1200, respectively). Monoterpene emissions were 89 TgC yr−1 (10% and 6% higher than during 1750–1850 and 1000–1200, respectively) in MEGAN, and 24 TgC yr−1 (2% higher and 5% less than during 1750–1850 and 1000–1200, respectively) in LPJ-GUESS. MEGAN sesquiterpene emissions were 36 TgC yr−1 (10% and 4% higher than during 1750–1850 and 1000–1200, respectively). Although both models capture similar emission trends, the magnitudes of the emissions differ. This highlights the importance of building better constraints on VOC emissions from terrestrial vegetation. PMID:25866703
Application Of Multi-grid Method On China Seas' Temperature Forecast
NASA Astrophysics Data System (ADS)
Li, W.; Xie, Y.; He, Z.; Liu, K.; Han, G.; Ma, J.; Li, D.
2006-12-01
Correlation scales have been used for decades in traditional three-dimensional variational (3D-Var) data assimilation schemes to estimate the background error covariance for numerical forecasts and reanalyses of the atmosphere and ocean. However, this scheme still has some drawbacks. First, the correlation scales are difficult to determine accurately. Second, positive definiteness of the first-guess error covariance matrix cannot be guaranteed unless the correlation scales are sufficiently small. Xie et al. (2005) indicated that a traditional 3D-Var corrects only errors at certain wavelengths, and that its accuracy depends on the accuracy of the first-guess covariance. In general, short-wavelength errors cannot be well corrected until long-wavelength errors are, so an inaccurate first-guess covariance may mistakenly treat long-wave errors as short-wave ones and produce an erroneous analysis. To quickly minimize the errors of long and short waves successively, a new 3D-Var data assimilation scheme, called the multi-grid data assimilation scheme, is proposed in this paper. We applied this scheme in a two-month data assimilation and forecast experiment, assimilating shipboard SST and temperature profiles into a numerical model of the China Seas, with favorable results: compared with the traditional 3D-Var scheme, the new scheme has higher forecast accuracy and a lower forecast root-mean-square (RMS) error. Furthermore, the scheme was applied to assimilate shipboard SST, AVHRR Pathfinder Version 5.0 SST, and temperature profiles simultaneously in a ten-month forecast experiment on the sea temperature of the China Seas, which also produced successful forecasts. In particular, the new scheme demonstrated great numerical efficiency in these analyses.
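The coarse-to-fine idea can be illustrated in one dimension, with block averages standing in for a hypothetical sequence of analysis grids. This is a sketch of the principle (long waves corrected before short waves), not the authors' implementation:

```python
import numpy as np

x = np.linspace(0, 1, 64, endpoint=False)
truth = np.sin(2 * np.pi * x) + 0.3 * np.sin(16 * np.pi * x)
background = np.zeros_like(truth)          # first guess
obs = truth                                # perfect observations for clarity

def coarse_correction(residual, n_blocks):
    # Analysis increment on a grid of n_blocks cells: block averages.
    inc = np.repeat(residual.reshape(n_blocks, -1).mean(axis=1),
                    residual.size // n_blocks)
    return inc

analysis = background.copy()
errors = [np.sqrt(np.mean((obs - analysis) ** 2))]
for n_blocks in (4, 16, 64):               # coarse -> fine grids
    analysis = analysis + coarse_correction(obs - analysis, n_blocks)
    errors.append(np.sqrt(np.mean((obs - analysis) ** 2)))

# RMS error drops monotonically as finer grids pick up shorter waves.
print(all(e1 < e0 for e0, e1 in zip(errors, errors[1:])))
```

Each pass corrects what the coarser grids left behind, mirroring the successive minimization of long- then short-wavelength errors.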
Baethge, Christopher; Assall, Oliver P; Baldessarini, Ross J
2013-01-01
Blinding is an integral part of many randomized controlled trials (RCTs). However, both blinding and blinding assessment seem to be rarely documented in trial reports. We systematically reviewed articles on RCTs in schizophrenia and affective-disorders research during 2000-2010. Among 2,467 publications, 61 (2.5%; 95% confidence interval: 1.9-3.1%) reported assessing participant, rater, or clinician blinding: 5/672 reports on schizophrenia (0.7%; 0.3-1.6%) and 33/1,079 (3.1%; 2.1-4.2%) on affective disorders, without significant trends across the decade. Blinding was rarely assessed at the beginning of a trial; in most studies it was assessed at the end. The proportions of patients' and raters' correct guesses of study arm averaged 54.4% and 62.0% per study, with slightly more correct guesses in treatment arms than in placebo arms. Three-fourths of responders correctly guessed that they received the active agent. Blinding assessment was more frequently reported in papers on psychotherapy and brain stimulation than in drug trials (5.1%, 1.7-11.9%, vs. 8.3%, 4.3-14.4%, vs. 2.1%, 1.5-2.8%). Lack of assessment of blinding was associated with (a) positive findings, (b) full industrial sponsorship, and (c) a diagnosis of schizophrenia. There was a moderate association between treatment success and the blinding status of both trial participants (r = 0.51, p = 0.002) and raters (r = 0.55, p = 0.067). Many RCT reports did not meet CONSORT standards regarding documentation of persons blinded (60%) or of efforts to match interventions (50%). Recent treatment trials in major psychiatric disorders rarely reported on or evaluated blinding. We recommend routine documentation of blinding strategies in reports. Copyright © 2013 S. Karger AG, Basel.
Simple and Effective Algorithms: Computer-Adaptive Testing.
ERIC Educational Resources Information Center
Linacre, John Michael
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Backpocket: Activities for Nature Study.
ERIC Educational Resources Information Center
Hendry, Ian; And Others
1995-01-01
Leading naturalist-teachers share outdoor learning activities and techniques, including using binoculars as magnifiers, scavenger hunts, games such as "what's it called" and "I spy," insect study, guessing the age of trees by examining the bark, leading bird walks, exploring nature in the community, and enhancing nature hikes…
Sustainability - What are the Odds? Guessing the Future of our Environment, Economy, and Society
This article examines the concept of sustainability from a global perspective, describing how alternative futures might develop in the environmental, economic, and social dimensions. The alternatives to sustainability appear to be (a) a catastrophic failure of life support, econo...
The Cognitive Dimensions of Information Structures.
ERIC Educational Resources Information Center
Green, T. R. G.
1994-01-01
Describes a set of terms (viscosity, hidden dependencies, imposes guess-ahead, abstraction level, and secondary notation) intended as a set of discussion tools for nonspecialists to converse about the structural features of a range of information artifacts. Explains the terms using spreadsheets as an example. (SR)
ERIC Educational Resources Information Center
Parrone, Edward G.; Montalto, Michael P.
2008-01-01
The importance of athletic fields has increased in today's society because of the popularity of sporting events. As a result, education administrators face challenges when dealing with their athletic facilities. Decisionmakers constantly are being second-guessed in regard to outdated, overused facilities and lack of budget. In this article, the…
ERIC Educational Resources Information Center
Riendeay, Diane, Ed.
2013-01-01
Discrepant events are surprising occurrences that challenge learners' preconceptions. These events puzzle students because the results are contrary to what they believe should happen. Due to the unexpected outcome, students experience cognitive disequilibrium, and this often leads to a desire to solve the problem. Discrepant events are great…
An improved authenticated key agreement protocol for telecare medicine information system.
Liu, Wenhao; Xie, Qi; Wang, Shengbao; Hu, Bin
2016-01-01
In telecare medicine information systems (TMIS), identity authentication of patients plays an important role and has been widely studied. Generally, it is realized by an authenticated key agreement protocol, and many such protocols have been proposed in the literature. Recently, Zhang et al. pointed out that Islam et al.'s protocol suffers from the following security weaknesses: (1) any legal but malicious patient can reveal another user's identity; (2) an attacker can launch an off-line password guessing attack and an impersonation attack if the patient's identity is compromised. Zhang et al. also proposed an improved authenticated key agreement scheme with privacy protection for TMIS. However, in this paper, we point out that Zhang et al.'s scheme cannot resist off-line password guessing attacks and fails to provide revocation of a lost/stolen smartcard. To overcome these weaknesses, we propose an improved protocol, whose security and authentication can be proven using ProVerif, a formal verification tool based on the applied pi calculus.
Aligning Spinoza with Descartes: An informed Cartesian account of the truth bias.
Street, Chris N H; Kingstone, Alan
2017-08-01
There is a bias towards believing information is true rather than false. The Spinozan account claims there is an early, automatic bias towards believing; only afterwards can people engage in an effortful re-evaluation and disbelieve the information. Supporting this account, there is a greater bias towards believing information is true when under cognitive load. However, building on the Adaptive Lie Detector (ALIED) theory, the informed Cartesian account can equally explain these data. This account claims the bias under load is not evidence of automatic belief; rather, people are undecided, but if forced to guess they can rely on context information to make an informed judgement. The account predicts, and we found, that if people can explicitly indicate their uncertainty, there should be no bias towards believing, because they are no longer required to guess. Thus, we conclude that belief formation can be better explained by an informed Cartesian account: an attempt to make an informed judgment under uncertainty. © 2016 The British Psychological Society.
Serial consolidation of orientation information into visual short-term memory.
Liu, Taosheng; Becker, Mark W
2013-06-01
Previous research suggests that there is a limit to the rate at which items can be consolidated in visual short-term memory (VSTM). This limit could be due to either a serial or a limited-capacity parallel process. Historically, it has proven difficult to distinguish between these two types of processes. In the present experiment, we took a novel approach that allowed us to do so. Participants viewed two oriented gratings either sequentially or simultaneously and reported one of the gratings' orientations via the method of adjustment. Performance was worse in the simultaneous than in the sequential condition. We fit the data with a mixture model that assumes performance is limited by a noisy memory representation plus random guessing. Critically, the serial and limited-capacity parallel processes make distinct predictions regarding the model's guessing and memory-precision parameters. We found strong support for a serial process, which implies that one can consolidate only a single orientation into VSTM at a time.
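A minimal sketch of the kind of mixture model described, with a narrow normal standing in for the circular memory component and parameters recovered by grid-search maximum likelihood. All values are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
g_true, sigma_true, n = 0.3, 0.3, 20000

# Response errors: with probability g a uniform random guess on the
# circle, otherwise a noisy memory report around the true orientation.
guess = rng.random(n) < g_true
err = np.where(guess,
               rng.uniform(-np.pi, np.pi, n),
               rng.normal(0.0, sigma_true, n))

def neg_log_lik(g, sigma, err):
    dens = (g / (2 * np.pi) +
            (1 - g) * np.exp(-err**2 / (2 * sigma**2)) /
            (sigma * np.sqrt(2 * np.pi)))
    return -np.sum(np.log(dens))

# Grid-search MLE over the guess rate and memory noise.
gs = np.linspace(0.05, 0.6, 56)
sigmas = np.linspace(0.1, 0.6, 51)
nll = np.array([[neg_log_lik(g, s, err) for s in sigmas] for g in gs])
i, j = np.unravel_index(np.argmin(nll), nll.shape)
print(round(gs[i], 2), round(sigmas[j], 2))  # close to (0.3, 0.3)
```

Comparing how the fitted guess-rate and precision parameters change across conditions is what lets the serial and parallel accounts be told apart.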
Burleson, Kathryn M; Olimpo, Jeffrey T
2016-06-01
The sheer amount of terminology and conceptual knowledge required for anatomy and physiology can be overwhelming for students. Educational games are one approach to reinforce such knowledge. In this activity, students worked collaboratively to review anatomy and physiology concepts by creating arrays of descriptive tiles to define a term. Once guessed, students located the structure or process within diagrams of the body. The game challenged students to think about course vocabulary in novel ways and to use their collective knowledge to get their classmates to guess the terms. Comparison of pretest/posttest/delayed posttest data revealed that students achieved statistically significant learning gains for each unit after playing the game, and a survey of student perceptions demonstrated that the game was helpful for learning vocabulary as well as fun to play. The game is easily adaptable for a variety of lower- and upper-division courses. Copyright © 2016 The American Physiological Society.
Solving the Swath Segment Selection Problem
NASA Technical Reports Server (NTRS)
Knight, Russell; Smith, Benjamin
2006-01-01
Several artificial-intelligence search techniques have been tested as means of solving the swath segment selection problem (SSSP) -- a real-world problem that is not only of interest in its own right, but is also useful as a test bed for search techniques in general. In simplest terms, the SSSP is the problem of scheduling the observation times of an airborne or spaceborne synthetic-aperture radar (SAR) system to effect the maximum coverage of a specified area (denoted the target), given a schedule of downlinks (opportunities for radio transmission of SAR scan data to a ground station), given the limit on the quantity of SAR scan data that can be stored in an onboard memory between downlink opportunities, and given the limit on the achievable downlink data rate. The SSSP is NP-complete (short for "nondeterministic polynomial time complete" -- characteristic of a class of intractable problems that can be solved only by use of computers capable of making guesses and then checking the guesses in polynomial time).
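The memory-constrained core of the SSSP has a knapsack flavor. A toy instance with hypothetical coverage and data-volume numbers, solved by exhaustive "guess and check" -- feasible only for tiny instances, consistent with NP-completeness:

```python
from itertools import combinations

# Hypothetical segments: (coverage_km2, data_volume_gbit)
segments = [(50, 4), (30, 3), (45, 5), (10, 1), (25, 2)]
memory_capacity = 8  # Gbit storable between downlink opportunities

# Enumerate every subset of segments ("guess"), keep the feasible one
# with the most coverage ("check").
best_cov, best_set = 0, ()
for r in range(len(segments) + 1):
    for subset in combinations(range(len(segments)), r):
        vol = sum(segments[i][1] for i in subset)
        cov = sum(segments[i][0] for i in subset)
        if vol <= memory_capacity and cov > best_cov:
            best_cov, best_set = cov, subset
print(best_cov, best_set)  # 90 (0, 1, 3)
```

The enumeration grows as 2^n, which is why heuristic search techniques are the practical route for real SSSP instances.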
Retrieved Products from Simulated Hyperspectral Observations of a Hurricane
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John
2015-01-01
Retrievals were run using the AIRS Science Team Version-6 AIRS-Only retrieval algorithm, which generates a Neural-Net first guess Ts^0, T(p)^0, and q(p)^0 (surface skin temperature, temperature profile, and moisture profile) as a function of observed AIRS radiances. AIRS Science Team Neural-Net coefficients performed very well beneath 300 mb using the simulated radiances, which indicates that the simulated radiances are very realistic. First-guess and retrieved values of T(p) above 300 mb were biased cold, but both represented the model spatial structure very well. QC'd T(p) and q(p) retrievals for all experiments had similar accuracies compared to their own truth fields, and were roughly consistent with results obtained using real data. Spatial coverage of the retrievals, as well as the representativeness of the spatial structure of the storm, improved dramatically with decreasing size of the instrument's FOV. We sent QC'd values of T(p) and q(p) to Bob Atlas at AOML for use as input to OSSE data assimilation experiments.
The speed of metacognition: taking time to get to know one's structural knowledge.
Mealor, Andy D; Dienes, Zoltan
2013-03-01
The time course of different metacognitive experiences of knowledge was investigated using artificial grammar learning. Experiment 1 revealed that when participants are aware of the basis of their judgments (conscious structural knowledge) decisions are made most rapidly, followed by decisions made with conscious judgment but without conscious knowledge of underlying structure (unconscious structural knowledge), and guess responses (unconscious judgment knowledge) were made most slowly, even when controlling for differences in confidence and accuracy. In experiment 2, short response deadlines decreased the accuracy of unconscious but not conscious structural knowledge. Conversely, the deadline decreased the proportion of conscious structural knowledge in favour of guessing. Unconscious structural knowledge can be applied rapidly but becomes more reliable with additional metacognitive processing time whereas conscious structural knowledge is an all-or-nothing response that cannot always be applied rapidly. These dissociations corroborate quite separate theories of recognition (dual-process) and metacognition (higher order thought and cross-order integration). Copyright © 2012 Elsevier Inc. All rights reserved.
Failure of self-consistency in the discrete resource model of visual working memory.
Bays, Paul M
2018-06-03
The discrete resource model of working memory proposes that each individual has a fixed upper limit on the number of items they can store at one time, due to division of memory into a few independent "slots". According to this model, responses on short-term memory tasks consist of a mixture of noisy recall (when the tested item is in memory) and random guessing (when the item is not in memory). This provides two opportunities to estimate capacity for each observer: first, based on their frequency of random guesses, and second, based on the set size at which the variability of stored items reaches a plateau. The discrete resource model makes the simple prediction that these two estimates will coincide. Data from eight published visual working memory experiments provide strong evidence against such a correspondence. These results present a challenge for discrete models of working memory that impose a fixed capacity limit. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.
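The model's prediction that the two capacity estimates coincide can be sketched with synthetic slots-model data; the capacity, set sizes, and precision values below are hypothetical:

```python
import numpy as np

K = 3                                   # true slot capacity in this toy
set_sizes = np.array([1, 2, 3, 4, 6, 8])

# Under the slots model, the guess rate at set size N is max(0, 1 - K/N).
guess_rate = np.maximum(0.0, 1.0 - K / set_sizes)

# Estimate 1: invert the guess rate at the largest set size, K = N*(1-g).
K_from_guessing = set_sizes[-1] * (1 - guess_rate[-1])

# Estimate 2: the stored items' variability stops growing once N >= K,
# so the plateau onset identifies the capacity.
sd = np.where(set_sizes < K, 10.0 * np.sqrt(set_sizes), 10.0 * np.sqrt(K))
K_from_plateau = set_sizes[np.argmax(sd == sd.max())]

print(K_from_guessing, K_from_plateau)  # 3.0 3 -- the model forces agreement
```

The cited experiments test exactly this self-consistency on real data and find that the two estimates systematically disagree.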
MWR3C physical retrievals of precipitable water vapor and cloud liquid water path
Cadeddu, Maria
2016-10-12
The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosondes and ceilometer measurements. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward-model calculations until a predefined convergence criterion is satisfied. The first guess is a vector [PWV, LWP] taken from the neural-network retrieval fields in the NetCDF file. When convergence is achieved, the 'a posteriori' covariance is computed, and its square root is reported in the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations, and ceilometer data are used to constrain the cloud-base height. The RMS error between the brightness temperatures is computed at the last iteration as a consistency check and is written in the last column of the output file.
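A hedged sketch of such an optimal-estimation iteration for a linear toy forward model mapping the state [PWV, LWP] to two brightness temperatures. All matrices and values below are illustrative assumptions; the real retrieval uses a radiative-transfer forward model with radiosonde and ceilometer constraints.

```python
import numpy as np

K = np.array([[1.5, 0.2],      # hypothetical Jacobian
              [0.4, 2.0]])
S_e = np.diag([0.25, 0.25])    # observation error covariance (K^2)
S_a = np.diag([4.0, 1.0])      # prior covariance
x_a = np.array([2.0, 0.1])     # first guess / prior (e.g. neural network)
x_true = np.array([2.5, 0.3])
y = K @ x_true                 # noiseless synthetic observations

# Gauss-Newton optimal-estimation loop; for a linear model it converges
# after the first update.
x = x_a.copy()
for _ in range(10):
    S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
    x_new = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - K @ x + K @ (x - x_a))
    if np.max(np.abs(x_new - x)) < 1e-8:
        x = x_new
        break
    x = x_new

# 'A posteriori' covariance -> reported 1-sigma retrieval uncertainty.
one_sigma = np.sqrt(np.diag(S_hat))
print(np.round(x, 2), np.round(one_sigma, 3))
```

The retrieved state lands between the prior and the observations, weighted by their covariances, and the posterior spread shrinks relative to the prior.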
Analysis and use of VAS satellite data
NASA Technical Reports Server (NTRS)
Fuelberg, Henry E.; Andrews, Mark J.; Beven, John L., II; Moore, Steven R.; Muller, Bradley M.
1989-01-01
Four interrelated investigations have examined the analysis and use of VAS satellite data. A case study of VAS-derived mesoscale stability parameters suggested that they would have been a useful supplement to conventional data in the forecasting of thunderstorms on the day of interest. A second investigation examined the roles of first guess and VAS radiometric data in producing sounding retrievals. Broad-scale patterns of the first guess, radiances, and retrievals frequently were similar, whereas small-scale retrieval features, especially in the dew points, were often of uncertain origin. Two research tasks considered 6.7 micron middle tropospheric water vapor imagery. The first utilized radiosonde data to examine causes for two areas of warm brightness temperature. Subsidence associated with a translating jet streak was important. The second task involving water vapor imagery investigated simulated imagery created from LAMPS output and a radiative transfer algorithm. Simulated image patterns were found to compare favorably with those actually observed by VAS. Furthermore, the mass/momentum fields from LAMPS were powerful tools for understanding causes for the image configurations.
CLAES Product Improvement by use of GSFC Data Assimilation System
NASA Technical Reports Server (NTRS)
Kumer, J. B.; Douglass, Anne (Technical Monitor)
2001-01-01
Recent developments in chemistry transport models (CTM) and in data assimilation systems (DAS) indicate impressive predictive capability for the movement of air parcels and the chemistry that goes on within them. This project was aimed at exploring the use of this capability to achieve improved retrieval of geophysical parameters from remote sensing data. The specific goal was to improve retrieval of the CLAES CH4 data obtained during the active north high-latitude dynamics event of 18 to 25 February 1992. The model capabilities would be used (1) in place of climatology, to improve on the first guess and the a priori fields, and (2) to provide horizontal gradients to include in the retrieval forward model. The retrieval would be implemented with the first forward DAS prediction; the results would feed back to the DAS, and a second DAS prediction of first guess, a priori fields, and gradients would feed into the retrieval. The process would repeat to convergence and then proceed to the next day.
Campo, Shelly; Lowe, John; Andsager, Julie; Morcuende, Jose A
2013-01-01
Background The Internet provides new opportunities for parents of children with difficult illnesses and disabilities to find information and support. The Internet is particularly important for caregivers of children with special needs due to numerous health-related decisions they face. For at-risk populations, online support communities can become key settings and channels for health promotion and communication. Objective This study is an initial exploration of the information-seeking and information-provision processes present in an online support community, which is an area of opportunity and interest for Internet-based medical research and practice. The aim of this study was to explore and describe information-related processes of uncertainty management in relationship to clubfoot. Specifically, the study explored interpersonal communication (information seeking and provision) in an online support community serving the needs of parents of children with clubfoot. Methods The study population consisted of messages posted to an online community by caregivers (parents) of children with clubfoot. The theoretical framework informing the study was the Uncertainty Management Theory (UMT). The study used content analysis to explore and categorize the content of 775 messages. Results Women authored 664 of 775 messages (86%) and men authored 47 messages (6%). Caregivers managed uncertainty through information seeking and provision behaviors that were dynamic and multilayered. The ratio of information-seeking messages to information-provision responses was 1 to 4. All five types of information-seeking behaviors proposed by Brashers’ schema were identified, most of them being correlated. Information seeking using direct questions was found to be positively correlated to self-disclosure (r=.538), offering of a candidate answer (r=.318), and passive information seeking (r=.253). 
Self-disclosure was found to be positively correlated to provision of a candidate answer (r=.324), second-guessing (r=.149), and passive information seeking (r=.366). Provision of a candidate answer was found to be positively correlated with second-guessing (r=.193) and passive information seeking (r=.223). Second-guessing was found to be positively correlated to passive information seeking (r=.311). All correlations reported above were statistically significant (P<0.01). Of the 775 messages analyzed, 255 (33%) identified a medical professional or institution by name. Detailed medical information was provided in 101 (13%) messages, with the main source of information identified being personal experience rather than medical sources. Conclusion Online communities can be an effective channel for caregivers, especially women, to seek and offer information required for managing clubfoot-related uncertainty. To enhance communication with parents, health care institutions may need to invest additional resources in user-friendly online information sources and online interactions with caregivers of children with special illnesses such as clubfoot. Furthermore, explorations of information-seeking and information-provision behaviors in online communities can provide valuable data for interdisciplinary health research and practice. PMID:23470259
NASA Astrophysics Data System (ADS)
Xanthopoulou, Themis; Ertsen, Maurits; Düring, Bleda; Kolen, Jan
2017-04-01
In dry southern Oman, more than a thousand years ago, a large water system connecting the mountain mass with the coastal region was constructed. Its length (up to 30 km) and the fact that the coastal region has a rich groundwater aquifer raise the question of why the system was initially built. It was abandoned a couple of centuries later, only to be partially revived by small farming communities in the 17th to 18th century. The focus of our research is one of the irrigation systems that used the water conveyed by the large water system. Not much is known about how these small irrigation systems functioned in the Wadi Al Jizzi of the greater Sohar region. There are no written records, and we can only make guesses about the way the systems were managed, based on ethnographic studies and traditional Omani techniques. On the other hand, the good preservation state of the canals offers a great opportunity for hydraulic reconstruction of irrigation events. Moreover, the material remains suggest, and at the same time limit, the ways in which humans could have interacted with the system and the water resources of the region: all irrigation activities and some daily activities had to be realized through the canal system, and these actions would have been feasible only if the canal system permitted them. We created a conceptual model of irrigation that includes the human agent and feedback mechanisms through hydraulics, and we then simulated irrigation events using the Sobek software. Scenarios and sensitivity analysis were used to address the unknown aspects of the system. Our research yielded insights into the way the farming community interacted with the larger water system, the levels of co-ordination and co-operation required for successful irrigation, and the predisposition to conflict and power relations.
Spectral analysis of shielded gamma ray sources using precalculated library data
NASA Astrophysics Data System (ADS)
Holmes, Thomas Wesley; Gardner, Robin P.
2015-11-01
In this work, an approach has been developed for determining the intensity of a shielded source by first determining the thicknesses of three different shielding materials from a passively collected gamma-ray spectrum, by making comparisons with predetermined shielded spectra. These evaluations depend on the accuracy and validity of the predetermined library spectra, which were created by varying the thicknesses of the three chosen materials (lead, aluminum, and wood) used to simulate any actual shielding. Each of the spectra was generated using MCNP5 with a sufficiently large number of histories to ensure a low relative error in each channel. The materials were held in the same respective order from source to detector, and each material was assigned three individual thicknesses plus a null condition. This produced two separate data sets spanning 27 total shielding configurations, and corresponding predetermined libraries were created for each radionuclide source used. The technique used to calculate the material thicknesses implements a Levenberg-Marquardt nonlinear search that employs trilinear interpolation within the predetermined libraries, channel by channel, for the supplied unknown input spectrum. Given that the nonlinear parameters require an initial guess, the approach first demonstrates that when the correct values are input, the correct thicknesses are found. It then demonstrates that when multiple trials of random values are input for each of the nonlinear parameters, the average of the calculated solutions that successfully converge also produces the correct thicknesses. In situations with sufficient information known about the detection scenario at hand, the method was shown to produce reasonable results and can serve as a good preliminary solution.
This technique can be applied to a variety of full-spectrum inverse analysis problems, including homeland security applications.
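The Levenberg-Marquardt thickness search described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy exponential-attenuation model stands in for the MCNP-generated library spectra and trilinear interpolation, and all coefficients, counts, and thicknesses are invented for demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical per-channel attenuation coefficients (1/cm) for Pb, Al, wood;
# a real application would interpolate precomputed MCNP library spectra instead.
rng = np.random.default_rng(0)
n_channels = 64
mu = rng.uniform(0.05, 1.5, size=(3, n_channels))          # rows: Pb, Al, wood
unshielded = rng.uniform(100.0, 1000.0, size=n_channels)   # source spectrum

def shielded_spectrum(thicknesses):
    """Exponential attenuation through three slabs, channel by channel."""
    attenuation = np.exp(-(np.asarray(thicknesses)[:, None] * mu).sum(axis=0))
    return unshielded * attenuation

true_t = np.array([0.4, 1.2, 2.5])        # cm of Pb, Al, wood (assumed)
measured = shielded_spectrum(true_t)      # noiseless synthetic "measurement"

def residuals(t):
    # Channel-wise mismatch between the model and the measured spectrum
    return shielded_spectrum(t) - measured

# Levenberg-Marquardt search from an arbitrary initial guess for the thicknesses
fit = least_squares(residuals, x0=[1.0, 1.0, 1.0], method="lm")
print(fit.x)  # recovers approximately [0.4, 1.2, 2.5]
```

With noiseless data and a well-conditioned forward model the search converges from a generic initial guess; the abstract's multi-trial averaging addresses the harder case where random starting values are needed.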
Testing of Strategies for the Acceleration of the Cost Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, Roberto; Vilim, Richard B.
The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimal economic dispatch (ED) and capacity planning solutions for hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e., the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximize the unit profit while simultaneously satisfying individual component operational constraints. The optimization problem is solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method was developed to better estimate the initial guess for the optimization problem. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each of the unit components. The preconditioner samples a set of stochastic power scenarios for each of the N-R HES unit components, and for each scenario the corresponding value of a suitably defined cost function is evaluated.
After a sufficient number of power histories has been simulated, the configuration that ensures the highest profit is selected as the optimal one. The component physical dynamics are represented through suitable ramp constraints, which considerably simplify the numerical solution. In order to test the capabilities of the proposed approach, only the dispatch problem is tackled in the present report: a reference unit configuration is assumed, and each of the N-R HES unit components is assumed to have a fixed installed capacity. As for the next steps, the main improvement will concern the operation strategy of the ES facility. In particular, in order to describe a more realistic battery commitment strategy, the ES operation will be regulated according to electricity price forecasts.
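The preconditioner's sample-score-select loop can be sketched in a few lines. This is an illustrative toy, not the RAVEN/Modelica implementation: the price curve, capacity limits, ramp limit, and profit function are all invented for demonstration, and a single generic component stands in for the BOP/SES/ES mix.

```python
import numpy as np

# Monte Carlo preconditioner sketch: sample random power histories that
# respect capacity and ramp constraints, score each with a profit function,
# and keep the best history as the initial guess for the full optimizer.
rng = np.random.default_rng(42)
n_steps, n_samples = 24, 2000
price = 30.0 + 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_steps))  # $/MWh
p_min, p_max, ramp = 50.0, 300.0, 40.0   # MW floor, MW ceiling, MW per step

def sample_history():
    """Random power trajectory obeying capacity and ramp constraints."""
    p = np.empty(n_steps)
    p[0] = rng.uniform(p_min, p_max)
    for k in range(1, n_steps):
        lo = max(p_min, p[k - 1] - ramp)   # ramp-down limit
        hi = min(p_max, p[k - 1] + ramp)   # ramp-up limit
        p[k] = rng.uniform(lo, hi)
    return p

def profit(p, fuel_cost=20.0):            # $/MWh marginal cost, assumed
    return float(((price - fuel_cost) * p).sum())

histories = [sample_history() for _ in range(n_samples)]
best = max(histories, key=profit)          # initial guess handed to the optimizer
```

Because every sampled history already satisfies the ramp constraints, the selected best history is a feasible starting point, which is exactly what a gradient-based dispatch optimizer needs to converge quickly.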
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume- and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions that produced IMRT plans satisfying the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, results for a prostate case are also presented.
For both dose-volume- and EUD-based objective functions, Newton's method far outperforms the other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but with different objective function parameters, the solution frequently gets trapped in local minima. We found that an initial intensity distribution obtained from IMRT optimization using objective function parameters that favor a specific anatomic structure leads to a local minimum corresponding to that structure. Our results indicate that, among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity-modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting from the results of the previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
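The diagonal-Hessian Newton update at the core of the fastest algorithm above can be sketched on a simple separable objective. This is a minimal illustration under invented numbers, not the paper's clinical dose-volume or EUD objective: for a weighted quadratic penalty the diagonal approximation is exact, which makes the speed advantage easy to see.

```python
import numpy as np

# Diagonal-Hessian Newton's method for f(x) = sum_i w_i * (x_i - d_i)^2.
# Weights w and targets d are illustrative stand-ins for per-voxel penalty
# weights and prescribed doses; the real objective is far more complex.
w = np.array([1.0, 4.0, 0.5, 2.0])   # per-term penalty weights (assumed)
d = np.array([2.0, -1.0, 0.3, 5.0])  # per-term targets (assumed)

def grad(x):
    return 2.0 * w * (x - d)

def hess_diag(x):
    # Exact Hessian diagonal for this separable quadratic; for dose-volume
    # or EUD objectives this would only be an approximation.
    return 2.0 * w

x = np.zeros(4)                       # uniform initial guess
for _ in range(5):
    x -= grad(x) / hess_diag(x)       # Newton step with diagonal Hessian

print(x)  # equals d: a separable quadratic converges in one step
```

For non-quadratic objectives the diagonal approximation no longer gives one-step convergence, but each iteration stays as cheap as steepest descent while rescaling the step per variable, which is where the reported speedup comes from.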
Unified Lambert Tool for Massively Parallel Applications in Space Situational Awareness
NASA Astrophysics Data System (ADS)
Woollands, Robyn M.; Read, Julie; Hernandez, Kevin; Probe, Austin; Junkins, John L.
2018-03-01
This paper introduces a parallel-compiled tool that combines several of our recently developed methods for solving the perturbed Lambert problem using modified Chebyshev-Picard iteration. This tool (the unified Lambert tool) consists of four individual algorithms, each of which is unique and better suited for solving a particular type of orbit transfer. The first is a Keplerian Lambert solver, which is used to provide a good initial guess (warm start) for solving the perturbed problem. It is also used to determine the appropriate algorithm to call for solving the perturbed problem. The arc length or true anomaly angle spanned by the transfer trajectory is the parameter that governs the automated selection of the appropriate perturbed algorithm, based on the respective algorithms' convergence characteristics. The second algorithm solves the perturbed Lambert problem using the modified Chebyshev-Picard iteration two-point boundary value solver. This algorithm does not require a Newton-like shooting method and is the most efficient of the perturbed solvers presented herein; however, its domain of convergence is limited to about a third of an orbit and depends on eccentricity. The third algorithm extends the domain of convergence of the modified Chebyshev-Picard iteration two-point boundary value solver to about 90% of an orbit, through regularization with the Kustaanheimo-Stiefel transformation. This is the second most efficient of the perturbed algorithms. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver for solving multiple-revolution perturbed transfers. This method does require "shooting," but it differs from Newton-like shooting methods in that it does not require propagation of a state transition matrix.
The unified Lambert tool makes use of the General Mission Analysis Tool, and we use it to compute thousands of perturbed Lambert trajectories in parallel on the Space Situational Awareness computer cluster at the LASR Lab, Texas A&M University. We demonstrate the power of our tool by solving a highly parallel example problem: the generation of extremal field maps for optimal spacecraft rendezvous (and eventual orbit debris removal). In addition, we demonstrate the need to include perturbative effects in simulations for satellite tracking and data association. The unified Lambert tool is ideal for, but not limited to, space situational awareness applications.
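The Picard iteration underlying the solvers above can be illustrated on a scalar problem. This is a bare-bones sketch, not the paper's method: modified Chebyshev-Picard iteration accelerates the fixed-point form x_{k+1}(t) = x0 + integral of f(x_k) with Chebyshev polynomial approximation, whereas this toy applies the raw iteration with trapezoidal quadrature on a uniform grid, for the IVP x' = -x, x(0) = 1, whose exact solution is exp(-t).

```python
import numpy as np

# Raw Picard iteration on [0, 1] for x' = -x, x(0) = 1.
# Each sweep replaces the whole solution history at once, which is what
# makes the approach attractive for parallel evaluation.
t = np.linspace(0.0, 1.0, 201)
x = np.ones_like(t)                       # initial guess: constant history
for _ in range(25):
    integrand = -x                        # f(x) = -x evaluated on the grid
    # Cumulative trapezoidal integral from 0 to each grid point
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    x = 1.0 + integral                    # x_{k+1}(t) = x0 + \int_0^t f(x_k)

err = abs(x[-1] - np.exp(-1.0))
print(err < 1e-3)  # True: the iteration converges on this interval
```

The contraction factor of Picard iteration shrinks like (LT)^k / k!, which is also why the abstract's solvers see their domain of convergence limited to a fraction of an orbit: a longer arc means a larger effective LT.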
Mystery Boxes: Helping Children Improve Their Reasoning
ERIC Educational Resources Information Center
Rule, Audrey C.
2007-01-01
This guest editorial describes ways teachers can use guessing games about an unknown item in a "mystery box" to help children improve their abilities to listen to others, recall information, ask purposeful questions, classify items into categories, make inferences, synthesize information, and draw conclusions. The author presents information…
ERIC Educational Resources Information Center
Schmidt, Pamela; Chadde, Joan Schumaker; Buenzli, Michael
2003-01-01
Insects can be useful for investigations because they are numerous, relatively easy to find, and fascinating to students. Most elementary students have limited understandings of what exactly becomes of insects during the winter, often guessing that insects must "go to sleep" or "they just die." In this winter activity, students learn about insect…
Producibility Engineering and Planning (PEP)
1977-01-01
Materiel System, May 1976. c. Cesare Raimondi, "Estimating Drafting Time - Art, Science, Guesswork," Machine Design, 7 September 1972. d. Current Wage...