Simulations of Coulomb systems with slab geometry using an efficient 3D Ewald summation method
NASA Astrophysics Data System (ADS)
dos Santos, Alexandre P.; Girotto, Matheus; Levin, Yan
2016-04-01
We present a new approach to efficiently simulate electrolytes confined between infinite charged walls using a 3D Ewald summation method. Optimal performance is achieved by separating the electrostatic potential produced by the charged walls from the electrostatic potential of the electrolyte. The electric field produced by the 3D periodic images of the walls is constant inside the simulation cell, with the field produced by the transverse images of the charged plates canceling out. The non-neutral confined electrolyte in an external potential can be simulated using 3D Ewald summation with a suitable renormalization of the electrostatic energy, to remove a divergence, and a correction that accounts for the conditional convergence of the resulting lattice sum. The new algorithm is at least an order of magnitude faster than the usual simulation methods for the slab geometry and can be sped up further by adopting a particle-particle particle-mesh approach.
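As background for this and several of the abstracts below, a minimal sketch of conventional 3D Ewald summation for point charges may be helpful. It recovers the Madelung constant of rock-salt NaCl (nearest-neighbor distance 1) from the usual real-space, reciprocal-space, and self-energy parts. This is the standard textbook scheme only, not the authors' slab-corrected variant; the cell, splitting parameter, and cutoffs are illustrative choices.

```python
# Minimal 3D Ewald summation, illustrated on rock-salt NaCl.
import math
from itertools import product

A = 2.0            # conventional cubic cell, 8 ions, nearest-neighbor distance 1
V = A ** 3
ions = []          # (position, charge)
for x, y, z in product(range(2), repeat=3):
    q = 1.0 if (x + y + z) % 2 == 0 else -1.0
    ions.append(((float(x), float(y), float(z)), q))

alpha = 1.0        # Gaussian splitting parameter; cutoffs below match this choice

# Real-space part: erfc-damped Coulomb sum over nearby periodic images.
e_real = 0.0
for nx, ny, nz in product(range(-2, 3), repeat=3):
    for ri, qi in ions:
        for rj, qj in ions:
            dx = ri[0] - rj[0] + A * nx
            dy = ri[1] - rj[1] + A * ny
            dz = ri[2] - rj[2] + A * nz
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            if r > 1e-12:   # skip the i == j, zero-image self term
                e_real += 0.5 * qi * qj * math.erfc(alpha * r) / r

# Reciprocal-space part: Gaussian-screened structure-factor sum.
e_recip = 0.0
for mx, my, mz in product(range(-4, 5), repeat=3):
    if mx == my == mz == 0:
        continue
    kx, ky, kz = (2 * math.pi / A * m for m in (mx, my, mz))
    k2 = kx * kx + ky * ky + kz * kz
    re = sum(q * math.cos(kx * r[0] + ky * r[1] + kz * r[2]) for r, q in ions)
    im = sum(q * math.sin(kx * r[0] + ky * r[1] + kz * r[2]) for r, q in ions)
    e_recip += (2 * math.pi / (V * k2)) * math.exp(-k2 / (4 * alpha ** 2)) * (re * re + im * im)

# Self-energy correction.
e_self = -alpha / math.sqrt(math.pi) * sum(q * q for _, q in ions)

madelung = -(e_real + e_recip + e_self) / 4   # 4 ion pairs per cell
print(madelung)   # ~1.747565 (NaCl Madelung constant)
```

The slab-geometry methods discussed in the abstract modify exactly this machinery, since a naive 3D sum imposes spurious periodicity normal to the walls.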
Ewald method for polytropic potentials in arbitrary dimensionality
NASA Astrophysics Data System (ADS)
Osychenko, O. N.; Astrakharchik, G. E.; Boronat, J.
2012-02-01
The Ewald summation technique is generalized to power-law 1/|r|^k potentials in three-, two-, and one-dimensional geometries, with explicit formulae for all components of the sums. The cases of short-range, long-range, and 'marginal' interactions are treated separately. The jellium model, as a particular case of a charge-neutral system, is discussed and the explicit forms of the Ewald sums for such a system are presented. A generalized form of the Ewald sums for a non-cubic (non-square) simulation cell in three- (two-) dimensional geometry is obtained and its possible field of application is discussed. A procedure for optimizing the involved parameters in actual simulations is developed and an example of its application is presented.
Ewald Electrostatics for Mixtures of Point and Continuous Line Charges.
Antila, Hanne S; Van Tassel, Paul R; Sammalkorpi, Maria
2015-10-15
Many charged macro- or supramolecular systems, such as DNA, are approximately rod-shaped and, to the lowest order, may be treated as continuous line charges. However, the standard method used to calculate electrostatics in molecular simulation, the Ewald summation, is designed to treat systems of point charges. We extend the Ewald concept to a hybrid system containing both point charges and continuous line charges. We find the calculated force between a point charge and (i) a continuous line charge and (ii) a discrete line charge consisting of uniformly spaced point charges to be numerically equivalent when the separation greatly exceeds the discretization length. At shorter separations, discretization induces deviations in the force and energy, and point charge-point charge correlation effects. Because significant computational savings are also possible, the continuous line charge Ewald method presented here offers the possibility of accurate and efficient electrostatic calculations.
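The limit discussed in this abstract, where a discretized line charge becomes indistinguishable from a continuous one at separations well beyond the discretization length, can be checked directly for a single finite segment. This sanity check is not the authors' Ewald-space derivation; the segment length, density, and field point are made-up values.

```python
# Potential of a uniformly charged finite line segment at perpendicular
# distance d from its center, versus a discretization into point charges.
import math

L = 10.0        # segment length, along z, centered at the origin
lam = 1.0       # linear charge density (Gaussian units)

def phi_continuous(d):
    # phi(d) = lam * integral dz / sqrt(d^2 + z^2) over z in [-L/2, L/2]
    return 2.0 * lam * math.asinh(L / (2.0 * d))

def phi_discrete(d, n):
    # n point charges of magnitude lam*L/n placed at the segment midpoints
    dq = lam * L / n
    h = L / n
    return sum(dq / math.hypot(d, -L / 2 + (i + 0.5) * h) for i in range(n))

# At separations well beyond the spacing h = L/n the two agree closely,
# and the discrepancy shrinks as the discretization is refined.
d = 5.0
print(abs(phi_discrete(d, 100) - phi_continuous(d)))
```

Refining n by a factor of 4 reduces the midpoint-rule discrepancy by roughly a factor of 16, consistent with the abstract's observation that deviations appear only at short separations.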
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojeda-May, Pedro; Pu, Jingzhi, E-mail: jpu@iupui.edu
2015-11-07
The Wolf summation approach [D. Wolf et al., J. Chem. Phys. 110, 8254 (1999)], in the damped shifted force (DSF) formalism [C. J. Fennell and J. D. Gezelter, J. Chem. Phys. 124, 234104 (2006)], is extended for treating electrostatics in combined quantum mechanical and molecular mechanical (QM/MM) molecular dynamics simulations. In this development, we split the QM/MM electrostatic potential energy function into the conventional Coulomb r⁻¹ term and a term that contains the DSF contribution. The former is handled by the standard machinery of cutoff-based QM/MM simulations, whereas the latter is incorporated into the QM/MM interaction Hamiltonian as a Fock matrix correction. We tested the resulting QM/MM-DSF method for two solution-phase reactions, i.e., the association of ammonium and chloride ions and a symmetric SN2 reaction in which a methyl group is exchanged between two chloride ions. The performance of the QM/MM-DSF method was assessed by comparing the potential of mean force (PMF) profiles with those from the QM/MM-Ewald and QM/MM-isotropic periodic sum (IPS) methods, both of which include long-range electrostatics explicitly. For ion association, the QM/MM-DSF method successfully eliminates the artificial free energy drift observed in the QM/MM-Cutoff simulations, in remarkable agreement with the two long-range-containing methods. For the SN2 reaction, the free energy of activation obtained by the QM/MM-DSF method agrees well with both the QM/MM-Ewald and QM/MM-IPS results. The latter, however, requires a greater cutoff distance than QM/MM-DSF for proper convergence of the PMF. Avoiding time-consuming lattice summation, the QM/MM-DSF method yields a 55% reduction in computational cost compared with the QM/MM-Ewald method. These results suggest that, in addition to QM/MM-IPS, the QM/MM-DSF method may serve as another efficient and accurate alternative to QM/MM-Ewald for treating electrostatics in condensed-phase simulations of chemical reactions.
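The DSF pair potential cited in the record above (Fennell & Gezelter) is simple enough to write down in full: an erfc-damped Coulomb term, shifted so that both the energy and the force vanish exactly at the cutoff, which is what makes a plain cutoff-based implementation well behaved. A minimal sketch, with arbitrary parameter values:

```python
# Damped shifted force (DSF) pair potential: energy and force both go
# to zero at the cutoff Rc by construction.
import math

ALPHA = 0.2   # damping parameter (illustrative value)
RC = 9.0      # cutoff radius (illustrative value)

def v_dsf(qi, qj, r):
    """DSF pair energy for charges qi, qj at separation r < Rc."""
    shift = math.erfc(ALPHA * RC) / RC
    slope = (math.erfc(ALPHA * RC) / RC ** 2
             + 2.0 * ALPHA / math.sqrt(math.pi)
             * math.exp(-(ALPHA * RC) ** 2) / RC)
    return qi * qj * (math.erfc(ALPHA * r) / r - shift + slope * (r - RC))

# Both the energy and its numerical derivative vanish at the cutoff:
h = 1e-5
print(v_dsf(1.0, -1.0, RC))                                    # ~0
print((v_dsf(1.0, -1.0, RC) - v_dsf(1.0, -1.0, RC - h)) / h)   # ~0
```

The continuity of the force at Rc is the distinction between DSF and the plain shifted-potential (energy-only) truncation, and is why DSF is usable in molecular dynamics.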
A Method to Compute Periodic Sums
2013-10-15
the absolute performance of the present methods with the smooth particle mesh Ewald (SPME) and other algorithms for periodic summation due to a...can be done using published data [14] comparing performance of the SPME and FMM-type PWA implementation for clusters, for relatively small size
NASA Astrophysics Data System (ADS)
Yao, Yuan; Capecelatro, Jesse
2018-03-01
We present a numerical study on inertial electrically charged particles suspended in a turbulent carrier phase. Fluid-particle interactions are accounted for in an Eulerian-Lagrangian (EL) framework and coupled to a Fourier-based Ewald summation method, referred to as the particle-particle-particle-mesh (P3M) method, to accurately capture short- and long-range electrostatic forces in a tractable manner. The EL P3M method is used to assess the competition between drag and Coulomb forces for a range of Stokes numbers and charge densities. Simulations of like- and oppositely charged particles suspended in a two-dimensional Taylor-Green vortex and three-dimensional homogeneous isotropic turbulence are reported. It is found that even in dilute suspensions, the short-range electric potential plays an important role in flows that admit preferential concentration. Suspensions of oppositely charged particles are observed to agglomerate in the form of chains and rings. Comparisons between the particle-mesh method typically employed in fluid-particle calculations and P3M are reported, in addition to one-point and two-point statistics to quantify the level of clustering as a function of Reynolds number, Stokes number, and nondimensional electric settling velocity.
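The "mesh" half of any particle-mesh or P3M scheme begins with charge assignment: spreading each particle's charge onto nearby grid nodes with weights that conserve total charge. A minimal one-dimensional cloud-in-cell (CIC) sketch, with made-up grid size and particle data (not the authors' 3D implementation):

```python
# Cloud-in-cell (CIC) charge assignment on a periodic 1D grid: each
# particle's charge is split linearly between its two nearest nodes.
import random

N = 16                      # periodic grid points, spacing 1.0
grid = [0.0] * N

def deposit(x, q):
    """Spread charge q at position x onto the two nearest nodes."""
    i = int(x) % N
    frac = x - int(x)       # distance past the lower node
    grid[i] += q * (1.0 - frac)
    grid[(i + 1) % N] += q * frac

random.seed(0)
charges = [(random.uniform(0, N), random.choice([-1.0, 1.0])) for _ in range(50)]
for x, q in charges:
    deposit(x, q)

# CIC conserves total charge exactly (up to rounding).
print(sum(grid), sum(q for _, q in charges))
```

The deposited mesh density is then Fourier-transformed to solve for the long-range field, while short-range particle-particle corrections are added in real space.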
Chialvo, Ariel A.; Vlcek, Lukas
2014-11-01
We present a detailed derivation of the complete set of expressions required for the implementation of an Ewald summation approach to handle the long-range electrostatic interactions of polar and ionic model systems involving Gaussian charges and induced dipole moments, with a particular application to the isobaric-isothermal molecular dynamics simulation of our Gaussian Charge Polarizable (GCP) water model and its extension to aqueous electrolyte solutions. The set comprises the individual components of the potential energy, electrostatic potential, electrostatic field and gradient, the electrostatic force, and the corresponding virial. Moreover, we show how the derived expressions converge to known point-based electrostatic counterparts when the parameters defining the Gaussian charge and induced-dipole distributions are extrapolated to their limiting point values. Finally, we illustrate the Ewald implementation against the current reaction field approach by isothermal-isobaric molecular dynamics of ambient GCP water, for which we compare the outcomes of the thermodynamic, microstructural, and polarization behavior.
Fukuda, Ikuo
2013-11-07
The zero-multipole summation method has been developed to efficiently evaluate the electrostatic Coulombic interactions of a point charge system. This summation prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large amounts of energetic noise and significant artifacts. The resulting energy function is represented by a constant term plus a simple pairwise summation, using a damped or undamped Coulombic pair potential function along with a polynomial of the distance between each particle pair. Thus, the implementation is straightforward and enables facile applications to high-performance computations. Any higher-order multipole moment can be taken into account in the neutrality principle, and it only affects the degree and coefficients of the polynomial and the constant term. The lowest and second moments correspond respectively to the Wolf zero-charge scheme and the zero-dipole summation scheme, which was previously proposed. Relationships with other non-Ewald methods are discussed, to validate the current method in their contexts. Good numerical efficiencies were easily obtained in the evaluation of Madelung constants of sodium chloride and cesium chloride crystals.
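The abstract above mentions validating against Madelung constants of ionic crystals. The lowest-order member of the zero-multipole family (the Wolf zero-charge / shifted-potential scheme) can be sketched directly: a pairwise erfc-damped, cutoff-shifted sum plus a constant self term, with no reciprocal-space part at all. The higher-order zero-multipole polynomials of the paper are not reproduced here; the damping and cutoff values are illustrative.

```python
# Wolf-type (zero-charge) estimate of the NaCl Madelung constant from a
# damped, shifted, cutoff-truncated pairwise sum.
import math
from itertools import product

ALPHA, RC = 0.35, 10.0      # damping and cutoff, alpha * Rc = 3.5
shift = math.erfc(ALPHA * RC) / RC

# Rock-salt lattice, nearest-neighbor distance 1; reference ion (+1) at origin.
e_pair = 0.0
r_int = int(RC) + 1
for x, y, z in product(range(-r_int, r_int + 1), repeat=3):
    if x == y == z == 0:
        continue
    r = math.sqrt(x * x + y * y + z * z)
    if r >= RC:
        continue
    qj = 1.0 if (x + y + z) % 2 == 0 else -1.0
    e_pair += 0.5 * qj * (math.erfc(ALPHA * r) / r - shift)

# Constant self term of the shifted-potential scheme.
e_self = -(shift / 2.0 + ALPHA / math.sqrt(math.pi))

madelung = -2.0 * (e_pair + e_self)   # per-ion energy is -M/2 in these units
print(madelung)   # ~1.7476
```

No Fourier-space sum appears anywhere, which is the practical appeal of the whole non-Ewald family discussed in this abstract.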
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Zachary C.; Richard, Ryan M.; Herbert, John M., E-mail: herbert@chemistry.ohio-state.edu
2013-12-28
An implementation of Ewald summation for use in mixed quantum mechanics/molecular mechanics (QM/MM) calculations is presented, which builds upon previous work by others that was limited to semi-empirical electronic structure for the QM region. Unlike previous work, our implementation describes the wave function's periodic images using “ChElPG” atomic charges, which are determined by fitting to the QM electrostatic potential evaluated on a real-space grid. This implementation is stable even for large Gaussian basis sets with diffuse exponents, and is thus appropriate when the QM region is described by a correlated wave function. Derivatives of the ChElPG charges with respect to the QM density matrix are a potentially serious bottleneck in this approach, so we introduce a ChElPG algorithm based on atom-centered Lebedev grids. The ChElPG charges thus obtained exhibit good rotational invariance even for sparse grids, enabling significant cost savings. Detailed analysis of the optimal choice of user-selected Ewald parameters, as well as timing breakdowns, is presented.
Electrostatic interactions in soft particle systems: mesoscale simulations of ionic liquids.
Wang, Yong-Lei; Zhu, You-Liang; Lu, Zhong-Yuan; Laaksonen, Aatto
2018-05-21
Computer simulations provide a unique insight into the microscopic details, molecular interactions and dynamic behavior responsible for many distinct physicochemical properties of ionic liquids. Due to the sluggish and heterogeneous dynamics and the long-ranged nanostructured nature of ionic liquids, coarse-grained mesoscale simulations provide an indispensable complement to detailed first-principles calculations and atomistic simulations, allowing studies over extended length and time scales with a modest computational cost. Here, we present extensive coarse-grained simulations on a series of ionic liquids of the 1-alkyl-3-methylimidazolium (alkyl = butyl, heptyl, and decyl) family with Cl, [BF4], and [PF6] counterions. Liquid densities, microstructures, translational diffusion coefficients, and re-orientational motion of these model ionic liquid systems have been systematically studied over a wide temperature range. The addition of neutral beads in cationic models leads to a transition of liquid morphologies from dispersed apolar beads in a polar framework to that characterized by bi-continuous sponge-like interpenetrating networks in liquid matrices. Translational diffusion coefficients of both cations and anions decrease upon lengthening of the neutral chains in the cationic models and by enlarging molecular sizes of the anionic groups. Similar features are observed in re-orientational motion and time scales of different cationic models within the studied temperature range. The comparison of the liquid properties of the ionic systems with their neutral counterparts indicates that the distinctive microstructures and dynamical quantities of the model ionic liquid systems are intrinsically related to Coulombic interactions.
Finally, we compared the computational efficiencies of three O(N log N) Ewald summation methods for calculating electrostatic interactions: the particle-particle particle-mesh method, the particle-mesh Ewald summation method, and an Ewald summation method based on a non-uniform fast Fourier transform technique. Coarse-grained simulations were performed using the GALAMOST and GROMACS packages, efficiently utilizing graphics processing units, on a set of extended [1-decyl-3-methylimidazolium][BF4] ionic liquid systems of up to 131 072 ion pairs.
Efficient calculation of many-body induced electrostatics in molecular systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLaughlin, Keith, E-mail: kmclaugh@mail.usf.edu; Cioce, Christian R.; Pham, Tony
Potential energy functions including many-body polarization are in widespread use in simulations of aqueous and biological systems, metal-organics, molecular clusters, and other systems where electronically induced redistribution of charge among local atomic sites is of importance. The polarization interactions, treated here via the methods of Thole and Applequist, while long-ranged, can be computed for moderate-sized periodic systems with extremely high accuracy by extending Ewald summation to the induced fields, as demonstrated by Nymand, Sala, and others. These full Ewald polarization calculations, however, are expensive and often limited to very small systems, particularly in Monte Carlo simulations, which may require energy evaluation over several hundred-thousand configurations. For such situations, it shall be shown that sufficiently accurate computation of the polarization energy can be produced in a fraction of the central processing unit (CPU) time by neglecting the long-range extension to the induced fields while applying the long-range treatments of Ewald or Wolf to the static fields. These methods, denoted Ewald E-Static and Wolf E-Static (WES), respectively, provide an effective means to obtain polarization energies for intermediate and large systems, including those with several thousand polarizable sites, in a fraction of the CPU time. Furthermore, we demonstrate a means to optimize the damping for WES calculations via extrapolation from smaller trial systems.
Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
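The kernel splitting at the heart of multilevel summation can be illustrated in a few lines: 1/r is written as a short-range part that vanishes beyond a cutoff plus a smooth remainder suitable for grid interpolation. The softening polynomial below is one common choice (an even Taylor approximation of 1/r about the cutoff); the B-spline interpolation machinery of the paper is not reproduced.

```python
# Smooth/short-range splitting of the Coulomb kernel, the core idea of
# the multilevel summation method.
def smooth_part(r, a):
    """Smooth, slowly varying approximation of 1/r (exact for r >= a)."""
    if r >= a:
        return 1.0 / r
    s = (r / a) ** 2
    # 2nd-order Taylor expansion of 1/sqrt(s) about s = 1, divided by a.
    return (15.0 / 8.0 - 5.0 / 4.0 * s + 3.0 / 8.0 * s * s) / a

def short_part(r, a):
    """Remainder: exact 1/r minus the smooth part; zero for r >= a."""
    return 1.0 / r - smooth_part(r, a)

a = 4.0
print(short_part(5.0, a))                          # exactly 0 beyond the cutoff
print(short_part(1.0, a) + smooth_part(1.0, a))    # recovers 1/r = 1.0
```

The short part is summed directly over nearby pairs, while the smooth part, having bounded derivatives, is interpolated from increasingly coarse grids, which is where the linear scaling and the continuous gradients come from.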
Baker, John A; Hirst, Jonathan D
2014-01-01
Traditionally, electrostatic interactions are modelled using Ewald techniques, which provide a good approximation but are poorly suited to GPU architectures. We use the GPU versions of the LAMMPS MD package to implement and assess the Wolf summation method. We compute transport and structural properties of pure carbon dioxide and mixtures of carbon dioxide with either methane or difluoromethane. The diffusion of pure carbon dioxide is indistinguishable when using the Wolf summation method instead of PPPM on GPUs. The optimum value of the potential damping parameter, α, is 0.075. We observe a decrease in accuracy as the system polarity increases, yet the method is robust for mildly polar systems. We anticipate that the method can be used with a number of techniques and applied to a variety of systems. Substituting it for PPPM can yield a two-fold decrease in wall-clock time.
Kasahara, Kota; Ma, Benson; Goto, Kota; Dasgupta, Bhaskar; Higo, Junichi; Fukuda, Ikuo; Mashimo, Tadaaki; Akiyama, Yutaka; Nakamura, Haruki
2016-01-01
Molecular dynamics (MD) is a promising computational approach to investigate the dynamical behavior of molecular systems at the atomic level. Here, we present a new MD simulation engine named "myPresto/omegagene" that is tailored for enhanced conformational sampling methods with a non-Ewald electrostatic potential scheme. Our enhanced conformational sampling methods, e.g., the virtual-system-coupled multi-canonical MD (V-McMD) method, replace a multi-process parallelized run with multiple independent runs to avoid inter-node communication overhead. In addition, adopting the non-Ewald-based zero-multipole summation method (ZMM) makes it possible to eliminate the Fourier space calculations altogether. The combination of these state-of-the-art techniques realizes efficient and accurate calculations of the conformational ensemble at an equilibrium state. Taking these advantages together, myPresto/omegagene is specialized for single-process execution with a Graphics Processing Unit (GPU). We performed benchmark simulations for the 20-mer peptide, Trp-cage, with explicit solvent. One of the most thermodynamically stable conformations generated by the V-McMD simulation is very similar to an experimentally solved native conformation. Furthermore, the computation speed is four times faster than that of our previous simulation engine, myPresto/psygene-G. The new simulator, myPresto/omegagene, is freely available at the following URLs: http://www.protein.osaka-u.ac.jp/rcsfp/pi/omegagene/ and http://presto.protein.osaka-u.ac.jp/myPresto4/.
Automatic Generation of FFT for Translations of Multipole Expansions in Spherical Harmonics
Mirkovic, Dragan; Pettitt, B. Montgomery; Johnsson, S. Lennart
2009-01-01
The fast multipole method (FMM) is an efficient algorithm for calculating electrostatic interactions in molecular simulations and a promising alternative to Ewald summation methods. Translation of multipole expansions in spherical harmonics is the most important operation of the fast multipole method, and fast Fourier transform (FFT) acceleration of this operation is among the fastest ways of improving its performance. The technique relies on highly optimized implementations of fast Fourier transform routines for the desired expansion sizes, which need to incorporate knowledge of the symmetries and zero elements in the input arrays. Here a method is presented for the automatic generation of such highly optimized routines. PMID:19763233
Simulations of Coulomb systems confined by polarizable surfaces using periodic Green functions.
Dos Santos, Alexandre P; Girotto, Matheus; Levin, Yan
2017-11-14
We present an efficient approach for simulating Coulomb systems confined by planar polarizable surfaces. The method is based on the solution of the Poisson equation using periodic Green functions. It is shown that the electrostatic energy arising from the surface polarization can be decoupled from the energy due to the direct Coulomb interaction between the ions. This allows us to combine an efficient Ewald summation method, or any other fast method for summing over the replicas, with the polarization contribution calculated using Green function techniques. We apply the method to calculate density profiles of ions confined between the charged dielectric and metal surfaces.
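The simplest special case of the surface-polarization problem treated in this abstract is a single point charge above a grounded conducting plane, where the Green function solution reduces to one image charge. The sketch below only illustrates that image construction and its boundary condition; the paper itself handles periodic ion arrays and dielectric as well as metal surfaces.

```python
# Method of images: charge q at (0, 0, d) above a grounded plane z = 0
# is equivalent (for z > 0) to adding an image charge -q at (0, 0, -d).
import math

def phi(x, y, z, q=1.0, d=2.0):
    """Potential of charge q at (0,0,d) plus its image -q at (0,0,-d)."""
    r_src = math.sqrt(x * x + y * y + (z - d) ** 2)
    r_img = math.sqrt(x * x + y * y + (z + d) ** 2)
    return q / r_src - q / r_img

# On the conductor (z = 0) the potential vanishes identically, as the
# grounded-metal boundary condition requires.
print(phi(1.3, -0.7, 0.0))   # 0.0
# Interaction energy of the charge with its induced surface charge: -q^2/(4d)
print(-1.0 / (4 * 2.0))      # -0.125
```

Decoupling this polarization contribution from the direct ion-ion Coulomb sum is precisely what lets the authors pair a Green function treatment of the surfaces with a fast Ewald-type sum over replicas.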
Coulomb interactions in charged fluids.
Vernizzi, Graziano; Guerrero-García, Guillermo Iván; de la Cruz, Monica Olvera
2011-07-01
The use of Ewald summation schemes for calculating long-range Coulomb interactions, originally applied to ionic crystalline solids, is at present a very common practice in molecular simulations of charged fluids. Such a choice imposes an artificial periodicity which is generally absent in the liquid state. In this paper we propose a simple analytical O(N²) method, based on Gauss's law, for computing exactly the Coulomb interaction between charged particles in a simulation box when it is averaged over all possible orientations of a surrounding infinite lattice. This method mitigates the periodicity typical of crystalline systems and is suitable for numerical studies of ionic liquids, charged molecular fluids, and colloidal systems with Monte Carlo and molecular dynamics simulations.
ms2: A molecular simulation tool for thermodynamic properties, release 3.0
NASA Astrophysics Data System (ADS)
Rutkai, Gábor; Köster, Andreas; Guevara-Carrion, Gabriela; Janzen, Tatjana; Schappals, Michael; Glass, Colin W.; Bernreuther, Martin; Wafai, Amer; Stephan, Simon; Kohns, Maximilian; Reiser, Steffen; Deublein, Stephan; Horsch, Martin; Hasse, Hans; Vrabec, Jadran
2017-12-01
A new version release (3.0) of the molecular simulation tool ms2 (Deublein et al., 2011; Glass et al., 2014) is presented. Version 3.0 of ms2 features two additional ensembles, i.e., the microcanonical (NVE) and isobaric-isoenthalpic (NpH) ensembles, various Helmholtz energy derivatives in the NVE ensemble, thermodynamic integration as a method for calculating the chemical potential, the osmotic pressure for calculating the activity of solvents, the six Maxwell-Stefan diffusion coefficients of quaternary mixtures, statistics for sampling hydrogen bonds, smooth-particle mesh Ewald summation, as well as the ability to carry out molecular dynamics runs for an arbitrary number of state points in a single program execution.
Micromagnetic simulations with periodic boundary conditions: Hard-soft nanocomposites
Wysocki, Aleksander L.; Antropov, Vladimir P.
2016-12-01
Here, we developed a micromagnetic method for modeling magnetic systems with periodic boundary conditions along an arbitrary number of dimensions. The main feature is an adaptation of the Ewald summation technique for the evaluation of long-range dipolar interactions. The method was applied to investigate the hysteresis process in hard-soft magnetic nanocomposites with various geometries. The dependence of the results on different micromagnetic parameters was studied. We found that for layered structures with an out-of-plane hard-phase easy axis, the hysteretic properties are very sensitive to the strength of the interlayer exchange coupling, as long as the spontaneous magnetization of the hard phase is significantly smaller than that of the soft phase. The origin of this behavior is discussed. Additionally, we investigated the soft-phase size that optimizes the energy product of hard-soft nanocomposites.
NASA Astrophysics Data System (ADS)
Khoromskaia, Venera; Khoromskij, Boris N.
2014-12-01
Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on the assembled low-rank canonical tensor representations of the collected potentials using pointwise sums of shifted canonical vectors representing a single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N^3 log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of the canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks of the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar product with a function, integration or differentiation, which can be performed easily in tensor arithmetics on large 3D grids with 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
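The abstract above relies on the key property of the canonical tensor format: arithmetic on rank-R tensors costs only 1D operations. The following toy sketch (illustrative only, not the paper's assembled-summation algorithm; all names are hypothetical) shows a rank-R canonical 3D tensor stored as vector triples and an inner product computed entirely with 1D dot products, validated against a dense brute-force sum.

```python
# Toy illustration: a rank-R canonical tensor T[i,j,k] = sum_r a_r[i]*b_r[j]*c_r[k]
# stores 3*R*N numbers instead of N^3, and inner products reduce to 1D dot products.

def full_tensor(factors, N):
    """Expand a canonical tensor (list of (a, b, c) vector triples) to a dense array."""
    T = [[[0.0] * N for _ in range(N)] for _ in range(N)]
    for a, b, c in factors:
        for i in range(N):
            for j in range(N):
                for k in range(N):
                    T[i][j][k] += a[i] * b[j] * c[k]
    return T

def dot1d(u, v):
    return sum(x * y for x, y in zip(u, v))

def canonical_inner(A, B):
    """<A, B> computed with 1D operations only: cost O(R_A * R_B * N)."""
    return sum(dot1d(a, a2) * dot1d(b, b2) * dot1d(c, c2)
               for a, b, c in A for a2, b2, c2 in B)

N = 4
A = [([1.0 * i for i in range(N)], [0.5] * N, [1.0, -1.0, 2.0, 0.0]),
     ([1.0] * N, [1.0 * i * i for i in range(N)], [0.25] * N)]
B = [([2.0] * N, [1.0, 0.0, 1.0, 0.0], [1.0 * i for i in range(N)])]

# Brute-force inner product over the dense N^3 tensors for comparison.
TA, TB = full_tensor(A, N), full_tensor(B, N)
brute = sum(TA[i][j][k] * TB[i][j][k]
            for i in range(N) for j in range(N) for k in range(N))
fast = canonical_inner(A, B)
```

The same factorization trick is what lets the paper perform scalar products, integration, or differentiation of the lattice potential at 1D cost.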
Performance evaluation of the zero-multipole summation method in modern molecular dynamics software.
Sakuraba, Shun; Fukuda, Ikuo
2018-05-04
The zero-multipole summation method (ZMM) is a cutoff-based method for calculating electrostatic interactions in molecular dynamics simulations, utilizing an electrostatic neutralization principle as a physical basis. Since the accuracy of the ZMM has been shown to be sufficient in previous studies, it is highly desirable to clarify its practical performance. In this paper, the performance of the ZMM is compared with that of the smooth particle mesh Ewald method (SPME), where both methods are implemented in the molecular dynamics software package GROMACS. Extensive performance comparisons against a highly optimized, parameter-tuned SPME implementation are performed for various-sized water systems and two protein-water systems. We analyze in detail the dependence of the performance on the potential parameters and the number of CPU cores. Even though the ZMM uses a larger cutoff distance than the SPME does, the performance of the ZMM is comparable to or better than that of the SPME. This is because the ZMM does not require a time-consuming electrostatic convolution and because it gains short neighbor-list distances due to the smooth damping feature of the pairwise potential function near the cutoff length. We found, in particular, that the ZMM with quadrupole or octupole cancellation and no damping factor is an excellent candidate for the fast calculation of electrostatic interactions. © 2018 Wiley Periodicals, Inc.
Application of two dimensional periodic molecular dynamics to interfaces.
NASA Astrophysics Data System (ADS)
Gay, David H.; Slater, Ben; Catlow, C. Richard A.
1997-08-01
We have applied two-dimensional periodic molecular dynamics to the surface of crystalline aspartame and to the interface between the crystal face and a solvent (water). This has allowed us to look at the dynamic processes at the surface. Understanding the surface structure and properties is important for controlling the crystal morphology. The thermodynamic ensemble was constant Number, surface Area and Temperature (NAT). The calculations were carried out using a 2D Ewald summation and 2D periodic boundary conditions for the short-range potentials. The equations of motion were integrated using the standard velocity Verlet algorithm.
NASA Astrophysics Data System (ADS)
Ohta, Ayumi; Kobayashi, Osamu; Danielache, Sebastian O.; Nanbu, Shinkoh
2017-03-01
The ultra-fast photoisomerization reactions between 1,3-cyclohexadiene (CHD) and 1,3,5-cis-hexatriene (HT) in both hexane and ethanol solvents were revealed by nonadiabatic ab initio molecular dynamics (AI-MD) with a particle-mesh Ewald summation method and our Own N-layered Integrated molecular Orbital and molecular Mechanics model (PME-ONIOM) scheme. The Zhu-Nakamura version of the trajectory surface hopping method (ZN-TSH) was employed to treat the ultra-fast nonadiabatic decay process. The results of the hexane and ethanol simulations reasonably agree with experimental data. A high nonpolar-nonpolar affinity between CHD and the solvent was observed in hexane, which clearly affected the excited-state lifetimes, the CHD:HT product branching ratio, and the solute (CHD) dynamics. In ethanol, however, the CHD solute was isomerized within the solvent cage formed by the first solvation shell. The photochemical dynamics in ethanol is therefore similar to the process observed in vacuo (isolated CHD dynamics).
Ewald sums for Yukawa potentials in quasi-two-dimensional systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazars, Martial
2007-02-07
In this article, the author derives Ewald sums for the Yukawa potential for three-dimensional systems with two-dimensional periodicity. These sums are derived from the Ewald sums for Yukawa potentials with three-dimensional periodicity [G. Salin and J.-M. Caillol, J. Chem. Phys. 113, 10459 (2000)] by using the method proposed by Parry for Coulomb interactions [D. E. Parry, Surf. Sci. 49, 433 (1975); 54, 195 (1976)].
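Several records above take the classical 3D Ewald decomposition as their starting point. As a minimal sketch (standard 3D Coulomb Ewald in Gaussian units, not the quasi-2D Yukawa sums derived in this article; parameters are hand-picked for the test case), the real-space, reciprocal-space, and self-energy terms can be written directly and validated against the known NaCl Madelung constant:

```python
import itertools, math

def ewald_energy(positions, charges, L, alpha=2.0, nreal=1, nrecip=6):
    """Total electrostatic energy of a neutral periodic cubic cell of side L."""
    N, V = len(charges), L**3
    # Real-space part: erfc-screened interactions over nearby periodic images.
    e_real = 0.0
    for i in range(N):
        for j in range(N):
            for n in itertools.product(range(-nreal, nreal + 1), repeat=3):
                if i == j and n == (0, 0, 0):
                    continue
                d = [positions[i][k] - positions[j][k] + n[k] * L for k in range(3)]
                r = math.sqrt(sum(c * c for c in d))
                e_real += 0.5 * charges[i] * charges[j] * math.erfc(alpha * r) / r
    # Reciprocal-space part: Gaussian-smeared charges summed over k-vectors.
    e_recip = 0.0
    for m in itertools.product(range(-nrecip, nrecip + 1), repeat=3):
        if m == (0, 0, 0):
            continue
        k = [2.0 * math.pi * c / L for c in m]
        k2 = sum(c * c for c in k)
        re = im = 0.0
        for q, p in zip(charges, positions):
            phase = sum(k[d] * p[d] for d in range(3))
            re += q * math.cos(phase)
            im += q * math.sin(phase)
        e_recip += (2.0 * math.pi / V) * math.exp(-k2 / (4.0 * alpha**2)) / k2 * (re * re + im * im)
    # Self-energy correction for the smeared charges.
    e_self = -alpha / math.sqrt(math.pi) * sum(q * q for q in charges)
    return e_real + e_recip + e_self

# Validation: rock-salt (NaCl) cell of 8 unit charges, nearest-neighbor distance 1.
na = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
cl = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
energy = ewald_energy(na + cl, [1.0] * 4 + [-1.0] * 4, L=2.0)
madelung = -energy / 4.0  # energy per ion pair, in units of e^2/a; expect ~1.7476
```

The quasi-2D variants in this article replace the reciprocal-space term by sums adapted to periodicity in only two directions, but the screening/compensation structure is the same.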
Fiore, Andrew M; Swan, James W
2018-01-28
Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. 
The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material.
Multilevel summation method for electrostatic force evaluation.
Hardy, David J; Wu, Zhe; Phillips, James C; Stone, John E; Skeel, Robert D; Schulten, Klaus
2015-02-10
The multilevel summation method (MSM) offers an efficient algorithm utilizing convolution for evaluating long-range forces arising in molecular dynamics simulations. Shifting the balance of computation and communication, MSM provides key advantages over the ubiquitous particle–mesh Ewald (PME) method: better scaling on parallel computers and more modeling flexibility, supporting not only periodic systems, as PME does, but also semiperiodic and nonperiodic systems. The version of MSM available in the simulation program NAMD is described, and its performance and accuracy are compared with the PME method. The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is found also for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications also demonstrate the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation.
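At the heart of MSM is a smooth splitting of 1/r into a short-range part that vanishes beyond a cutoff and a slowly varying remainder that can be interpolated on grids. A minimal sketch of one such splitting (the standard even-polynomial softening; the cutoff value here is illustrative, not NAMD's default):

```python
# Two-level force splitting in the MSM spirit: 1/r = short(r) + long(r),
# where short(r) is zero beyond the cutoff a and long(r) is smooth everywhere.

def gamma(rho):
    """Even-polynomial softening of 1/rho inside the unit cutoff."""
    if rho >= 1.0:
        return 1.0 / rho
    s = rho * rho
    # (15 - 10*s + 3*s^2)/8 matches 1/rho and its first derivative at rho = 1.
    return (15.0 - 10.0 * s + 3.0 * s * s) / 8.0

def split(r, a):
    """Return (short, long) with 1/r = short + long; short vanishes for r >= a."""
    long_part = gamma(r / a) / a
    short_part = 1.0 / r - long_part
    return short_part, long_part
```

The short-range part is summed directly within the cutoff; the smooth long-range part is what MSM transfers to (and between) successively coarser grids.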
Wennberg, Christian L; Murtola, Teemu; Hess, Berk; Lindahl, Erik
2013-08-13
The accuracy of electrostatic interactions in molecular dynamics advanced tremendously with the introduction of particle-mesh Ewald (PME) summation almost 20 years ago. Lattice summation electrostatics is now the de facto standard for most types of biomolecular simulations, and in particular, for lipid bilayers, it has been a critical improvement due to the large charges typically present in zwitterionic lipid headgroups. In contrast, Lennard-Jones interactions have continued to be handled with increasingly longer cutoffs, partly because few alternatives have been available despite significant difficulties in tuning cutoffs and parameters to reproduce lipid properties. Here, we present a new Lennard-Jones PME implementation applied to lipid bilayers. We confirm that long-range contributions are well approximated by dispersion corrections in simple systems such as pentadecane (which makes parameters transferable), but for inhomogeneous and anisotropic systems such as lipid bilayers there are large effects on surface tension, resulting in up to 5.5% deviations in area per lipid and order parameters, far larger than many differences for which reparameterization has been attempted. We further propose an approximation for combination rules in reciprocal space that significantly reduces the computational cost of Lennard-Jones PME and makes accurate treatment of all nonbonded interactions competitive with simulations employing long cutoffs. These results could potentially have broad impact on important applications such as membrane proteins and free energy calculations.
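The "dispersion corrections" the abstract contrasts with Lennard-Jones PME are the standard analytic tail corrections for a homogeneous fluid. A sketch of the energy correction (textbook formula; the particle count, density, and cutoffs below are illustrative, not the paper's systems):

```python
import math

def lj_tail_energy(N, rho, epsilon, sigma, rc):
    """Standard long-range (tail) correction to the truncated Lennard-Jones
    energy of a homogeneous fluid of N particles at number density rho:
    U_tail = (8*pi*N*rho*eps*sigma^3/3) * [ (1/3)*(sigma/rc)^9 - (sigma/rc)^3 ].
    Valid only when g(r) ~ 1 beyond rc, i.e. for homogeneous systems."""
    x = sigma / rc
    return (8.0 * math.pi * N * rho * epsilon * sigma**3 / 3.0) * (x**9 / 3.0 - x**3)

# For a typical cutoff the correction is negative (attraction-dominated)
# and decays roughly as rc^-3, so doubling rc shrinks it about eightfold.
u_near = lj_tail_energy(1000, 0.8, 1.0, 1.0, 2.5)
u_far = lj_tail_energy(1000, 0.8, 1.0, 1.0, 5.0)
```

Because this formula assumes a uniform density beyond the cutoff, it fails for interfaces and bilayers, which is exactly the gap Lennard-Jones PME closes.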
A Graphics Processing Unit Implementation of Coulomb Interaction in Molecular Dynamics.
Jha, Prateek K; Sknepnek, Rastko; Guerrero-García, Guillermo Iván; Olvera de la Cruz, Monica
2010-10-12
We report a GPU implementation in HOOMD Blue of long-range electrostatic interactions based on the orientation-averaged Ewald sum scheme introduced by Yakub and Ronchi (J. Chem. Phys. 2003, 119, 11556). The performance of the method is compared to an optimized CPU version of the traditional Ewald sum available in LAMMPS, in molecular dynamics of electrolytes. Our GPU implementation is significantly faster than the CPU implementation of the Ewald method for small to sizable numbers of particles (∼10^5). Thermodynamic and structural properties of monovalent and divalent hydrated salts in the bulk are calculated for a wide range of ionic concentrations. Excellent agreement between the two methods was found at the level of electrostatic energy, heat capacity, radial distribution functions, and integrated charge of the electrolytes.
Chen, Yuntian; Zhang, Yan; Femius Koenderink, A
2017-09-04
We study semi-analytically the light emission and absorption properties of arbitrary stratified photonic structures with embedded two-dimensional magnetoelectric point scattering lattices, as used in recent plasmon-enhanced LEDs and solar cells. By employing the dyadic Green's function for the layered structure in combination with Ewald lattice summation to deal with the particle lattice, we develop an efficient method to study the coupling between planar 2D scattering lattices of plasmonic or metamaterial point particles and layered structures. Using the 'array scanning method' we deal with localized sources. Firstly, we apply our method to light emission enhancement of dipole emitters in slab waveguides, mediated by plasmonic lattices. We benchmark the array scanning method against a reciprocity-based approach and find that the calculated radiative rate enhancement in k-space below the light cone shows excellent agreement. Secondly, we apply our method to study absorption enhancement in thin-film solar cells mediated by periodic Ag nanoparticle arrays. Lastly, we study the emission distribution in k-space of a coupled waveguide-lattice system. In particular, we explore dark mode excitation on the plasmonic lattice using the array scanning method. Our method could be useful for simulating a broad range of complex nanophotonic structures, e.g., metasurfaces, plasmon-enhanced light-emitting systems, and photovoltaics.
NASA Astrophysics Data System (ADS)
Suenaga, A.; Yatsu, C.; Komeiji, Y.; Uebayasi, M.; Meguro, T.; Yamato, I.
2000-08-01
Molecular dynamics simulation of the Escherichia coli trp-repressor/operator complex was performed for 800 ps on the special-purpose computer MD-GRAPE to elucidate protein-DNA interactions in solution. The Ewald summation method was employed to treat the electrostatic interactions without cutoff. The DNA kept a stable conformation in comparison with the result of the conventional cutoff method. The trajectories obtained were thus used to analyze the protein-DNA interaction and to understand the role of the dynamics of water molecules forming the sequence-specific recognition interface. The dynamical cross-correlation map showed a significant positive correlation between the helix-turn-helix DNA-binding motifs and the major grooves of the operator DNA. The extensive contact surface was stable during the simulation. Most of the contacts consisted of direct interactions between phosphates of the DNA and the protein, but several water-mediated polar contacts were also observed. These water-mediated interactions, which were also seen in the crystal structure (Z. Otwinowski et al., Nature, 335 (1998) 321), emerged spontaneously from the randomized initial configuration of the solvent. This result suggests the importance of water-mediated interactions in the specific recognition of DNA by the trp-repressor, consistent with the X-ray structural information.
NASA Astrophysics Data System (ADS)
Hashim, N. A.; Mudalip, S. K. Abdul; Harun, N.; Che Man, R.; Sulaiman, S. Z.; Arshad, Z. I. M.; Shaarani, S. M.
2018-05-01
Mahkota Dewa (Phaleria macrocarpa), a good source of saponins, flavonoids, polyphenols, alkaloids, and mangiferin, has an extensive range of medicinal effects. Intermolecular interactions between solute and solvent, such as hydrogen bonding, are considered an important factor affecting the extraction of bioactive compounds. In this work, molecular dynamics simulation was performed to elucidate the hydrogen bonding that exists between Mahkota Dewa extracts and water during the subcritical extraction process. A bioactive compound in the Mahkota Dewa extract, namely mangiferin, was selected as a model compound. The simulation was performed at 373 K and 4.0 MPa using the COMPASS force field and the Ewald summation method available in the Material Studio 7.0 simulation package. The radial distribution functions (RDF) between mangiferin and water signify the presence of hydrogen bonding in the extraction process. The simulation of the binary mangiferin:water mixture shows that strong hydrogen bonds were formed. The intermolecular interaction OH2O···HMR4(OH1) is suggested to be responsible for the mangiferin extraction process.
Implementation of Magnetic Dipole Interaction in the Planewave-Basis Approach for Slab Systems
NASA Astrophysics Data System (ADS)
Oda, Tatsuki; Obata, Masao
2018-06-01
We implemented the magnetic dipole interaction (MDI) in a first-principles planewave-basis electronic structure calculation based on spin density functional theory. This implementation, employing the two-dimensional Ewald summation, enables us to obtain the total magnetic anisotropy energy of slab materials with contributions originating from both spin-orbit and magnetic dipole-dipole couplings on the same footing. The implementation was demonstrated using an iron square lattice. The result indicates that the magnetic anisotropy from the MDI is much smaller than that obtained from the atomic magnetic moment model, due to the prolate quadrupole component of the spin magnetic moment density. We discuss the reduction in the anisotropy of the MDI upon modulation of the quadrupole component, and the effect of the magnetic field arising from the MDI on the atomic scale.
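The "atomic magnetic moment model" the abstract compares against treats each site as a point dipole. As a minimal sketch (reduced units with the prefactor mu0/4pi set to 1; moments and separations are illustrative), the pairwise dipole-dipole energy being Ewald-summed is:

```python
import math

def dipole_dipole_energy(m1, m2, r):
    """Point magnetic dipole-dipole energy in reduced units (mu0/4pi = 1):
    E = [ m1.m2 - 3*(m1.rhat)*(m2.rhat) ] / |r|^3."""
    d = math.sqrt(sum(c * c for c in r))
    rhat = [c / d for c in r]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return (dot(m1, m2) - 3.0 * dot(m1, rhat) * dot(m2, rhat)) / d**3

# Two unit moments along z: head-to-tail (separation along z) is binding,
# side-by-side (separation along x) is repulsive.
e_chain = dipole_dipole_energy([0, 0, 1], [0, 0, 1], [0, 0, 2])  # (1 - 3)/8 = -0.25
e_side = dipole_dipole_energy([0, 0, 1], [0, 0, 1], [2, 0, 0])   # (1 - 0)/8 = 0.125
```

A slab calculation sums this slowly decaying interaction over a 2D-periodic lattice, which is why the two-dimensional Ewald summation mentioned in the abstract is needed.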
NASA Astrophysics Data System (ADS)
Dumitrica, Traian; Hourahine, Ben; Aradi, Balint; Frauenheim, Thomas
We discuss the coupling of the objective boundary conditions into the SCC density-functional-based tight-binding code DFTB+. The implementation is enabled by a generalization to the helical case of the classical Ewald method, specifically by Ewald-like formulas that do not rely on a unit cell with translational symmetry. The robustness of the method in addressing complex hetero-nuclear nano- and bio-fibrous systems is demonstrated with illustrative simulations on a helical boron nitride nanotube, a screw-dislocated zinc oxide nanowire, and an ideal double-strand DNA. Work supported by NSF CMMI 1332228.
Multipolar Ewald methods, 1: theory, accuracy, and performance.
Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M
2015-02-10
The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.
Wang, Han; Nakamura, Haruki; Fukuda, Ikuo
2016-03-21
We performed extensive and strict tests of the reliability of the zero-multipole (summation) method (ZMM), a method for estimating the electrostatic interactions among charged particles in a classical physical system, by investigating a set of various physical quantities. This set covers a broad range of water properties, including thermodynamic properties (pressure, excess chemical potential, constant volume/pressure heat capacity, isothermal compressibility, and thermal expansion coefficient), dielectric properties (dielectric constant and Kirkwood-G factor), dynamical properties (diffusion constant and viscosity), and a structural property (radial distribution function). We selected a bulk water system, the most important solvent, and applied the widely used TIP3P model in this test. The ZMM works well in almost all cases, compared with a carefully optimized smooth particle mesh Ewald (SPME) method. In particular, at a cutoff radius of 1.2 nm, the recommended choices of the ZMM parameters for the TIP3P system are α ≤ 1 nm^(-1) for the splitting parameter and l = 2 or l = 3 for the order of the multipole moment. We discuss the origin of the deviations of the ZMM and find that they are intimately related to the deviations of the equilibrated densities between the ZMM and SPME, although the magnitude of the density deviations is very small.
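The smooth damping near the cutoff that the ZMM papers emphasize is shared by the older damped shifted-force (Wolf-type) family of cutoff electrostatics. As a hedged sketch (this is the Wolf-type shifted-force construction, not the ZMM formula itself; alpha and rc below are arbitrary illustration values), the pair term is shifted so that both the potential and its derivative vanish at the cutoff:

```python
import math

def dsf_pair_potential(r, alpha, rc):
    """Damped shifted-force (Wolf-type) Coulomb pair term for unit charges
    (Gaussian units): v(r) = v0(r) - v0(rc) - v0'(rc)*(r - rc), v0(r) = erfc(alpha*r)/r.
    Both the potential and its derivative vanish smoothly at r = rc."""
    if r >= rc:
        return 0.0
    erfc_rc = math.erfc(alpha * rc)
    # v0'(rc): derivative of erfc(alpha*r)/r evaluated at the cutoff.
    dv_rc = -(erfc_rc / rc**2
              + 2.0 * alpha / math.sqrt(math.pi) * math.exp(-(alpha * rc) ** 2) / rc)
    return math.erfc(alpha * r) / r - erfc_rc / rc - dv_rc * (r - rc)

alpha, rc = 0.2, 10.0
```

Potentials of this kind need no reciprocal-space convolution at all, which is the same structural reason the abstract gives for the ZMM's favorable performance against SPME.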
Monte Carlo simulation of biomolecular systems with BIOMCSIM
NASA Astrophysics Data System (ADS)
Kamberaj, H.; Helms, V.
2001-12-01
A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights into and understanding of their functions. The computational complexity of Monte Carlo simulations is enormous for high-density systems with large molecules like proteins immersed in a solvent medium, or when simulating the dynamics of water molecules in a protein cavity. The program presented in this paper addresses these challenges, putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to increase the convergence of simulations, and periodic load balancing in its parallel version, to maximally utilize the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized, and implemented in an ANSI C dialect, so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and Cytochrome c Oxidase.
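The grand canonical ensemble the abstract emphasizes can be illustrated with the textbook insertion/deletion acceptance rules. The sketch below (an ideal gas with no interaction energy, so the Boltzmann factor drops out; purely illustrative and in no way BIOMCSIM's code) samples particle number N, whose stationary distribution is Poisson with mean z*V:

```python
import random

def gcmc_ideal_gas(z, V, steps, seed=1):
    """Grand canonical Monte Carlo for an ideal gas: accept an insertion with
    probability min(1, z*V/(N+1)) and a deletion with min(1, N/(z*V)), where
    z is the activity. Returns the average N after a burn-in period."""
    random.seed(seed)
    N, total, samples = 0, 0, 0
    for step in range(steps):
        if random.random() < 0.5:  # attempt insertion
            if random.random() < min(1.0, z * V / (N + 1)):
                N += 1
        else:                      # attempt deletion
            if N > 0 and random.random() < min(1.0, N / (z * V)):
                N -= 1
        if step >= steps // 5:     # discard the first 20% as burn-in
            total += N
            samples += 1
    return total / samples

mean_N = gcmc_ideal_gas(z=2.0, V=10.0, steps=100000)  # expect close to z*V = 20
```

For an interacting system, each acceptance ratio additionally carries exp(-beta*dU) for the trial insertion or deletion energy, which is where the Ewald summation mentioned above enters.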
Dissipative particle dynamics: Effects of thermostating schemes on nano-colloid electrophoresis
NASA Astrophysics Data System (ADS)
Hassanzadeh Afrouzi, Hamid; Moshfegh, Abouzar; Farhadi, Mousa; Sedighi, Kurosh
2018-05-01
A novel fully explicit approach using the dissipative particle dynamics (DPD) method is introduced in the present study to model the electrophoretic transport of nano-colloids in an electrolyte solution. A Slater-type charge smearing function included in the 3D Ewald summation method is employed to treat the electrostatic interactions. The performance of various thermostats is challenged to control the system temperature and to study the dynamic response of the colloidal electrophoretic mobility under practical ranges of external electric field (0.072 < E < 0.361 V/nm), covering the linear to non-linear response regimes, and ionic salt concentration (0.049 < SC < 0.69 M), covering weak to strong Debye screening of the colloid. System temperature and electrophoretic mobility show direct and inverse relationships, respectively, with electric field and colloidal repulsion, and direct and inverse trends, respectively, with salt concentration under the various thermostats. The Nosé-Hoover-Lowe-Andersen and Lowe-Andersen thermostats are found to function more effectively under high electric fields (E > 0.145 V/nm) while thermal equilibrium is maintained. Reasonable agreement is achieved by benchmarking the system radial distribution function against available EW3D modelling, as well as by comparing the reduced mobility against the conventional Smoluchowski and Hückel theories and a numerical solution of the Poisson-Boltzmann equation.
The local density of optical states of a metasurface
NASA Astrophysics Data System (ADS)
Lunnemann, Per; Koenderink, A. Femius
2016-02-01
While metamaterials are often desirable for near-field functions, such as perfect lensing or cloaking, they are usually quantified by their response to plane waves from the far field. Here, we present a theoretical analysis of the local density of states near lattices of discrete magnetic scatterers, i.e., the response to near-field excitation by a point source. Based on a point-dipole theory using Ewald summation and an array scanning method, we can swiftly and semi-analytically evaluate the local density of states (LDOS) for magnetoelectric point sources in front of an infinite two-dimensional (2D) lattice composed of arbitrary magnetoelectric dipole scatterers. The method takes into account radiation damping as well as all retarded electrodynamic interactions in a self-consistent manner. We show that a lattice of magnetic scatterers evidences characteristic Drexhage oscillations. However, the oscillations are phase shifted relative to those of an electrically scattering lattice, consistent with the difference expected for reflection off homogeneous magnetic and electric mirrors, respectively. Furthermore, we identify in which source-surface separation regimes the metasurface may be treated as a homogeneous interface, and in which homogenization fails. A strong frequency and in-plane position dependence of the LDOS close to the lattice reveals coupling to guided modes supported by the lattice.
Staggered Mesh Ewald: An extension of the Smooth Particle-Mesh Ewald method adding great versatility
Cerutti, David S.; Duke, Robert E.; Darden, Thomas A.; Lybrand, Terry P.
2009-01-01
We draw on an old technique for improving the accuracy of mesh-based field calculations to extend the popular Smooth Particle Mesh Ewald (SPME) algorithm as the Staggered Mesh Ewald (StME) algorithm. StME improves the accuracy of computed forces by up to 1.2 orders of magnitude and also reduces the drift in system momentum inherent in the SPME method by averaging the results of two separate reciprocal space calculations. StME can use charge mesh spacings roughly 1.5× larger than SPME to obtain comparable levels of accuracy; the one mesh in an SPME calculation can therefore be replaced with two separate meshes, each less than one third of the original size. Coarsening the charge mesh can be balanced with reductions in the direct space cutoff to optimize performance: the efficiency of StME rivals or exceeds that of SPME calculations with similarly optimized parameters. StME may also offer advantages for parallel molecular dynamics simulations because it permits the use of coarser meshes without requiring higher orders of charge interpolation and also because the two reciprocal space calculations can be run independently if that is most suitable for the machine architecture. We are planning other improvements to the standard SPME algorithm, and anticipate that StME will work synergistically with all of them to dramatically improve the efficiency and parallel scaling of molecular simulations. PMID:20174456
Golebiowski, Jérôme; Antonczak, Serge; Fernandez-Carmona, Juan; Condom, Roger; Cabrol-Bass, Daniel
2004-12-01
Nanosecond molecular dynamics simulations using the Ewald summation method have been performed to elucidate the structural and energetic role of the closing base pair in loop-loop RNA duplexes neutralized by Mg2+ counterions in aqueous phases. The mismatches GA and CU and the Watson-Crick GC base pair have been considered for closing the loop of an RNA in complementary interaction with HIV-1 TAR. The simulations reveal that the GA mismatch, mediated by a water molecule, leads to a complex that presents the best compromise between flexibility and energetic contributions. The CU mismatch, in spite of the presence of an inserted water molecule, is too short to achieve a tight interaction at the closing-loop junction and seems to force TAR to reorganize upon binding. An energetic analysis has allowed us to quantify the strength of the interactions of the closing and loop-loop pairs throughout the simulations. Although the water-mediated GA closing base pair presents an interaction energy similar to that found in the fully geometry-optimized structure, the interaction energy of the water-mediated CU closing base pair reaches less than half the optimal value.
Isele-Holder, Rolf E; Mitchell, Wayne; Ismail, Ahmed E
2012-11-07
For inhomogeneous systems with interfaces, the inclusion of long-range dispersion interactions is necessary to achieve consistency between molecular simulation calculations and experimental results. For accurate and efficient incorporation of these contributions, we have implemented a particle-particle particle-mesh Ewald solver for dispersion (r^-6) interactions into the LAMMPS molecular dynamics package. We demonstrate that the solver's O(N log N) scaling behavior allows its application to large-scale simulations. We carefully determine a set of parameters for the solver that provides accurate results and efficient computation. We perform a series of simulations with Lennard-Jones particles, SPC/E water, and hexane to show that with our choice of parameters the dependence of physical results on the chosen cutoff radius is removed. Physical results and computation time of these simulations are compared to results obtained using either a plain cutoff or a traditional Ewald sum for dispersion.
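For reference, the dispersion analogue of the erfc splitting used by such solvers damps 1/r^6 with a normalized upper incomplete gamma function, Γ(3, (αr)²)/Γ(3). The sketch below is a generic illustration of that splitting (the parameter values are arbitrary), with a numerical quadrature check of the closed form:

```python
from math import exp

def g_damp(y):
    """Γ(3, y)/Γ(3): the r^-6 analogue of erfc in the Ewald splitting.
    The real-space dispersion kernel is g_damp((alpha*r)**2) / r**6."""
    return (1.0 + y + 0.5 * y * y) * exp(-y)

def g_damp_quadrature(y, n=200000, tmax=60.0):
    """Independent check: Γ(3, y)/Γ(3) = (1/2) ∫_y^∞ t² e^{-t} dt (midpoint rule)."""
    h = (tmax - y) / n
    s = 0.0
    for i in range(n):
        t = y + (i + 0.5) * h
        s += t * t * exp(-t) * h
    return 0.5 * s

alpha, r = 1.0, 2.5                       # illustrative parameter and distance
y = (alpha * r) ** 2
real_kernel = g_damp(y) / r**6            # short-range, fast-decaying part
remainder = (1.0 - g_damp(y)) / r**6      # smooth part handled in reciprocal space
print(real_kernel + remainder - 1.0 / r**6)   # the split is exact by construction
```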
Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M
2015-09-15
A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119-fold increase in performance is achieved compared with the serial code, while the use of two GPUs leads to a 69-287-fold improvement and three GPUs yield a 101-377-fold speedup. © 2015 Wiley Periodicals, Inc.
Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki
2014-05-21
In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that can be generated artificially by a simple cutoff truncation, which often causes large energetic noise and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating its accuracy, parameter dependencies, and stability in applications to liquid systems. To do this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave a clear basis for the discussion of the numerical investigations. It also offered a new viewpoint on the relation between the excess energy error and the damping effect of the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated in molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium chloride ion system and a pure water system. In the ion system, the energy accuracy, compared with the Ewald summation, improved for larger values of the multipole moment l, up to l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, the improvement carries over to the total accuracy only if the theoretical moment l is smaller than or equal to a system-intrinsic moment L. The simulation results thus indicate L ∼ 3 in this system, and we observed lower accuracy at l = 4. We demonstrated the origins of the parameter dependencies appearing in the crossing behavior and the oscillations of the energy error curves.
As the moment l was raised, we observed that smaller values of the damping parameter provided more accurate results and smoother behavior with respect to the cutoff length. These features can be explained, on the basis of the theoretical error analyses, by the facts that the excess energy accuracy improves with increasing l and that the total accuracy improvement within l ⩽ L is facilitated by a small damping parameter. Although the accuracy was fundamentally similar to that of the ion system, the bulk water system exhibited distinguishable quantitative behaviors. A smaller damping parameter was effective over the whole practical range of cutoff distances, a fact that can be interpreted through the reduction of the excess subset. A lower moment was advantageous for the energy accuracy, with l = 1 slightly superior to l = 2 in this system. However, the method with l = 2 (viz., the zero-quadrupole sum) gave accurate results for the radial distribution function. We confirmed the stability of the numerical integration in MD simulations employing the ZM scheme. This result is supported by the sufficient smoothness of the energy function. Along with this smoothness, the pairwise form and the allowance of an atom-based cutoff mode in the energy formula lead to an exactly zero total force, ensuring total-momentum conservation for typical MD equations of motion.
Nano-colloid electrophoretic transport: Fully explicit modelling via dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Hassanzadeh Afrouzi, Hamid; Farhadi, Mousa; Sedighi, Kurosh; Moshfegh, Abouzar
2018-02-01
In the present study, a novel fully explicit approach using the dissipative particle dynamics (DPD) method is introduced for modelling the electrophoretic transport of nano-colloids in an electrolyte solution. A Slater-type charge smearing function included in a 3D Ewald summation method is employed to treat the electrostatic interactions. Moreover, the capabilities of different thermostats to control the system temperature are tested, and the dynamic response of the colloidal electrophoretic mobility is studied under practical ranges of the external electric field for nanoscale applications (0.072 < E < 0.361 V/nm), covering the non-linear response regime, and of the ionic salt concentration (0.049 < SC < 0.69 M), covering weak to strong Debye screening of the colloid. The effects of different colloidal repulsions are then studied on the temperature, reduced mobility, and zeta potential, the latter computed from the charge distribution within the spherical colloidal EDL. The system temperature and the electrophoretic mobility increase with the electric field and decrease with the colloidal repulsion. The decline of the mobility with colloidal repulsion reaches a plateau, a relatively constant value at each electrolyte salinity, for Aii > 600 in DPD units regardless of the electric field intensity. The Nosé-Hoover-Lowe-Andersen and Lowe-Andersen thermostats are found to function more effectively under high electric fields (E > 0.145 V/nm) while thermal equilibrium is maintained. Reasonable agreement is achieved by benchmarking the radial distribution function against available models of electrolyte structure, as well as by comparing the reduced mobility against the conventional Smoluchowski and Hückel theories and a numerical solution of the Poisson-Boltzmann equation.
Das, Susanta; Nam, Kwangho; Major, Dan Thomas
2018-03-13
In recent years, a number of quantum mechanical-molecular mechanical (QM/MM) enzyme studies have investigated the dependence of reaction energetics on the size of the QM region using energy and free energy calculations. In this study, we revisit the question of QM region size dependence in QM/MM simulations within the context of energy and free energy calculations using a proton transfer in a DNA base pair as a test case. In the simulations, the QM region was treated with a dispersion-corrected AM1/d-PhoT Hamiltonian, which was developed to accurately describe phosphoryl and proton transfer reactions, in conjunction with an electrostatic embedding scheme using the particle-mesh Ewald summation method. With this rigorous QM/MM potential, we performed rather extensive QM/MM sampling, and found that the free energy reaction profiles converge rapidly with respect to the QM region size within ca. ±1 kcal/mol. This finding suggests that the strategy of QM/MM simulations with reasonably sized and selected QM regions, which has been employed for over four decades, is a valid approach for modeling complex biomolecular systems. We point to possible causes for the sensitivity of the energy and free energy calculations to the size of the QM region, and potential implications.
Angular resolution and range of dipole-dipole correlations in water
NASA Astrophysics Data System (ADS)
Mathias, Gerald; Tavan, Paul
2004-03-01
We investigate the dipolar correlations in liquid water at angular resolution by molecular-dynamics simulations of a large periodic simulation system containing about 40 000 molecules. Because we are particularly interested in the long-range ordering, we use a simple three-point model for these molecules. The electrostatics is treated both by Ewald summation and by minimum image truncation combined with a reaction field approach. To gain insight into the angular dependence of the simulated dipolar ordering we introduce a suitable expansion of the molecular pair distribution function into a set of two-dimensional correlation functions. We show that these functions enable detailed insights into the shell structure of the dipolar ordering around a given water molecule. For these functions we derive analytical expressions in the particular case in which liquid water is conceived as a dielectric continuum. Comparisons of these continuum models with the correlation functions derived from the simulations yield the key result that liquid water behaves like a continuum dielectric beyond distances of about 15 Å from a given water molecule. We argue that this should be a generic property of water independent of our modeling. By comparison of the results of the two different electrostatics treatments with the continuum description we show that the boundary artifacts occurring in both methods are isotropically distributed and are locally small in the respective boundary regions.
Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.
Khoromskaia, Venera; Khoromskij, Boris N
2015-12-21
We review the recent successes of grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on low-rank representations of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. The solver benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity, using rank-structured approximations of the basis functions, electron densities, and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is replaced by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating such a potential sum on an L × L × L lattice requires computational work linear in L, O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.
NASA Astrophysics Data System (ADS)
Guerrero-García, Guillermo Iván; González-Mozuelos, Pedro; de la Cruz, Mónica Olvera
2011-10-01
In a previous theoretical and simulation study [G. I. Guerrero-García, E. González-Tovar, and M. Olvera de la Cruz, Soft Matter 6, 2056 (2010)], it has been shown that an asymmetric charge neutralization and electrostatic screening depending on the charge polarity of a single nanoparticle occurs in the presence of a size-asymmetric monovalent electrolyte. This effect should also impact the effective potential between two macroions suspended in such a solution. Thus, in this work we study the mean force and the potential of mean force between two identical charged nanoparticles immersed in a size-asymmetric monovalent electrolyte, showing that these results go beyond the standard description provided by the well-known Derjaguin-Landau-Verwey-Overbeek theory. To include consistently the ion-size effects, molecular dynamics (MD) simulations and liquid theory calculations are performed at the McMillan-Mayer level of description in which the solvent is taken into account implicitly as a background continuum with the suitable dielectric constant. Long-range electrostatic interactions are handled properly in the simulations via the well established Ewald sums method and the pre-averaged Ewald sums approach, originally proposed for homogeneous ionic fluids. An asymmetric behavior with respect to the colloidal charge polarity is found for the effective interactions between two identical nanoparticles. In particular, short-range attractions are observed between two equally charged nanoparticles, even though our model does not include specific interactions; these attractions are greatly enhanced for anionic nanoparticles immersed in standard electrolytes where cations are smaller than anions. Practical implications of some of the presented results are also briefly discussed. A good accord between the standard Ewald method and the pre-averaged Ewald approach is attained, despite the fact that the ionic system studied here is certainly inhomogeneous. 
In general, good agreement between the liquid theory approach and MD simulations is also found.
A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method
NASA Astrophysics Data System (ADS)
Bush, I. J.; Todorov, I. T.; Smith, W.
2006-09-01
The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowitz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long-ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier Transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe the software adaptations undertaken to import this functionality and provide a review of its performance.
Elucidations on the Reciprocal Lattice and the Ewald Sphere
ERIC Educational Resources Information Center
Foadi, J.; Evans, G.
2008-01-01
The reciprocal lattice is derived through the Fourier transform of a generic crystal lattice, as done previously in the literature. A few key derivations are this time handled in detail, and the connection with x-ray diffraction is clearly pointed out. The Ewald sphere is subsequently thoroughly explained and a few comments on its representation…
Cutoff size need not strongly influence molecular dynamics results for solvated polypeptides.
Beck, David A C; Armen, Roger S; Daggett, Valerie
2005-01-18
The correct treatment of van der Waals and electrostatic nonbonded interactions in molecular force fields is essential for performing realistic molecular dynamics (MD) simulations of solvated polypeptides. The most computationally tractable treatment of nonbonded interactions in MD utilizes a spherical distance cutoff (typically 8-12 Å) to reduce the number of pairwise interactions. In this work, we assess three spherical atom-based cutoff approaches for use with all-atom explicit solvent MD: abrupt truncation, a CHARMM-style electrostatic shift truncation, and our own force-shifted truncation. The chosen system for this study is an end-capped 17-residue alanine-based alpha-helical peptide, selected because of its use in previous computational and experimental studies. We compare the time-averaged helical content calculated from these MD trajectories with experiment. We also examine the effect of varying the cutoff treatment and distance on energy conservation. We find that the abrupt truncation approach is pathological in its inability to conserve energy. The CHARMM-style shift truncation performs quite well but suffers from energetic instability. On the other hand, the force-shifted spherical cutoff method conserves energy, correctly predicts the experimental helical content, and shows convergence in simulation statistics as the cutoff is increased. This work demonstrates that by using proper and rigorous techniques, it is possible to correctly model polypeptide dynamics in solution with a spherical cutoff. The inherent computational advantage of spherical cutoffs over Ewald summation (and related) techniques is essential in accessing longer MD time scales.
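The three truncation schemes compared above differ only in how the pair potential behaves at the cutoff. Below is a generic sketch of the abrupt and force-shifted forms for unit charges (the exact CHARMM shifting function differs in detail and is not reproduced here):

```python
def v_abrupt(r, rc):
    """Abrupt truncation: the energy jumps discontinuously by 1/rc at r = rc."""
    return 1.0 / r if r < rc else 0.0

def v_force_shift(r, rc):
    """Force-shifted Coulomb (unit charges): V and F = -dV/dr both -> 0 at rc."""
    if r >= rc:
        return 0.0
    return 1.0 / r - 1.0 / rc + (r - rc) / rc**2

def f_force_shift(r, rc):
    """Force magnitude from the shifted potential: 1/r^2 - 1/rc^2."""
    return 1.0 / r**2 - 1.0 / rc**2 if r < rc else 0.0

rc = 10.0
eps = 1e-6
print(v_force_shift(rc - eps, rc))  # ~0: energy continuous at the cutoff
print(f_force_shift(rc - eps, rc))  # ~0: force continuous at the cutoff
print(v_abrupt(rc - eps, rc))       # ~0.1: abrupt scheme jumps by 1/rc here
```

The force-shift term (r - rc)/rc² tilts the potential so that both V and F vanish continuously at rc, which is what restores energy conservation in the MD trajectories described above.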
NASA Astrophysics Data System (ADS)
Needham, Perri J.; Bhuiyan, Ashraf; Walker, Ross C.
2016-04-01
We present an implementation of explicit solvent particle mesh Ewald (PME) classical molecular dynamics (MD) within the PMEMD molecular dynamics engine, part of the AMBER v14 MD software package, that makes use of Intel Xeon Phi coprocessors by offloading portions of the PME direct summation and neighbor list build to the coprocessor. We refer to this implementation as pmemd MIC offload, and in this paper we present the technical details of the algorithm, including basic models for the MPI and OpenMP configuration, and analyze the resultant performance. The algorithm provides the best performance improvement for large systems (>400,000 atoms), achieving a ∼35% performance improvement for satellite tobacco mosaic virus (1,067,095 atoms) when 2 Intel E5-2697 v2 processors (2 × 12 cores, 30M cache, 2.7 GHz) are coupled to an Intel Xeon Phi coprocessor (Model 7120P, 1.238/1.333 GHz, 61 cores). The implementation utilizes a two-fold decomposition strategy: spatial decomposition using an MPI library and thread-based decomposition using OpenMP. We also present compiler optimization settings that improve performance on Intel Xeon processors while retaining simulation accuracy.
Large-scale Synchronization in Carpets of Micro-rotors
NASA Astrophysics Data System (ADS)
Kanale, Anup; Guo, Hanliang; Yan, Wen; Kanso, Eva
2017-11-01
Motile cilia are ubiquitous in nature, and have a critical role in biological locomotion and fluid transport. They often beat in an orchestrated wavelike fashion, and theoretical evidence suggests that this coordinated motion could arise from hydrodynamic interactions. Models based on bead-spring oscillators were used to examine the interaction between pairs of cilia, focusing on in-phase or anti-phase synchrony, while models of hydrodynamically-coupled elastic filaments looked at metachronal coordination in large but finite numbers of interacting cilia. The latter models reproduce metachronal wave coordination, but they are not readily amenable to analysis and parametric studies that highlight the origin of the instabilities that lead to wave propagations and wavelength selection. Here, we use a known model in which each cilium is represented by a rigid sphere moving along a circular trajectory close to a wall, hence the term rotor. The rotor is driven by a cilia-inspired force profile. We generalize this model to a doubly-periodic array of rotors, assuming small distance to the bounding wall, and employ Ewald summation techniques to solve for the flow field. Our goal is to examine the conditions that give rise to stable metachronal waves and their associated wavelength.
Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.; ...
2017-07-26
Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for models of electrolyte solution. In this paper, we provide definitions of the various types of single ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation and isolate the effects of charge and cavitation, comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to unphysical values for the single ion solvation free energy and that charging free energies for cations are approximately linear as a function of charge but that there is a small non-linearity for small anions. The charge hydration asymmetry for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. Finally, this suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation.
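The Born (linear response) model used above as a reference gives the solvation free energy of a charged sphere in a dielectric continuum. A minimal sketch (the 2 Å radius is an illustrative choice; the Coulomb constant e²/4πε0 ≈ 1389.35 kJ Å/mol is standard):

```python
# Born model: dG = -(q^2 * KE / (2 R)) * (1 - 1/eps_r)
KE = 1389.35   # Coulomb constant e^2/(4*pi*eps0) in kJ*Angstrom/mol

def born_solvation(q, radius, eps_r=78.0):
    """Solvation free energy (kJ/mol) of a charge q (in units of e) on a
    sphere of the given radius (Angstrom) transferred from vacuum into a
    dielectric continuum of relative permittivity eps_r."""
    return -(q**2) * KE / (2.0 * radius) * (1.0 - 1.0 / eps_r)

dg = born_solvation(1.0, 2.0)
print(dg)   # about -343 kJ/mol for a monovalent 2 Å sphere in water
```

Note that the Born expression depends on q², so it predicts zero charge hydration asymmetry; that asymmetry is precisely what this linear-response picture misses for the hard spheres and real ions discussed above.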
Golebiowski, Jérôme; Antonczak, Serge; Di-Giorgio, Audrey; Condom, Roger; Cabrol-Bass, Daniel
2004-02-01
The dynamic behavior of the HCV IRES IIId domain is analyzed by means of a 2.6-ns molecular dynamics simulation, starting from an NMR structure. The simulation is carried out in explicit water with Na+ counterions, and particle-mesh Ewald summation is used for the electrostatic interactions. In this work, we analyze selected patterns of the helix that are crucial for IRES activity and that could be considered as targets for the intervention of inhibitors, such as the hexanucleotide terminal loop (more particularly its three consecutive guanines) and the loop-E motif. The simulation has allowed us to analyze the dynamics of the loop substructure and has revealed a behavior among the guanine bases that might explain the different role of the third guanine of the GGG triplet upon molecular recognition. The accessibility of the loop-E motif and the loop major and minor groove is also examined, as well as the effect of Na+ or Mg2+ counterion within the simulation. The electrostatic analysis reveals several ion pockets, not discussed in the experimental structure. The positions of these ions are useful for locating specific electrostatic recognition sites for potential inhibitor binding.
Takae, Kyohei; Onuki, Akira
2013-09-28
We develop an efficient Ewald method of molecular dynamics simulation for calculating the electrostatic interactions among charged and polar particles between parallel metallic plates, where an electric field of arbitrary strength may be applied. We use the fact that the potential from the surface charges is equivalent to the sum of those from image charges and dipoles located outside the cell. We present simulation results on boundary effects in charged and polar fluids, the formation of ionic crystals, and the formation of dipole chains, where the applied field and the image interaction are crucial. For polar fluids, we find a large deviation from the classical Lorentz relation between the local field and the applied field, due to pair correlations along the applied field. As general aspects, we clarify the difference between the potential-fixed and the charge-fixed boundary conditions and examine the relationship between the discrete particle description and continuum electrostatics.
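The image construction exploited above can be illustrated for the simplest case of a single point charge between two grounded plates: each plate is replaced by an infinite alternating ladder of image charges outside the slab. A reduced-units sketch (q²/4πε0 = 1; the truncation nmax is an illustrative choice):

```python
def image_force(z0, d, nmax=2000):
    """z-force on a unit charge at height z0 between grounded plates at z = 0, d.
    The plates are replaced by images: +q at 2nd + z0 (n != 0), -q at 2nd - z0."""
    F = 0.0
    for n in range(-nmax, nmax + 1):
        if n != 0:
            zi = 2 * n * d + z0              # positive images
            F += (z0 - zi) / abs(z0 - zi)**3
        zi = 2 * n * d - z0                  # negative images
        F -= (z0 - zi) / abs(z0 - zi)**3
    return F

print(image_force(0.5, 1.0))   # ~0 (up to series truncation): midplane equilibrium
print(image_force(0.3, 1.0))   # < 0: the charge is pulled toward the nearer plate
```

The alternating series converges slowly, which is one reason production codes fold the image contributions into an Ewald-type sum rather than truncating them directly.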
Literacy and Justice through Photography: A Classroom Guide. Language & Literacy Series
ERIC Educational Resources Information Center
Ewald, Wendy; Hyde, Katherine; Lord, Lisa
2011-01-01
This practical guide will help teachers to use the acclaimed "Literacy Through Photography" method developed by Wendy Ewald to promote critical thinking, self-expression, and respect in the classroom. The authors share their perspectives as an artist, a sociologist, and a teacher to show educators how to integrate four new "Literacy Through…
Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton
2014-11-11
We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations on an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils during sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.
GRAPE-5: A Special-Purpose Computer for N-Body Simulations
NASA Astrophysics Data System (ADS)
Kawai, Atsushi; Fukushige, Toshiyuki; Makino, Junichiro; Taiji, Makoto
2000-08-01
We have developed a special-purpose computer for gravitational many-body simulations, GRAPE-5. GRAPE-5 accelerates the force calculation, which dominates the computational cost of the simulation. All other calculations, such as the time integration of orbits, are performed on a general-purpose computer (host computer) connected to GRAPE-5. A GRAPE-5 board consists of eight custom pipeline chips (G5 chips), and its peak performance is 38.4 Gflops. GRAPE-5 is the successor of GRAPE-3. The differences between GRAPE-5 and GRAPE-3 are: (1) The newly developed G5 chip contains two pipelines operating at 80 MHz, while the GRAPE chip used for GRAPE-3 had one at 20 MHz; the calculation speed of GRAPE-5 is thus eight times that of GRAPE-3. (2) The GRAPE-5 board adopted a PCI bus as the interface to the host computer instead of the VME bus of GRAPE-3, resulting in a communication speed one order of magnitude faster. (3) In addition to the pure 1/r potential, the G5 chip can calculate forces with arbitrary cutoff functions, so that it can be applied to the Ewald or P^3M methods. (4) The pairwise force calculated on GRAPE-5 is about 10 times more accurate than that on GRAPE-3. On one GRAPE-5 board, one timestep with a direct summation algorithm takes 14(N/128k)^2 seconds. With the Barnes-Hut tree algorithm (theta = 0.75), one timestep can be done in 15(N/10^6) seconds.
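The direct-summation workload that GRAPE-5 offloads, including the arbitrary radial cutoff factor mentioned in point (3), can be sketched in plain Python. This is a minimal illustration of the O(N^2) pairwise force loop, not the GRAPE interface; the function name, the default cutoff, and the Plummer softening `eps` are all assumptions for the example.

```python
import math

def direct_forces(pos, masses, g=lambda r: 1.0, eps=1e-4):
    """Direct O(N^2) pairwise force summation, the workload GRAPE-class
    hardware accelerates.  g(r) is an arbitrary radial cutoff factor
    (g == 1 recovers the pure 1/r potential, as needed for the
    real-space part of Ewald or P^3M); eps is a Plummer softening."""
    n = len(pos)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps
            r = math.sqrt(r2)
            f = masses[j] * g(r) / (r2 * r)   # magnitude divided by r
            for k in range(3):
                forces[i][k] += f * dx[k]
    return forces
```

For two unit masses separated by unit distance the force magnitude is 1 (up to softening), and the forces on the pair cancel exactly, as required by Newton's third law.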
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Taiji, Makoto; Makino, Junichiro; Ebisuzaki, Toshikazu; Sugimoto, Daiichiro
1996-09-01
We have developed a parallel, pipelined special-purpose computer for N-body simulations, MD-GRAPE (for "GRAvity PipE"). In gravitational N-body simulations, almost all computing time is spent on the calculation of interactions between particles. GRAPE is specialized hardware to calculate these interactions. It is used with a general-purpose front-end computer that performs all calculations other than the force calculation. MD-GRAPE is the first parallel GRAPE that can calculate an arbitrary central force. A force different from a pure 1/r potential is necessary for N-body simulations with periodic boundary conditions using the Ewald or particle-particle/particle-mesh (P^3M) method. MD-GRAPE accelerates the calculation of the particle-particle force for these algorithms. An MD-GRAPE board has four MD chips and its peak performance is 4.2 GFLOPS. On an MD-GRAPE board, a cosmological N-body simulation takes 600(N/10^6)^(3/2) s per step for the Ewald method, where N is the number of particles, and would take 240(N/10^6) s per step for the P^3M method, for a uniform distribution of particles.
MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system
Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron
2011-01-01
We report an optimized version of the molecular dynamics program MOIL that runs on a shared-memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node, with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Especially for long simulations, energy conservation is critical due to the phenomenon known as "energy drift," in which energy errors accumulate linearly as a function of simulation time. To achieve long-time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double-precision summation of real-space non-bonded interactions further improves energy conservation. In our best option, the energy drift, using a 1 fs time step while constraining the distances of all bonds, is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
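The lookup-table comparison above can be illustrated with the real-space Ewald screening factor erfc(r). The sketch below is my own, with invented grid sizes, not MOIL's actual tables; it shows why three-point (quadratic) interpolation on a coarse grid can match or beat linear interpolation on the same grid.

```python
import math

def make_table(f, r_max, n):
    """Tabulate f on n+1 equally spaced points in [0, r_max]."""
    h = r_max / n
    return [f(i * h) for i in range(n + 1)], h

def lookup_linear(table, h, r):
    i = int(r / h)
    t = r / h - i
    return (1 - t) * table[i] + t * table[i + 1]

def lookup_quadratic(table, h, r):
    """Three-point Lagrange interpolation around the nearest grid point."""
    i = min(max(int(round(r / h)), 1), len(table) - 2)
    t = r / h - i
    return (0.5 * t * (t - 1) * table[i - 1]
            + (1.0 - t * t) * table[i]
            + 0.5 * t * (t + 1) * table[i + 1])

f = math.erfc                       # real-space Ewald screening factor
table, h = make_table(f, 4.0, 64)   # a deliberately coarse grid

r_test = [0.037 * k for k in range(1, 105)]
err_lin = max(abs(lookup_linear(table, h, r) - f(r)) for r in r_test)
err_quad = max(abs(lookup_quadratic(table, h, r) - f(r)) for r in r_test)
```

On the same grid the quadratic error scales as h^3 versus h^2 for linear, so the quadratic table can use far fewer points for the same accuracy, which is the trade-off the abstract describes.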
Rapid sampling of stochastic displacements in Brownian dynamics simulations
NASA Astrophysics Data System (ADS)
Fiore, Andrew M.; Balboa Usabiaga, Florencio; Donev, Aleksandar; Swan, James W.
2017-03-01
We present a new method for sampling stochastic displacements in Brownian Dynamics (BD) simulations of colloidal scale particles. The method relies on a new formulation for Ewald summation of the Rotne-Prager-Yamakawa (RPY) tensor, which guarantees that the real-space and wave-space contributions to the tensor are independently symmetric and positive-definite for all possible particle configurations. Brownian displacements are drawn from a superposition of two independent samples: a wave-space (far-field or long-ranged) contribution, computed using techniques from fluctuating hydrodynamics and non-uniform fast Fourier transforms; and a real-space (near-field or short-ranged) correction, computed using a Krylov subspace method. The combined computational complexity of drawing these two independent samples scales linearly with the number of particles. The proposed method circumvents the super-linear scaling exhibited by all known iterative sampling methods applied directly to the RPY tensor that results from the power law growth of the condition number of the tensor with the number of particles. For geometrically dense microstructures (fractal dimension equal three), the performance is independent of volume fraction, while for tenuous microstructures (fractal dimension less than three), such as gels and polymer solutions, the performance improves with decreasing volume fraction. This is in stark contrast with other related linear-scaling methods such as the force coupling method and the fluctuating immersed boundary method, for which performance degrades with decreasing volume fraction. Calculations for hard sphere dispersions and colloidal gels are illustrated and used to explore the role of microstructure in the performance of the algorithm.
In practice, the logarithmic part of the predicted scaling is not observed and the algorithm scales linearly for up to 4×10^6 particles, obtaining speedups of over an order of magnitude over existing iterative methods and making the cost of computing Brownian displacements comparable to the cost of computing deterministic displacements in BD simulations. A high-performance implementation employing non-uniform fast Fourier transforms implemented on graphics processing units and integrated with the software package HOOMD-blue is used for benchmarking.
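The key statistical property, that displacements drawn from two independently positive-definite contributions superpose into a sample with the full covariance, can be caricatured in one dimension. The standard deviations below are illustrative stand-ins for the matrix square roots of the wave-space and real-space parts of the RPY tensor, not values from the paper.

```python
import random

def sample_displacement(sigma_wave, sigma_real, rng):
    """Brownian displacement drawn as the sum of two independent Gaussian
    samples: a wave-space (far-field) part and a real-space (near-field)
    part.  Because the two contributions are independent, their variances
    simply add -- a 1-D caricature of the superposition scheme."""
    return sigma_wave * rng.gauss(0.0, 1.0) + sigma_real * rng.gauss(0.0, 1.0)

rng = random.Random(0)
samples = [sample_displacement(0.6, 0.8, rng) for _ in range(200_000)]
var = sum(x * x for x in samples) / len(samples)
# for independent parts the variances add: 0.6**2 + 0.8**2 = 1.0
```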
A review and reassessment of diffraction, scattering, and shadows in electrodynamics
NASA Astrophysics Data System (ADS)
Berg, Matthew J.; Sorensen, Christopher M.
2018-05-01
The concepts of diffraction and scattering are well known and considered fundamental in optics and other wave phenomena. For any type of wave, one way to define diffraction is the spreading of waves, i.e., no change in the average propagation direction, while scattering is the deflection of waves with a clear change of propagation direction. However, the terms "diffraction" and "scattering" are often used interchangeably, and hence, a clear distinction between the two is difficult to find. This review considers electromagnetic waves and retains the simple definition that diffraction is the spreading of waves but demonstrates that all diffraction patterns are the result of scattering. It is shown that for electromagnetic waves, the "diffracted" wave from an object is the Ewald-Oseen extinction wave in the far-field zone. The intensity distribution of this wave yields what is commonly called the diffraction pattern. Moreover, this is the same Ewald-Oseen wave that cancels the incident wave inside the object and thereafter continues to do so immediately behind the object to create a shadow. If the object is much wider than the beam but has a hole, e.g., a screen with an aperture, the Ewald-Oseen extinction wave creates the shadow behind the screen and the incident light that passes through the aperture creates the diffraction pattern. This point of view also illustrates Babinet's principle. Thus, it is the Ewald-Oseen extinction theorem that binds together diffraction, scattering, and shadows.
NASA Astrophysics Data System (ADS)
Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.
2004-01-01
The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP) obtained through the use of a distributed multipole moment description, has been shown to converge to the quantum MEP outside the van der Waals surface, when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. 
It is hoped that this new formalism will facilitate the systematic implementation of higher order multipoles in classical biomolecular force fields.
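The Ewald "direct sum" / "reciprocal sum" split discussed above can be sketched for the simplest charge-charge case. The code below is a bare textbook Ewald potential in Gaussian units, my own minimal sketch rather than the McMurchie-Davidson or PME machinery of the paper; as a check it recovers the Madelung constant of rock salt.

```python
import math, itertools

def ewald_potential(pos, q, i, L=1.0, alpha=5.0, ncells=2, kmax=8):
    """Electrostatic potential at ion i in a cubic cell of side L under
    3-D periodic boundary conditions, split into the Ewald real-space,
    reciprocal-space and self contributions."""
    V = L ** 3
    n = len(pos)
    # real-space ("direct") sum over nearby periodic images
    phi_real = 0.0
    cells = range(-ncells, ncells + 1)
    for cx, cy, cz in itertools.product(cells, repeat=3):
        for j in range(n):
            dx = pos[j][0] - pos[i][0] + cx * L
            dy = pos[j][1] - pos[i][1] + cy * L
            dz = pos[j][2] - pos[i][2] + cz * L
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            if r < 1e-12:
                continue                 # skip the ion's own charge
            phi_real += q[j] * math.erfc(alpha * r) / r
    # reciprocal-space sum over wave vectors k = 2*pi*m/L
    phi_recip = 0.0
    for mx, my, mz in itertools.product(range(-kmax, kmax + 1), repeat=3):
        if mx == my == mz == 0:
            continue
        kx, ky, kz = (2 * math.pi / L * m for m in (mx, my, mz))
        k2 = kx * kx + ky * ky + kz * kz
        s = sum(qj * math.cos(kx * (pj[0] - pos[i][0])
                              + ky * (pj[1] - pos[i][1])
                              + kz * (pj[2] - pos[i][2]))
                for pj, qj in zip(pos, q))
        phi_recip += (4 * math.pi / V) * math.exp(-k2 / (4 * alpha ** 2)) / k2 * s
    # cancel the spurious Gaussian self-interaction from the k-sum
    phi_self = -2 * alpha / math.sqrt(math.pi) * q[i]
    return phi_real + phi_recip + phi_self

# rock-salt (NaCl) conventional cell: 4 cations, 4 anions, side L = 1,
# nearest-neighbour distance d = 0.5
pos = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5),
       (0.5, 0, 0), (0, 0.5, 0), (0, 0, 0.5), (0.5, 0.5, 0.5)]
q = [1.0] * 4 + [-1.0] * 4
madelung = 0.5 * ewald_potential(pos, q, 0)   # phi(cation site) * d
```

The value should approach the NaCl Madelung constant, about -1.7476, once the real- and reciprocal-space cutoffs are converged.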
Liquid-liquid transition in ST2 water
NASA Astrophysics Data System (ADS)
Liu, Yang; Palmer, Jeremy C.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.
2012-12-01
We use the weighted histogram analysis method [S. Kumar, D. Bouzida, R. H. Swendsen, P. A. Kollman, and J. M. Rosenberg, J. Comput. Chem. 13, 1011 (1992), 10.1002/jcc.540130812] to calculate the free energy surface of the ST2 model of water as a function of density and bond-orientational order. We perform our calculations at deeply supercooled conditions (T = 228.6 K, P = 2.2 kbar; T = 235 K, P = 2.2 kbar) and focus our attention on the region of bond-orientational order that is relevant to disordered phases. We find a first-order transition between a low-density liquid (LDL, ρ ≈ 0.9 g/cc) and a high-density liquid (HDL, ρ ≈ 1.15 g/cc), confirming our earlier sampling of the free energy surface of this model as a function of density [Y. Liu, A. Z. Panagiotopoulos, and P. G. Debenedetti, J. Chem. Phys. 131, 104508 (2009), 10.1063/1.3229892]. We demonstrate the disappearance of the LDL basin at high pressure and of the HDL basin at low pressure, in agreement with independent simulations of the system's equation of state. Consistency between directly computed and reweighted free energies, as well as between free energy surfaces computed using different thermodynamic starting conditions, confirms proper equilibrium sampling. Diffusion and structural relaxation calculations demonstrate that equilibration of the LDL phase, which exhibits slow dynamics, is attained in the course of the simulations. Repeated flipping between the LDL and HDL phases in the course of long molecular dynamics runs provides further evidence of a phase transition. We use the Ewald summation with vacuum boundary conditions to calculate long-ranged Coulombic interactions and show that conducting boundary conditions lead to unphysical behavior at low temperatures.
Summation rules for a fully nonlocal energy-based quasicontinuum method
NASA Astrophysics Data System (ADS)
Amelang, J. S.; Venturini, G. N.; Kochmann, D. M.
2015-09-01
The quasicontinuum (QC) method coarse-grains crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. A crucial cornerstone of all QC techniques, summation or quadrature rules efficiently approximate the thermodynamic quantities of interest. Here, we investigate summation rules for a fully nonlocal, energy-based QC method to approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of all atoms in the crystal lattice. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. We review traditional summation rules and discuss their strengths and weaknesses with a focus on energy approximation errors and spurious force artifacts. Moreover, we introduce summation rules which produce no residual or spurious force artifacts in centrosymmetric crystals in the large-element limit under arbitrary affine deformations in two dimensions (and marginal force artifacts in three dimensions), while allowing us to seamlessly bridge to full atomistics. Through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions, we compare the accuracy of the new scheme to various previous ones. Our results confirm that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors. Our numerical benchmark examples include the calculation of elastic constants from completely random QC meshes and the inhomogeneous deformation of aggressively coarse-grained crystals containing nano-voids. In the elastic regime, we directly compare QC results to those of full atomistics to assess global and local errors in complex QC simulations. 
Going beyond elasticity, we illustrate the performance of the energy-based QC method with the new second-order summation rule with the help of nanoindentation examples with automatic mesh adaptation. Overall, our findings provide guidelines for the selection of summation rules for the fully nonlocal energy-based QC method.
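The core idea of a summation rule, approximating the total Hamiltonian by a weighted sum over a few sampling atoms, and why affine deformations can produce no residual artifacts, can be sketched for a 1-D harmonic chain. This is a toy model of my own, not the paper's crystal ensembles: under a uniform stretch every interior site energy is identical, so one interior sampling atom with the right weight reproduces the full sum exactly.

```python
def bond(r):
    """Harmonic pair energy with rest length 1."""
    return 0.5 * (r - 1.0) ** 2

def site_energies(F, n):
    """Chain of n atoms at x_a = F*a (uniform stretch F); each atom is
    assigned half the energy of each bond it touches."""
    E = []
    for a in range(n):
        e = 0.0
        if a > 0:
            e += 0.5 * bond(F)
        if a < n - 1:
            e += 0.5 * bond(F)
        E.append(e)
    return E

n, F = 1001, 1.03
E = site_energies(F, n)
exact = sum(E)                       # full atomistic Hamiltonian

# summation rule: the two boundary atoms are kept explicitly with
# weight 1, and all n - 2 interior atoms are represented by a single
# sampling atom with weight n - 2
approx = E[0] + E[-1] + (n - 2) * E[n // 2]
```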
NASA Astrophysics Data System (ADS)
Pullara, Filippo; General, Ignacio J.
2015-10-01
Standard Molecular Dynamics (MD) simulations are usually performed under periodic boundary conditions using the well-established "Ewald summation". This implies that the distance between each element in a given lattice cell and its corresponding element in another cell, as well as their relative orientations, are constant. Consequently, protein-protein interactions between proteins in different cells—important in many biological activities, such as protein cooperativity and physiological/pathological aggregation—are severely restricted, and features driven by protein-protein interactions are lost. The consequences of these restrictions, although conceptually understood and mentioned in the literature, have not been quantitatively studied before. The effect of protein-protein interactions on the free energy landscape of a model system, dialanine, is presented. This simple system features a free energy diagram with well-separated minima. It is found that, in the absence of peptide-peptide (p-p) interactions, the ψ = 150° dihedral angle determines the most energetically favored conformation (global free-energy minimum). When strong p-p interactions are induced, the global minimum switches to the ψ = 0° conformation. This shows that the free-energy landscape of an individual molecule is dramatically affected by the presence of other freely interacting molecules of its same type. Results of the study suggest how taking p-p interactions into account in MD allows a more realistic picture of system activity and functional conformations.
Mismatched summation mechanisms in older adults for the perception of small moving stimuli.
McDougall, Thomas J; Nguyen, Bao N; McKendrick, Allison M; Badcock, David R
2018-01-01
Previous studies have found evidence for reduced cortical inhibition in aging visual cortex. Reduced inhibition could plausibly increase the spatial area of excitation in receptive fields of older observers, as weaker inhibitory processes would allow the excitatory receptive field to dominate and be psychophysically measurable over larger areas. Here, we investigated aging effects on spatial summation of motion direction using the Battenberg summation method, which aims to control the influence of locally generated internal noise changes by holding overall display size constant. This method produces more accurate estimates of summation area than conventional methods that simply increase overall stimulus dimensions. Battenberg stimuli have a checkerboard arrangement, where check size (luminance-modulated drifting gratings alternating with mean luminance areas), but not display size, is varied and compared with performance for a full-field stimulus to provide a measure of summation. Motion direction discrimination thresholds, where contrast was the dependent variable, were measured in 14 younger (24-34 years) and 14 older (62-76 years) adults. Older observers were less sensitive for all check sizes, but the relative sensitivity across sizes also differed between groups. In the older adults, the full-field stimulus offered smaller performance improvements compared to that for younger adults, specifically for the small-checked Battenberg stimuli. This suggests aging impacts short-range summation mechanisms, potentially underpinned by larger summation areas for the perception of small moving stimuli.
Age effects on pain thresholds, temporal summation and spatial summation of heat and pressure pain.
Lautenbacher, Stefan; Kunz, Miriam; Strate, Peter; Nielsen, Jesper; Arendt-Nielsen, Lars
2005-06-01
Experimental data on age-related changes in pain perception have so far been contradictory. It has appeared that the type of pain induction method is critical in this context, with sensitivity to heat pain being decreased whereas sensitivity to pressure pain may be even enhanced in the elderly. Furthermore, it has been shown that temporal summation of heat pain is more pronounced in the elderly but it has remained unclear whether age differences in temporal summation are also evident when using other pain induction methods. No studies on age-related changes in spatial summation of pain have so far been conducted. The aim of the present study was to provide a comprehensive survey on age-related changes in pain perception, i.e. in somatosensory thresholds (warmth, cold, vibration), pain thresholds (heat, pressure) and spatial and temporal summation of heat and pressure pain. We investigated 20 young (mean age 27.1 years) and 20 elderly (mean age 71.6 years) subjects. Our results confirmed and extended previous findings by showing that somatosensory thresholds for non-noxious stimuli increase with age whereas pressure pain thresholds decrease and heat pain thresholds show no age-related changes. Apart from an enhanced temporal summation of heat pain, pain summation was not found to be critically affected by age. The results of the present study provide evidence for stimulus-specific changes in pain perception in the elderly, with deep tissue (muscle) nociception being affected differently by age than superficial tissue (skin) nociception. Summation mechanisms contribute only moderately to age changes in pain perception.
A new multigrid formulation for high order finite difference methods on summation-by-parts form
NASA Astrophysics Data System (ADS)
Ruggiu, Andrea A.; Weinerfelt, Per; Nordström, Jan
2018-04-01
Multigrid schemes for high order finite difference methods on summation-by-parts form are studied by comparing the effect of different interpolation operators. By using the standard linear prolongation and restriction operators, the Galerkin condition leads to inaccurate coarse grid discretizations. In this paper, an alternative class of interpolation operators that bypass this issue and preserve the summation-by-parts property on each grid level is considered. Clear improvements of the convergence rate for relevant model problems are achieved.
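The Galerkin condition referred to above, building the coarse operator as A_c = R A P from prolongation P and restriction R, can be sketched in one dimension for the classical second-difference operator, where linear prolongation and full weighting happen to reproduce the coarse-grid scheme exactly. This is my own illustration of the condition itself; the paper's point is that with high-order summation-by-parts operators the standard choice of P and R no longer yields accurate coarse discretizations.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n_c, h = 3, 0.125           # coarse interior points; fine grid spacing
n_f = 2 * n_c + 1           # fine interior points (Dirichlet boundaries)

# fine-grid second-difference operator A = (1/h^2) * tridiag(-1, 2, -1)
A = [[(2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0) / h ** 2
     for j in range(n_f)] for i in range(n_f)]

# standard linear prolongation P (coarse point j sits at fine point 2j+1)
P = [[0.0] * n_c for _ in range(n_f)]
for j in range(n_c):
    P[2 * j][j] = 0.5
    P[2 * j + 1][j] = 1.0
    P[2 * j + 2][j] = 0.5

# full-weighting restriction R = P^T / 2
R = [[0.5 * P[i][j] for i in range(n_f)] for j in range(n_c)]

Ac = matmul(R, matmul(A, P))    # Galerkin coarse-grid operator
```

For this operator the interior rows of Ac equal (1/(2h)^2) * [-1, 2, -1], i.e. the second-difference scheme on the coarse grid of spacing 2h.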
On computing closed forms for summations. [polynomials and rational functions
NASA Technical Reports Server (NTRS)
Moenck, R.
1977-01-01
The problem of finding closed forms for a summation involving polynomials and rational functions is considered. A method closely related to Hermite's method for the integration of rational functions is derived. The method expresses the sum of a rational function as a rational-function part and a transcendental part involving derivatives of the gamma function.
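For the purely polynomial case, the closed form comes from the discrete analogue of integrating a power: in the falling-factorial basis, sum_{j<n} j^(k) = n^(k+1)/(k+1). A small sketch (my own, using Stirling numbers of the second kind to change basis, not the paper's Hermite-style algorithm for the rational part):

```python
from fractions import Fraction

def stirling2(m, k, _memo={}):
    """Stirling numbers of the second kind S(m, k)."""
    if (m, k) in _memo:
        return _memo[m, k]
    if m == k == 0:
        v = 1
    elif m == 0 or k == 0:
        v = 0
    else:
        v = k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)
    _memo[m, k] = v
    return v

def falling(n, k):
    """Falling factorial n^(k) = n (n-1) ... (n-k+1)."""
    out = Fraction(1)
    for i in range(k):
        out *= n - i
    return out

def closed_sum(coeffs, n):
    """Closed form for sum_{j=0}^{n-1} p(j), p(x) = sum_m coeffs[m] x^m,
    via x^m = sum_k S(m,k) x^(k) and sum_{j<n} j^(k) = n^(k+1)/(k+1)."""
    total = Fraction(0)
    for m, c in enumerate(coeffs):
        for k in range(m + 1):
            total += c * stirling2(m, k) * falling(n, k + 1) / (k + 1)
    return total

# p(j) = j^2: the closed form matches the brute-force sum
brute = sum(j * j for j in range(100))
closed = closed_sum([0, 0, 1], 100)
```

The closed form is evaluated in O(deg^2) arithmetic operations, independent of n, which is the point of seeking closed forms in the first place.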
Secure Multiparty Quantum Computation for Summation and Multiplication.
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-21
As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach ensures unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
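A classical baseline for the summation primitive is additive secret sharing modulo a public modulus: each party splits its input into random shares, and only share sums are ever revealed. The sketch below shows this classical analogue (it is not the paper's quantum protocol, and a single simulation obviously cannot model separated parties):

```python
import random

def share(value, n_parties, modulus, rng):
    """Split a private value into n additive shares modulo `modulus`;
    any n-1 of the shares are uniformly random and reveal nothing."""
    shares = [rng.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def secure_sum(inputs, modulus=10**9, seed=0):
    """Simulate the secure-summation protocol among len(inputs) parties."""
    rng = random.Random(seed)
    n = len(inputs)
    # party i sends share j of its input to party j
    all_shares = [share(x, n, modulus, rng) for x in inputs]
    # each party publishes only the sum of the shares it received
    partial = [sum(all_shares[i][j] for i in range(n)) % modulus
               for j in range(n)]
    return sum(partial) % modulus
```

As long as the true sum is below the modulus, the published partial sums reconstruct it while each individual input stays hidden.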
A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering
Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani
2012-01-01
Debye summation, which involves the summation of sinc functions of the distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide-angle X-ray scattering (SAXS/WAXS) and small-angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties or running structure-refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small-angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations unless an error bound derived in this paper is used. Our theoretical and computational results show orders-of-magnitude improvement in computational complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
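The quadratic-cost kernel being accelerated can be written down directly. A minimal sketch with unit scattering factors, not the hierarchical algorithm itself:

```python
import math

def debye_intensity(coords, q):
    """Direct Debye sum I(q) = sum_{i,j} sinc(q * r_ij) with unit
    scattering factors -- the O(N^2) kernel the hierarchical
    algorithm replaces with a linear-time approximation."""
    n = len(coords)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                total += 1.0               # sinc(0) = 1
            else:
                x = q * math.dist(coords[i], coords[j])
                total += math.sin(x) / x
    return total
```

In the q → 0 limit every sinc term tends to 1, so I(0) = N^2, a handy sanity check for any fast approximation.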
Computer simulations of equilibrium magnetization and microstructure in magnetic fluids
NASA Astrophysics Data System (ADS)
Rosa, A. P.; Abade, G. C.; Cunha, F. R.
2017-09-01
In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres with identical diameters. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique, replicating a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and Ewald sums is performed using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate suspension microstructure by computing the important radial pair-distribution function g(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results for g(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and dipole-dipole interactions. A very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures like dimer and trimer chains, with trimers having a probability of forming an order of magnitude lower than that of dimers. Using Monte Carlo with lattice sums, the density distribution function g2(r) is also examined. Whenever this function is different from zero, it indicates structure anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions is explored for both boundary conditions.
Results show that in dilute regimes with moderate dipole-dipole interactions, the standard minimum image method is both accurate and computationally efficient. Otherwise, lattice sums of the magnetic particle interactions are required to accelerate convergence of the equilibrium magnetization. The accuracy of the numerical code is also quantitatively verified by comparing the magnetization obtained from numerical results with asymptotic predictions of high order in the particle volume fraction, in the presence of dipole-dipole interactions. In addition, Brownian Dynamics simulations are used to examine the magnetization relaxation of a ferrofluid and to calculate the magnetic relaxation time as a function of the magnetic particle interaction strength for a given particle volume fraction and a non-dimensional applied field. The simulations of magnetization relaxation show the existence of a critical value of the dipole-dipole interaction parameter. For interaction strengths below the critical value at a given particle volume fraction, the magnetic relaxation time is close to the Brownian relaxation time and the suspension has no appreciable memory. On the other hand, for dipole interaction strengths beyond the critical value, the relaxation time increases exponentially with the strength of the dipole-dipole interaction. Although we have considered equilibrium conditions, the obtained results have far-reaching implications for the analysis of magnetic suspensions under external flow.
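The minimum image method compared against Ewald lattice sums above amounts to measuring each pair interaction through the nearest periodic image only. A minimal sketch for a cubic box:

```python
def minimum_image(ri, rj, L):
    """Displacement from particle i to the nearest periodic image of
    particle j in a cubic box of side L -- the 'minimum image'
    boundary condition, which truncates the lattice sum at the
    first image and so neglects long-ranged dipolar contributions."""
    d = []
    for a, b in zip(ri, rj):
        dx = b - a
        dx -= L * round(dx / L)    # fold each component into [-L/2, L/2]
        d.append(dx)
    return d
```

Because every folded component lies within half a box length, the convention is cheap but implicitly discards the periodic-image contributions that the Ewald lattice sum retains, which is why it degrades for strong dipole-dipole interactions.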
NASA Astrophysics Data System (ADS)
Olsson, Martin A.; García-Sosa, Alfonso T.; Ryde, Ulf
2018-01-01
We have studied the binding of 102 ligands to the farnesoid X receptor within the D3R Grand Challenge 2016 blind-prediction competition. First, we employed docking with five different docking programs and scoring functions. The selected docked poses gave an average root-mean-squared deviation of 4.2 Å. Consensus scoring gave decent results, with a Kendall's τ of 0.26 ± 0.06 and a Spearman's ρ of 0.41 ± 0.08. For a subset of 33 ligands, we calculated relative binding free energies with free-energy perturbation. Five transformations between the ligands involved a change of the net charge, and we implemented and benchmarked a semi-analytic correction (Rocklin et al., J Chem Phys 139:184103, 2013) for artifacts caused by the periodic boundary conditions and Ewald summation. The results gave a mean absolute deviation of 7.5 kJ/mol compared to the experimental estimates and a correlation coefficient of R^2 = 0.1. These results were among the four best in this competition out of 22 submissions. The charge corrections were significant (7-8 kJ/mol) and always improved the results. By employing 23 intermediate states in the free-energy perturbation, there was proper overlap between all states and the precision was 0.1-0.7 kJ/mol. However, thermodynamic cycles indicate that the sampling was insufficient in some of the perturbations.
NASA Astrophysics Data System (ADS)
Amelang, Jeff
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. 
Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
A GPU-Accelerated Parameter Interpolation Thermodynamic Integration Free Energy Method.
Giese, Timothy J; York, Darrin M
2018-03-13
There has been a resurgence of interest in free energy methods motivated by the performance enhancements offered by molecular dynamics (MD) software written for specialized hardware, such as graphics processing units (GPUs). In this work, we exploit the properties of a parameter-interpolated thermodynamic integration (PI-TI) method to connect states by their molecular mechanical (MM) parameter values. This pathway is shown to be better behaved for Mg2+ → Ca2+ transformations than traditional linear alchemical pathways (with and without soft-core potentials). The PI-TI method has the practical advantage that no modification of the MD code is required to propagate the dynamics, and unlike with linear alchemical mixing, only one electrostatic evaluation is needed (e.g., single call to particle-mesh Ewald) leading to better performance. In the case of AMBER, this enables all the performance benefits of GPU-acceleration to be realized, in addition to unlocking the full spectrum of features available within the MD software, such as Hamiltonian replica exchange (HREM). The TI derivative evaluation can be accomplished efficiently in a post-processing step by reanalyzing the statistically independent trajectory frames in parallel for high throughput. We also show how one can evaluate the particle mesh Ewald contribution to the TI derivative evaluation without needing to perform two reciprocal space calculations. We apply the PI-TI method with HREM on GPUs in AMBER to predict pKa values in double-stranded RNA molecules and make comparison with experiments. Convergence to under 0.25 units for these systems required 100 ns or more of sampling per window and coupling of windows with HREM. We find that MM charges derived from ab initio QM/MM fragment calculations improve the agreement between calculation and experimental results.
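The thermodynamic integration estimate underlying the method above, ΔG = ∫₀¹ ⟨∂U/∂λ⟩ dλ evaluated from per-window averages, can be sketched as follows; the λ profile is a toy stand-in, not AMBER output:

```python
import numpy as np

# Hypothetical lambda windows and <dU/dlambda> window averages (kJ/mol); in the
# PI-TI post-processing step these come from reanalyzed trajectory frames.
lam = np.linspace(0.0, 1.0, 11)
dudl = 40.0 * lam - 10.0                      # toy linear profile for illustration

# TI estimate: dG = integral over [0,1] of <dU/dlambda>, by the trapezoid rule.
dG = float(np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lam)))
print(f"dG = {dG:.2f} kJ/mol")                # exact for this linear profile: 10.00
```

For smooth ⟨∂U/∂λ⟩ profiles the trapezoid rule over the windows is usually adequate; strongly curved profiles need more windows near the curvature.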
Science Museum Exhibits' Summative Evaluation with Knowledge Hierarchy Method
ERIC Educational Resources Information Center
Yasar, Erkan; Gurel, Cem
2016-01-01
It is aimed in this research to measure via knowledge hierarchy the things regarding exhibit themes learned by the visitors of the exhibits and compare them with the purpose that the exhibits are designed for, thereby realizing a summative evaluation of the exhibits by knowledge hierarchy method. The research has been conducted in a children's…
Two-Photon Transitions in Hydrogen-Like Atoms
NASA Astrophysics Data System (ADS)
Martinis, Mladen; Stojić, Marko
Different methods for evaluating two-photon transition amplitudes in hydrogen-like atoms are compared with the improved method of direct summation. Three separate contributions to the two-photon transition probabilities in hydrogen-like atoms are calculated. The first one coming from the summation over discrete intermediate states is performed up to nc(max) = 35. The second contribution from the integration over the continuum states is performed numerically. The third contribution coming from the summation from nc(max) to infinity is calculated in an approximate way using the mean level energy for this region. It is found that the choice of nc(max) controls the numerical error in the calculations and can be used to increase the accuracy of the results much more efficiently than in other methods.
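The truncate-and-estimate-the-tail strategy described above (explicit summation up to nc(max), then an approximate treatment of the remainder) can be illustrated on a simple stand-in series, not the physical transition amplitude itself:

```python
import math

# Illustrative stand-in series: sum over n >= 1 of 1/n^2 = pi^2/6.
# Sum discrete terms up to n_max, then estimate the n_max..infinity tail with
# an integral (the analogue of using a mean level energy for that region).
n_max = 35
partial = sum(1.0 / n**2 for n in range(1, n_max + 1))
tail = 1.0 / n_max                # integral from n_max to infinity of dx/x^2
approx = partial + tail

exact = math.pi**2 / 6
print(abs(approx - exact))        # error ~4e-4, vs ~3e-2 without the tail term
```

The choice of n_max trades explicit work against the accuracy of the tail approximation, mirroring the error control discussed in the abstract.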
Multipolar Ewald Methods, 2: Applications Using a Quantum Mechanical Force Field
2015-01-01
A fully quantum mechanical force field (QMFF) based on a modified “divide-and-conquer” (mDC) framework is applied to a series of molecular simulation applications, using a generalized Particle Mesh Ewald method extended to multipolar charge densities. Simulation results are presented for three example applications: liquid water, p-nitrophenylphosphate reactivity in solution, and crystalline N,N-dimethylglycine. Simulations of liquid water using a parametrized mDC model are compared to TIP3P and TIP4P/Ew water models and experiment. The mDC model is shown to be superior for cluster binding energies and generally comparable for bulk properties. Examination of the dissociative pathway for dephosphorylation of p-nitrophenylphosphate shows that the mDC method evaluated with the DFTB3/3OB and DFTB3/OPhyd semiempirical models bracket the experimental barrier, whereas DFTB2 and AM1/d-PhoT QM/MM simulations exhibit deficiencies in the barriers, the latter of which is related, in part, to the anomalous underestimation of the p-nitrophenylate leaving group pKa. Simulations of crystalline N,N-dimethylglycine are performed and the overall structure and atomic fluctuations are compared with experiment and the general AMBER force field (GAFF). The QMFF, which was not parametrized for this application, was shown to be in better agreement with crystallographic data than GAFF. Our simulations highlight some of the application areas that may benefit from using new QMFFs, and they demonstrate progress toward the development of accurate QMFFs using the recently developed mDC framework. PMID:25691830
ERIC Educational Resources Information Center
Hewson, C.
2012-01-01
To address concerns raised regarding the use of online course-based summative assessment methods, a quasi-experimental design was implemented in which students who completed a summative assessment either online or offline were compared on performance scores when using their self-reported "preferred" or "non-preferred" modes.…
ERIC Educational Resources Information Center
Hudson, Swinton
2015-01-01
The movement from summative assessments, although still needed and used, has transitioned to a combination of summative and formative assessments. The focus in universities is no longer the successful completion of course material but the degree to which learning has occurred. Global needs, business input and demands, and generational cohorts have…
Engaging Practical Students through Audio Feedback
ERIC Educational Resources Information Center
Pearson, John
2018-01-01
This paper uses an action research intervention in an attempt to improve student engagement with summative feedback. The intervention delivered summative module feedback to the students as audio recordings, replacing the written method employed in previous years. The project found that students are keen on audio as an alternative to written…
NASA Astrophysics Data System (ADS)
Irwandi, Irwandi; Fashbir; Daryono
2018-04-01
The Neo-Deterministic Seismic Hazard Assessment (NDSHA) method is a seismic hazard assessment method whose advantage is a realistic physical simulation of the source, the propagation, and the geological-geophysical structure, which makes it capable of generating synthetic seismograms at the sites under observation. At the regional NDSHA scale, the strong ground motion is calculated with the 1D modal summation technique because it is computationally more efficient. In this article, we verify the synthetic seismogram calculations against field observations of the Pidie Jaya earthquake of 7 December 2016 (moment magnitude M6.5), recorded by broadband seismometers installed by BMKG (Indonesian Agency for Meteorology, Climatology and Geophysics). Some stations agree well with the observations, while others show discrepancies. Based on these results, the 1D modal summation technique is well verified for regions of thin sediment (near the pre-Tertiary basement) but less suitable for regions of thick sediment, because it excludes the amplification of seismic waves occurring within a thick sediment layer. Another approach is therefore needed, e.g., the 2D finite-difference hybrid method, which is part of the local-scale NDSHA method.
All-Atom Continuous Constant pH Molecular Dynamics With Particle Mesh Ewald and Titratable Water.
Huang, Yandong; Chen, Wei; Wallace, Jason A; Shen, Jana
2016-11-08
Development of a pH stat to properly control solution pH in biomolecular simulations has been a long-standing goal in the community. Toward this goal recent years have witnessed the emergence of the so-called constant pH molecular dynamics methods. However, the accuracy and generality of these methods have been hampered by the use of implicit-solvent models or truncation-based electrostatic schemes. Here we report the implementation of the particle mesh Ewald (PME) scheme into the all-atom continuous constant pH molecular dynamics (CpHMD) method, enabling CpHMD to be performed with a standard MD engine at a fractional added computational cost. We demonstrate the performance using pH replica-exchange CpHMD simulations with titratable water for a stringent test set of proteins, HP36, BBL, HEWL, and SNase. With a sampling time of 10 ns per replica, most pKa's are converged, yielding average absolute and root-mean-square deviations of 0.61 and 0.77, respectively, from experiment. Linear regression of the calculated vs experimental pKa shifts gives a correlation coefficient of 0.79, a slope of 1, and an intercept near 0. Analysis reveals inadequate sampling of the structure relaxation accompanying a protonation-state switch as a major source of the remaining errors, which are reduced as the simulation is prolonged. These data suggest PME-based CpHMD can be used as a general tool for pH-controlled simulations of macromolecular systems in various environments, enabling atomic insights into pH-dependent phenomena involving not only soluble proteins but also transmembrane proteins, nucleic acids, surfactants, and polysaccharides.
Re-Irradiation of Hepatocellular Carcinoma: Clinical Applicability of Deformable Image Registration.
Lee, Dong Soo; Woo, Joong Yeol; Kim, Jun Won; Seong, Jinsil
2016-01-01
This study aimed to evaluate whether the deformable image registration (DIR) method is clinically applicable to the safe delivery of re-irradiation in hepatocellular carcinoma (HCC). Between August 2010 and March 2012, 12 eligible HCC patients received re-irradiation using helical tomotherapy. The median total prescribed radiation doses at first irradiation and re-irradiation were 50 Gy (range, 36-60 Gy) and 50 Gy (range, 36-58.42 Gy), respectively. Most re-irradiation therapies (11 of 12) were administered to previously irradiated or marginal areas. Dose summation results were reproduced using DIR by rigid and deformable registration methods, and doses of organs-at-risk (OARs) were evaluated. Treatment outcomes were also assessed. Thirty-six dose summation indices were obtained for three OARs (bowel, duodenum, and stomach doses in each patient). There was no statistical difference between the two different types of DIR methods (rigid and deformable) in terms of calculated summation ΣD (0.1 cc, 1 cc, 2 cc, and max) in each OAR. The median total mean remaining liver doses (M(RLD)) in rigid- and deformable-type registration were not statistically different for all cohorts (p=0.248), although a large difference in M(RLD) was observed when there was a significant difference in spatial liver volume change between radiation intervals. One duodenal ulcer perforation developed 20 months after re-irradiation. Although current dose summation algorithms and uncertainties do not warrant accurate dosimetric results, OARs-based DIR dose summation can be usefully utilized in the re-irradiation of HCC. Appropriate cohort selection, watchful interpretation, and selective use of DIR methods are crucial to enhance the radio-therapeutic ratio.
A Comparison of Formative and Summative Evaluation.
ERIC Educational Resources Information Center
Belenski, Mary Jo
Formative and summative evaluations in education are compared, and appropriate uses of these methods in program evaluation are discussed. The main purpose of formative evaluation is to determine a level of mastery of a learning task, along with discovering any part of the task that was not mastered. In other words, formative evaluation focuses the…
On pseudo-spectral time discretizations in summation-by-parts form
NASA Astrophysics Data System (ADS)
Ruggiu, Andrea A.; Nordström, Jan
2018-05-01
Fully-implicit discrete formulations in summation-by-parts form for initial-boundary value problems must be invertible in order to provide well functioning procedures. We prove that, under mild assumptions, pseudo-spectral collocation methods for the time derivative lead to invertible discrete systems when energy-stable spatial discretizations are used.
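The summation-by-parts property such proofs rely on can be verified numerically; a minimal sketch using the standard second-order SBP first-derivative operator (a textbook construction, not the paper's pseudo-spectral collocation operator):

```python
import numpy as np

# A minimal second-order summation-by-parts (SBP) first-derivative operator:
# D = P^{-1} Q with P a diagonal norm and Q + Q^T = B = diag(-1, 0, ..., 0, 1),
# so the operator discretely mimics integration by parts.
n, h = 11, 0.1
P = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])

Q = np.zeros((n, n))
for i in range(n - 1):             # skew-symmetric interior part (central differences)
    Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
Q[0, 0], Q[-1, -1] = -0.5, 0.5     # boundary closure (one-sided differences)

D = np.linalg.solve(P, Q)

B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))     # True: the SBP property holds
```

The same identity, with a pseudo-spectral collocation matrix in place of Q, is the structure exploited in the invertibility proof.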
Formative and Summative Assessment in Higher Education: Opinions and Practices of Instructors
ERIC Educational Resources Information Center
Yüksel, Hidayet Suha; Gündüz, Nevin
2017-01-01
The purpose of this study is to examine opinions of the instructors working in three different universities in Ankara regarding assessment in education and assessment methods they use in their courses within the summative assessment and formative assessment approaches. The population is formed by instructors lecturing in School of Physical…
NASA Astrophysics Data System (ADS)
Kartashov, E. M.
1986-10-01
Analytical methods for solving boundary value problems for the heat conduction equation with heterogeneous boundary conditions on lines, on a plane, and in space are briefly reviewed. In particular, the method of dual integral equations and summator series is examined with reference to stationary processes. A table of principal solutions to dual integral equations and pair summator series is proposed which presents the known results in a systematic manner. Newly obtained results are presented in addition to the known ones.
Summative Self-Assessment in Higher Education: Implications of Its Counting towards the Final Mark
ERIC Educational Resources Information Center
Tejeiro, Ricardo A.; Gomez-Vallecillo, Jorge L.; Romero, Antonio F.; Pelegrina, Manuel; Wallace, Agustin; Emberley, Enrique
2012-01-01
Introduction: Our study aims at assessing the validity of summative criteria-referenced self-assessment in higher education, and in particular, if that validity varies when the professor counts self-assessment toward the final mark. Method: One hundred and twenty-two first year students from two groups in Teacher Education at the Universidad de…
Using Digital Representations of Practical Production Work for Summative Assessment
ERIC Educational Resources Information Center
Newhouse, C. Paul
2014-01-01
This paper presents the findings of the first phase of a three-year study investigating the efficacy of the digitisation of creative practical work as digital portfolios for the purposes of high-stakes summative assessment. At the same time the paired comparisons method of scoring was tried as an alternative to analytical rubric-based marking…
Petersson, N. Anders; Sjogreen, Bjorn
2015-07-20
We develop a fourth order accurate finite difference method for solving the three-dimensional elastic wave equation in general heterogeneous anisotropic materials on curvilinear grids. The proposed method is an extension of the method for isotropic materials previously described in the paper by Sjögreen and Petersson (2012) [11]. It discretizes the anisotropic elastic wave equation in second order formulation, using a node-centered finite difference method that satisfies the principle of summation by parts. The summation-by-parts technique results in a provably stable numerical method that is energy conserving. We also generalize and evaluate the super-grid far-field technique for truncating unbounded domains. Unlike the commonly used perfectly matched layers (PML), the super-grid technique is stable for general anisotropic materials, because it is based on a coordinate stretching combined with an artificial dissipation. Moreover, the discretization satisfies an energy estimate, proving that the numerical approximation is stable. We demonstrate by numerical experiments that sufficiently wide super-grid layers result in very small artificial reflections. Applications of the proposed method are demonstrated by three-dimensional simulations of anisotropic wave propagation in crystals.
Common arc method for diffraction pattern orientation.
Bortel, Gábor; Tegze, Miklós
2011-11-01
Very short pulses of X-ray free-electron lasers opened the way to obtaining diffraction signal from single particles beyond the radiation dose limit. For three-dimensional structure reconstruction many patterns are recorded in the object's unknown orientation. A method is described for the orientation of continuous diffraction patterns of non-periodic objects, utilizing intensity correlations in the curved intersections of the corresponding Ewald spheres, and hence named the common arc orientation method. The present implementation of the algorithm optionally takes into account Friedel's law, handles missing data and is capable of determining the point group of symmetric objects. Its performance is demonstrated on simulated diffraction data sets and verification of the results indicates a high orientation accuracy even at low signal levels. The common arc method fills a gap in the wide palette of orientation methods. © 2011 International Union of Crystallography
Birth and Death Projections Used in Present Student-Teacher Population Growth Models
ERIC Educational Resources Information Center
Okada, Tetsuo
A brief description of the methodology used in DYNAMOD II to project births and deaths is presented. The computation of death rates followed the method used by the Department of Health, Education and Welfare, Mortality Division: the death rate for age interval I through J equals the summation of the number of deaths at ages I through J divided by the summation of the population…
ERIC Educational Resources Information Center
Pereyra, Pedro; Robledo-Martinez, Arturo
2009-01-01
We explicitly show that the well-known transmission and reflection amplitudes of planar slabs, obtained via an algebraic summation of Fresnel amplitudes, are completely equivalent to those obtained from transfer matrices in the scattering approach. This equivalence makes the finite periodic systems theory a powerful alternative to the cumbersome…
ERIC Educational Resources Information Center
Pyschny, Verena; Landwehr, Markus; Hahn, Moritz; Lang-Roth, Ruth; Walger, Martin; Meister, Hartmut
2014-01-01
Purpose: The objective of the study was to investigate the influence of noise (energetic) and speech (energetic plus informational) maskers on the head shadow (HS), squelch (SQ), and binaural summation (SU) effect in bilateral and bimodal cochlear implant (CI) users. Method: Speech recognition was measured in the presence of either a competing…
Digital processing of array seismic recordings
Ryall, Alan; Birtill, John
1962-01-01
This technical letter contains a brief review of the operations which are involved in digital processing of array seismic recordings by the methods of velocity filtering, summation, cross-multiplication and integration, and by combinations of these operations (the "UK Method" and multiple correlation). Examples are presented of analyses by the several techniques on array recordings which were obtained by the U.S. Geological Survey during chemical and nuclear explosions in the western United States. Seismograms are synthesized using actual noise and Pn-signal recordings, such that the signal-to-noise ratio, onset time and velocity of the signal are predetermined for the synthetic record. These records are then analyzed by summation, cross-multiplication, multiple correlation and the UK technique, and the results are compared. For all of the examples presented, analyses by the non-linear techniques of multiple correlation and cross-multiplication of the traces on an array recording are preferred to analyses by the linear operations involved in summation and the UK Method.
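The linear summation and non-linear cross-multiplication operations compared above can be sketched on a synthetic array record; the moveout, pulse shape, and noise level here are illustrative, not the survey data:

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nch = 400, 6

# Synthetic array record: a common pulse arriving with known moveout, plus noise.
signal = np.exp(-0.5 * ((np.arange(nt) - 200) / 5.0) ** 2)
delays = np.arange(nch) * 3                  # moveout across the array, in samples
traces = np.array([np.roll(signal, d) for d in delays])
traces += 0.3 * rng.normal(size=traces.shape)

# Velocity filtering: shift each trace back by its moveout, then combine.
aligned = np.array([np.roll(t, -d) for t, d in zip(traces, delays)])
summed = aligned.sum(axis=0) / nch           # linear summation (delay-and-sum)
product = np.prod(np.abs(aligned), axis=0)   # non-linear cross-multiplication

print(int(np.argmax(summed)), int(np.argmax(product)))
```

Both combined traces peak at the true onset; the multiplicative trace suppresses incoherent noise much more strongly, at the cost of losing waveform linearity, which matches the preference for the non-linear techniques stated above.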
Wang, Karen E; Fitzpatrick, Caroline; George, David; Lane, Lindsey
2012-01-01
Summative evaluation of medical students is a critical component of the educational process. Despite extensive literature on evaluation, few studies have centered on affiliate faculty members' attitudes toward summative evaluation of students, though it has been suggested that these attitudes influence their effectiveness as evaluators. The objective is to examine affiliate faculty members' attitudes toward clinical clerkship evaluation using primarily qualitative research methods. The study used a nonexperimental research design and employed mixed methods. Data were collected through interviews, focus groups, and a questionnaire from 11 affiliate faculty members. Themes emerging from the data fell into three broad categories: (a) factors that influence grading, (b) consequences of negative evaluations, and (c) disconnections in the grading process. The quantitative portion of the study revealed important discrepancies supporting the use of qualitative methods. The study highlights faculty members' struggles with the evaluative process and emphasizes the need for improvements in evaluation tools and faculty development.
Providing Formative Feedback From a Summative Computer-aided Assessment
Sewell, Robert D. E.
2007-01-01
Objectives: To examine the effectiveness of providing formative feedback for summative computer-aided assessment. Design: Two groups of first-year undergraduate life science students in pharmacy and neuroscience who were studying an e-learning package in a common pharmacology module were presented with a computer-based summative assessment. A sheet with individualized feedback derived from each of the 5 results sections of the assessment was provided to each student. Students were asked via a questionnaire to evaluate the form and method of feedback. Assessment: The students were able to reflect on their performance and use the feedback provided to guide their future study or revision. There was no significant difference between the responses from pharmacy and neuroscience students. Students' responses on the questionnaire indicated a generally positive reaction to this form of feedback. Conclusions: Findings suggest that additional formative assessment conveyed by this style and method would be appreciated and valued by students. PMID:17533442
Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip
2014-02-28
In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performances and overall very competitive timings in an energy-force computation needed to perform a MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package which is the first implementation for a polarizable model making large scale experiments for massively parallel PBC point dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
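The fixed-point structure of the polarization equations that the Jacobi/DIIS and conjugate-gradient solvers address can be sketched on a toy linear system; the coupling matrix and its scaling are illustrative, not the AMOEBA dipole interaction tensor:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30                                       # toy "induced dipole" degrees of freedom

# Toy polarization equations mu = alpha * (E + T mu), i.e. (I - alpha T) mu = alpha E,
# solved by plain fixed-point (Jacobi/Picard) iteration.
alpha = 0.1
T = rng.normal(size=(n, n)); T = 0.5 * (T + T.T)   # symmetric coupling
T *= 0.5 / (alpha * np.linalg.norm(T, 2))          # scale so the iteration contracts

E = rng.normal(size=n)
mu = np.zeros(n)
for _ in range(200):
    mu = alpha * (E + T @ mu)

# The iterate converges to the direct solution of the linear system.
print(np.allclose(mu, np.linalg.solve(np.eye(n) - alpha * T, alpha * E)))  # True
```

Preconditioned CG or DIIS extrapolation accelerates exactly this iteration; the appeal in an SPME setting is that each sweep needs only matrix-vector products, which parallelize well.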
Outside opportunities and costs incurred by others.
Roes, Frans L
2007-07-21
Descriptions of interactions between ants and their 'guests' serve to illustrate the thesis that Ewald's theory of the 'evolution of virulence' not only applies to interactions between micro-organisms causing infectious diseases and their hosts, but also to interactions between individuals belonging to differing species. For instance, the prediction is put forward and discussed that guests of army ants are, relative to guests of other species of ants, more often parasitic. A key variable in Ewald's theory is 'transmissibility'. It shows some resemblance to similar variables used in micro-economic theory and in Emerson's sociological Power-Dependence Relations theory. In this article, this variable is called 'outside opportunities'. In an A-B relation, an outside opportunity for A is anything which constitutes an alternative to what B can provide. It is concluded that in A-B interactions, the more outside opportunities are available to A, the more costs are incurred by B. Differences and similarities between this idea and Game Theory are discussed.
Seeing visual word forms: spatial summation, eccentricity and spatial configuration.
Kao, Chien-Hui; Chen, Chien-Chung
2012-06-01
We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
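The summation regimes reported above correspond to straight lines of slope -1 and -1/2 on log-log axes; estimating such a slope is a linear fit in log coordinates. A sketch with synthetic thresholds (the numbers are illustrative, not the measured data):

```python
import numpy as np

# Toy thresholds following the discrimination regime: threshold ~ size^(-1)
# below the critical size (summation within a local receptive field).
size = np.logspace(-1, 0, 10)          # stimulus sizes (arbitrary units)
threshold = 0.05 / size                # exact slope -1 regime

# Log-log slope via a first-order polynomial fit.
slope = np.polyfit(np.log10(size), np.log10(threshold), 1)[0]
print(round(slope, 2))                 # -1.0
```

Fitting piecewise slopes of -1, -1/2, and 0 over successive size ranges is how the critical sizes separating the regimes are located.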
Group in-course assessment promotes cooperative learning and increases performance.
Pratten, Margaret K; Merrick, Deborah; Burr, Steven A
2014-01-01
The authors describe and evaluate a method to motivate medical students to maximize the effectiveness of dissection opportunities by using In-Course-Assessments (ICAs) to encourage teamwork. A student's final mark was derived by combining the group dissection mark, group mark for questions, and their individual question mark. An analysis of the impact of the ICA was performed by comparing end of module practical summative marks in student cohorts who had, or had not, participated in the ICAs. Summative marks were compared by two-way ANOVA followed by Dunnett's test, or by repeated measures ANOVA, as appropriate. A cohort of medical students was selected that had experienced both practical classes without (year one) and with the new ICA structure (year two). Comparison of summative year one and year two marks illustrated an increased improvement in year two performance in this cohort. A significant increase was also noted when comparing this cohort with five preceding year two cohorts who had not experienced the ICAs (P <0.0001). To ensure that variation in the practical summative examination was not impacting on the data, a comparison was made between three cohorts who had performed the same summative examination. Results show that students who had undertaken weekly ICAs showed significantly improved summative marks, compared with those who did not (P <0.0001). This approach to ICA promotes engagement with learning resources in an active, team-based, cooperative learning environment. © 2013 American Association of Anatomists.
NASA Astrophysics Data System (ADS)
Caillol, J. M.; Levesque, D.
1992-01-01
The reliability and the efficiency of a new method suitable for the simulations of dielectric fluids and ionic solutions is established by numerical computations. The efficiency depends on the use of a simulation cell which is the surface of a four-dimensional sphere. The reliability originates from a charge-charge potential solution of the Poisson equation in this confining volume. The computation time, for systems of a few hundred molecules, is reduced by a factor of 2 or 3 compared to that of a simulation performed in a cubic volume with periodic boundary conditions and the Ewald charge-charge potential.
Thin Film Evaporation Model with Retarded Van Der Waals Interaction (Postprint)
2013-11-01
The retarded van der Waals interaction is derived from Hamaker theory, i.e., the summation of retarded pair potentials over all molecules for a given geometry; the interaction force is the negative derivative of the interaction energy with respect to distance. The method due to Hamaker essentially sums all pair…
2012-01-01
Background The Script Concordance Test (SCT) has not been reported in summative assessment of students across the multiple domains of a medical curriculum. We report the steps used to build a test for summative assessment in a medical curriculum. Methods A 51-case, 158-question, multidisciplinary paper was constructed to assess clinical reasoning in the 5th year. 10–16 experts in each of 7 discipline-based reference panels answered questions on-line. A multidisciplinary group considered reference panel data and data from a volunteer group of 6th Years, who sat the same test, to determine the passing score for the 5th Years. Results The mean (SD) scores were 63.6 (7.6) and 68.6 (4.8) for the 6th Year (n = 23, alpha = 0.78) and 5th Year (n = 132, alpha = 0.62) groups (p < 0.05), respectively. The passing score was set at 4 SD from the expert mean. Four students failed. Conclusions The SCT may be a useful method to assess clinical reasoning in medical students in multidisciplinary summative assessments. Substantial investment in training of faculty and students and in the development of questions is required. PMID:22571351
Fast summation of divergent series and resurgent transseries from Meijer-G approximants
NASA Astrophysics Data System (ADS)
Mera, Héctor; Pedersen, Thomas G.; Nikolić, Branislav K.
2018-05-01
We develop a resummation approach based on Meijer-G functions and apply it to approximate the Borel sum of divergent series and the Borel-Écalle sum of resurgent transseries in quantum mechanics and quantum field theory (QFT). The proposed method is shown to vastly outperform the conventional Borel-Padé and Borel-Padé-Écalle summation methods. The resulting Meijer-G approximants are easily parametrized by means of a hypergeometric ansatz and can be thought of as a generalization to arbitrary order of the Borel-hypergeometric method [Mera et al., Phys. Rev. Lett. 115, 143001 (2015), 10.1103/PhysRevLett.115.143001]. Here we demonstrate the accuracy of this technique in various examples from quantum mechanics and QFT, traditionally employed as benchmark models for resummation, such as zero-dimensional ϕ4 theory; the quartic anharmonic oscillator; the calculation of critical exponents for the N -vector model; ϕ4 with degenerate minima; self-interacting QFT in zero dimensions; and the summation of one- and two-instanton contributions in the quantum-mechanical double-well problem.
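The Borel-Padé benchmark against which the Meijer-G approximants are compared can be sketched compactly. The following is an illustrative Python sketch, not the authors' code: it Padé-resums the Borel transform of Euler's divergent series Σ (-1)^k k! x^k and evaluates the inverse Laplace integral numerically. The function names and the [1/1] Padé order are illustrative choices; for this particular series the [1/1] approximant of the Borel transform happens to be exact.

```python
import math
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n]; returns
    numerator and denominator coefficients in ascending powers (q[0] = 1)."""
    A = np.array([[c[k - j] if k - j >= 0 else 0.0 for j in range(1, n + 1)]
                  for k in range(m + 1, m + n + 1)], dtype=float)
    rhs = -np.array(c[m + 1:m + n + 1], dtype=float)
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

def borel_pade_sum(c, x, m=1, n=1, tmax=40.0, dt=1e-3):
    """Approximate sum_k c_k x^k: Pade-approximate the Borel transform
    B(t) = sum_k c_k t^k / k!, then invert via f(x) = int_0^inf e^-t B(x t) dt
    (trapezoidal rule on a truncated interval)."""
    b = [ck / math.factorial(k) for k, ck in enumerate(c)]
    p, q = pade(b, m, n)
    t = np.arange(0.0, tmax, dt)
    integrand = np.exp(-t) * np.polyval(p[::-1], x * t) / np.polyval(q[::-1], x * t)
    return (0.5 * (integrand[0] + integrand[-1]) + integrand[1:-1].sum()) * dt

# Euler's divergent series sum_k (-1)^k k! x^k; its Borel sum at x = 1 is
# int_0^inf e^-t / (1 + t) dt = 0.596347...
coeffs = [(-1) ** k * math.factorial(k) for k in range(3)]
val = borel_pade_sum(coeffs, 1.0)
```

For less forgiving series the choice of Padé order [m/n] matters, which is precisely the fragility the Meijer-G construction is designed to overcome.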
Dimensional transitions in thermodynamic properties of ideal Maxwell-Boltzmann gases
NASA Astrophysics Data System (ADS)
Aydin, Alhun; Sisman, Altug
2015-04-01
An ideal Maxwell-Boltzmann gas confined in various rectangular nanodomains is considered under quantum size effects. Thermodynamic quantities are calculated from their relations with the partition function, which consists of triple infinite summations over momentum states in each direction. To obtain analytical expressions, the summations are converted to integrals for macrosystems by a continuum approximation, which fails at the nanoscale. To avoid both the numerical calculation of summations and the failure of their integral approximations at the nanoscale, a method which gives an analytical expression for the single-particle partition function (SPPF) is proposed. It is shown that a dimensional transition in momentum space occurs at a certain magnitude of confinement. Therefore, it becomes possible to represent the SPPF by lower-dimensional analytical expressions rather than by numerical calculation of summations. Considering rectangular domains with different aspect ratios, the results of the derived expressions are compared with those of the summation forms of the SPPF. It is shown that the analytical expressions for the SPPF give very precise results, with maximum relative errors of around 1%, 2% and 3% exactly at the transition point for single, double and triple transitions, respectively. Based on dimensional transitions, expressions for free energy, entropy, internal energy, chemical potential, heat capacity and pressure are given analytically, valid at any scale.
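The failure of the continuum approximation that motivates the SPPF method can be illustrated with a one-directional toy partition function. This is a sketch under simplifying assumptions: the dimensionless confinement parameter alpha stands in for the combination h²/(8mL²k_BT), and only one momentum direction is summed.

```python
import math

def sppf_sum(alpha, nmax=2000):
    """Exact one-directional partition function: summation over discrete
    momentum states, z = sum_{n>=1} exp(-alpha n^2)."""
    return sum(math.exp(-alpha * n * n) for n in range(1, nmax + 1))

def sppf_continuum(alpha):
    """Continuum (integral) approximation valid only for weak confinement:
    z ~ (1/2) sqrt(pi/alpha) - 1/2."""
    return 0.5 * math.sqrt(math.pi / alpha) - 0.5

# Weak confinement (macroscale): the two agree to high precision.
# Strong confinement (nanoscale): the integral approximation fails,
# even turning negative, while the summation stays positive.
```

This is the regime change the abstract refers to: once alpha is of order one, the discrete sum must be kept (or replaced by the proposed analytical SPPF expressions).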
Vierck, C J; Cannon, R L; Fry, G; Maixner, W; Whitsel, B L
1997-08-01
Temporal summation of sensory intensity was investigated in normal subjects using novel methods of thermal stimulation. A Peltier thermode was heated and then applied in a series of brief (700 ms) contacts to different sites on the glabrous skin of either hand. Repetitive contacts on the thenar or hypothenar eminence, at interstimulus intervals (ISIs) of 3 s, progressively increased the perceived intensity of a thermal sensation that followed each contact at an onset latency > 2 s. Temporal summation of these delayed (late) sensations was proportional to thermode temperature over a range of 45-53 degrees C, progressing from a nonpainful level (warmth) to painful sensations that could be rated as very strong after 10 contacts. Short-latency pain sensations rarely were evoked by such stimuli and never attained levels substantially above pain threshold for the sequences and temperatures presented. Temporal summation produced by brief contacts was greater in rate and amount than increases in sensory intensity resulting from repetitive ramping to the same temperature by a thermode in constant contact with the skin. Variation of the interval between contacts revealed a dependence of sensory intensity on interstimulus interval that is similar to physiological demonstrations of windup, where increasing frequencies of spike train activity are evoked from spinal neurons by repetitive activation of unmyelinated nociceptors. However, substantial summation at repetition rates of > or = 0.33 Hz was observed for temperatures that produced only late sensations of warmth when presented at frequencies < 0.16 Hz. Measurements of subepidermal skin temperature from anesthetized monkeys revealed different time courses for storage and dissipation of heat by the skin than for temporal summation and decay of sensory intensity for the human subjects. 
For example, negligible heat loss occurred during a 6-s interval between two trials of 10 contacts at 0.33 Hz, but ratings of sensory magnitude decreased from very strong levels of pain to sensations of warmth during the same interval. Evidence that temporal summation of sensory intensity during series of brief contacts relies on central integration, rather than on sensitization of peripheral receptors, was obtained using two approaches. In the first, a moderate degree of temporal summation was observed during alternating stimulation of adjacent but nonoverlapping skin sites at 0.33 Hz. Second, temporal summation was significantly attenuated by prior administration of dextromethorphan, an N-methyl-D-aspartate receptor antagonist.
Using a fuzzy comprehensive evaluation method to determine product usability: A test case
Zhou, Ronggang; Chan, Alan H. S.
2016-01-01
BACKGROUND: In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitable vague judgments from the multiple stages of the product evaluation process. OBJECTIVE AND METHODS: In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities of the fuzzy approach and two typical conventional methods combining metrics based on percentages. RESULTS AND CONCLUSIONS: This case study showed that the fuzzy evaluation technique can be applied successfully for combining summative usability testing data into an overall usability quality for the network software evaluated. The greater confidence interval widths of the conventional percentage-based methods, the equally weighted percentage average and the weighted percentage average, relative to the fuzzy approach verified the strength of the fuzzy method. PMID:28035942
Odor detection of mixtures of homologous carboxylic acids and coffee aroma compounds by humans.
Miyazawa, Toshio; Gallagher, Michele; Preti, George; Wise, Paul M
2009-11-11
Mixture summation among homologous carboxylic acids, that is, the relationship between detection probabilities for mixtures and detection probabilities for their unmixed components, varies with similarity in carbon-chain length. The current study examined detection of acetic, butyric, hexanoic, and octanoic acids mixed with three other model odorants that differ greatly from the acids in both structure and odor character, namely, 2-hydroxy-3-methylcyclopent-2-en-1-one, furan-2-ylmethanethiol, and (3-methyl-3-sulfanylbutyl) acetate. Psychometric functions were measured for both single compounds and binary mixtures (2 of 5, forced-choice method). An air dilution olfactometer delivered stimuli, with vapor-phase calibration using gas chromatography-mass spectrometry. Across the three odorants that differed from the acids, acetic and butyric acid showed approximately additive (or perhaps even supra-additive) summation at low perithreshold concentrations, but subadditive interactions at high perithreshold concentrations. In contrast, the medium-chain acids showed subadditive interactions across a wide range of concentrations. Thus, carbon-chain length appears to influence not only summation with other carboxylic acids but also summation with at least some unrelated compounds.
NASA Technical Reports Server (NTRS)
Bennett, Floyd V.; Yntema, Robert T.
1959-01-01
Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.
NASA Astrophysics Data System (ADS)
Irwandi; Rusydy, Ibnu; Muksin, Umar; Rudyanto, Ariska; Daryono
2018-05-01
Wave vibration confined within a boundary produces stationary-wave solutions in discrete states called modes. There are many physics applications related to modal solutions, such as air column resonance, string vibration, and the emission spectrum of atomic hydrogen. Naturally, energy is distributed among several modes, so that the complete calculation is obtained from the sum over all modes, called modal summation. The modal summation technique was applied to simulate surface wave propagation above the crustal structure of the earth. The method is computationally efficient because it uses a 1D structural model, so it is not necessary to calculate the overall wave propagation. The simulation results for the magnitude 6.5 Pidie Jaya earthquake show that the spectral response from the modal summation technique correlates well with the waveform data observed by seismometers and accelerometers, especially at the KCSI (Kotacane) station. At the LASI (Langsa) station, on the other hand, the simulated response is lower than observed. The spectral response is underestimated there because the station is located in a thick sedimentary basin, which causes an amplification effect. This is a limitation of the modal summation technique, which should therefore be combined with a finite numerical simulation on a 2D local structural model of the basin.
Relationship Between Binocular Summation and Stereoacuity After Strabismus Surgery
KATTAN, Jaffer M.; VELEZ, Federico G.; DEMER, Joseph L.
2016-01-01
Purpose: To describe the relationship between binocular summation and stereoacuity after strabismus surgery. Design: Prospective case series. Methods: Setting: Stein Eye Institute, University of California, Los Angeles. Patient population: Pediatric strabismic patients who underwent strabismus surgery between 2010 and 2015. Observation procedures: Early Treatment Diabetic Retinopathy Study visual acuity, Sloan low-contrast acuity (LCA, 2.5% and 1.25%), and Randot stereoacuity 2 months following surgical correction of strabismus. Main outcome measures: The relationship between binocular summation (BiS), calculated as the difference between the binocular visual acuity score and that of the better eye, and stereoacuity. Results: A total of 130 post-operative strabismic patients were studied. The relationship between BiS and stereoacuity was studied by Spearman correlation. There were significant correlations between BiS for 2.5% LCA and both near and distance stereoacuity (p = 0.006 and 0.009). BiS for 1.25% LCA was also significantly correlated with near stereoacuity (p = 0.04). Near stereoacuity and BiS for 2.5% and 1.25% LCA were significantly dependent (Pearson chi-squared, p = 0.006 and p = 0.026). Patients with stereoacuity demonstrated significantly more BiS in 2.5% LCA, by 2.7 (p = 0.022) and 3.1 (p = 0.014) letters, than did those without near or distance stereoacuity, respectively. Conclusions: These findings demonstrate that stereopsis and binocular summation are significantly correlated in patients who have undergone surgical correction of strabismus. PMID:26921805
NASA Technical Reports Server (NTRS)
Anderson, L. R.; Miller, R. D.
1979-01-01
The LOADS computer program L218 which calculates dynamic load coefficient matrices utilizing the force summation method is described. The load equations are derived for a flight vehicle in straight and level flight and excited by gusts and/or control motions. In addition, sensor equations are calculated for use with an active control system. The load coefficient matrices are calculated for the following types of loads: (1) translational and rotational accelerations, velocities, and displacements; (2) panel aerodynamic forces; (3) net panel forces; and (4) shears, bending moments, and torsions.
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. The epidemic mumps of 2008 was predicted and warned against by calculating a 7-day moving summation, removing the effect of weekends from the daily reported mumps cases during 2005-2008, and applying exponential smoothing to the data from 2005 to 2007. The Holt-Winters exponential smoothing performed well: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timely rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
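The two preprocessing ingredients described above are simple to sketch. Note the hedge: the study uses Holt-Winters smoothing, which includes seasonal terms; the sketch below shows only a 7-day moving summation and the non-seasonal Holt variant (level plus trend), with illustrative parameter values.

```python
def moving_summation(x, window=7):
    """7-day moving summation, used to smooth day-of-week reporting effects."""
    return [sum(x[max(0, i - window + 1):i + 1]) for i in range(len(x))]

def holt(x, alpha=0.3, beta=0.1):
    """Holt's double exponential smoothing (level + trend); returns the
    one-step-ahead forecast made before each observation arrives."""
    level, trend = x[0], x[1] - x[0]
    forecasts = [level]
    for obs in x[1:]:
        forecasts.append(level + trend)          # forecast before seeing obs
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return forecasts
```

A warning would then be raised when an observed count exceeds its forecast by more than a chosen threshold, which is where the sensitivity/specificity trade-off reported in the abstract comes from.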
ERIC Educational Resources Information Center
Martorana, S. V., Ed.; And Others
This publication contains the text of the main presentations and the highlights of discussion groups from the Ninth Annual Pennsylvania Conference on Postsecondary Occupational Education. The conference theme was "Programming Postsecondary Occupational Education." Ewald Nyquist, the first speaker, delineated the problems faced by…
Torrens, George Edward
2018-01-01
Summative content analysis was used to define methods and heuristics from each case study. The review process was in two parts: (1) a literature review to identify conventional research methods and (2) a summative content analysis of published case studies, based on the identified methods and heuristics, to suggest an order and priority of where and when they were used. Over 200 research and design methods and design heuristics were identified. From the review of the 20 case studies, 42 were identified as being applied. The majority of methods and heuristics were applied in phase two, market choice. There appeared to be a disparity between the limited number of methods frequently used, under 10 within the 20 case studies, and the hundreds available. Implications for Rehabilitation The communication highlights a number of issues that have implications for those involved in assistive technology new product development: •The study defined over 200 well-established research and design methods and design heuristics that are available for use by those who specify and design assistive technology products, which provides a comprehensive reference list for practitioners in the field; •The review within the study suggests only a limited number of research and design methods are regularly used by industrial design focused assistive technology new product developers; and, •Debate is required among practitioners working in this field to reflect on how a wider range of potentially more effective methods and heuristics may be incorporated into daily working practice.
NASA Astrophysics Data System (ADS)
Caillol, J. M.
1992-01-01
We generalize previous work [J. Chem. Phys. 94, 597 (1991)] on an alternative to the Ewald method for the numerical simulations of Coulomb fluids. This new method consists in using as a simulation cell the three-dimensional surface of a four-dimensional sphere, or hypersphere. Here, we consider the case of polar fluids and electrolyte solutions. We derive all the formal expressions which are needed for numerical simulations of such systems. It includes a derivation of the multipolar interactions on a hypersphere, the expansion of the pair-correlation functions on rotational invariants, the expression of the static dielectric constant of a polar liquid, the expressions of the frequency-dependent conductivity and dielectric constant of an ionic solution, and the derivation of the Stillinger-Lovett sum rules for conductive systems.
Zhou, Ruhong
2004-05-01
A highly parallel replica exchange method (REM) that couples with a newly developed molecular dynamics algorithm, particle-particle particle-mesh Ewald (P3ME)/RESPA, has been proposed for efficient sampling of the protein folding free energy landscape. The algorithm is then applied to two separate protein systems, beta-hairpin and the designed protein Trp-cage. The all-atom OPLSAA force field with an explicit solvent model is used for both protein folding simulations. Up to 64 replicas of solvated protein systems are simulated in parallel over a wide range of temperatures. The combined trajectories in temperature and configurational space allow a replica to overcome free energy barriers present at low temperatures. These large scale simulations reveal detailed results on folding mechanisms, intermediate state structures, thermodynamic properties and temperature dependences for both protein systems.
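The replica-exchange acceptance rule at the heart of such simulations is compact enough to sketch. The P3ME/RESPA dynamics themselves are far beyond a short example; the function names below are illustrative, and only the standard Metropolis swap criterion for the canonical ensemble is shown.

```python
import math
import random

def swap_probability(beta_i, beta_j, e_i, e_j):
    """Metropolis acceptance probability for exchanging configurations
    between replicas at inverse temperatures beta_i, beta_j with
    potential energies e_i, e_j: min(1, exp((beta_i - beta_j)(e_i - e_j)))."""
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))

def attempt_swaps(betas, energies, rng=random.random):
    """One sweep of neighbor-swap attempts; returns the permuted energy list
    (each accepted swap hands a configuration to the adjacent temperature)."""
    e = list(energies)
    for k in range(len(betas) - 1):
        if rng() < swap_probability(betas[k], betas[k + 1], e[k], e[k + 1]):
            e[k], e[k + 1] = e[k + 1], e[k]
    return e
```

Because the criterion satisfies detailed balance with respect to the product of the replicas' Boltzmann distributions, a low-temperature replica can escape a free energy barrier by temporarily visiting high temperature, which is the mechanism the abstract exploits.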
Rejecting probability summation for radial frequency patterns, not so Quick!
Baldwin, Alex S; Schmidtmann, Gunnar; Kingdom, Frederick A A; Hess, Robert F
2016-05-01
Radial frequency (RF) patterns are used to assess how the visual system processes shape. They are thought to be detected globally. This is supported by studies that have found summation for RF patterns to be greater than what is possible if the parts were being independently detected and performance only then improved with an increasing number of cycles by probability summation between them. However, the model of probability summation employed in these previous studies was based on High Threshold Theory (HTT), rather than Signal Detection Theory (SDT). We conducted rating scale experiments to investigate the receiver operating characteristics. We find these are of the curved form predicted by SDT, rather than the straight lines predicted by HTT. This means that to test probability summation we must use a model based on SDT. We conducted a set of summation experiments finding that thresholds decrease as the number of modulated cycles increases at approximately the same rate as previously found. As this could be consistent with either additive or probability summation, we performed maximum-likelihood fitting of a set of summation models (Matlab code provided in our Supplementary material) and assessed the fits using cross validation. We find we are not able to distinguish whether the responses to the parts of an RF pattern are combined by additive or probability summation, because the predictions are too similar. We present similar results for summation between separate RF patterns, suggesting that the summation process there may be the same as that within a single RF. Copyright © 2016 Elsevier Ltd. All rights reserved.
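Under HTT with a Quick (Weibull) psychometric function, probability summation has a closed form, and that form is what predicts the threshold-versus-cycles slope the authors re-examine. The sketch below is illustrative (it is not the paper's SDT fitting code, which the authors provide in their Supplementary material); parameter names are assumptions.

```python
import math

def quick_detect(c, a, beta):
    """Quick (Weibull) detection probability for a single mechanism with
    threshold parameter a and steepness beta."""
    return 1.0 - math.exp(-((c / a) ** beta))

def prob_summation_detect(c, n, a, beta):
    """HTT probability summation over n identical independent mechanisms:
    P = 1 - (1 - P_single)^n = 1 - exp(-n (c/a)^beta)."""
    return 1.0 - math.exp(-n * (c / a) ** beta)

def threshold(n, a, beta, crit=0.5):
    """Contrast at criterion performance under HTT probability summation:
    c = a * (-ln(1 - crit) / n)**(1/beta), i.e. thresholds fall as n^(-1/beta)."""
    return a * (-math.log(1.0 - crit) / n) ** (1.0 / beta)
```

The paper's point is that this tidy n^(-1/beta) prediction rests on HTT; once the curved ROCs implicate SDT, probability summation must instead be computed from signal-detection decision rules, where its predictions become hard to distinguish from additive summation.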
Suprathreshold contrast summation over area using drifting gratings.
McDougall, Thomas J; Dickinson, J Edwin; Badcock, David R
2018-04-01
This study investigated contrast summation over area for moving targets applied to a fixed-size contrast pedestal, a technique originally developed by Meese and Summers (2007) to demonstrate strong spatial summation of contrast for static patterns at suprathreshold contrast levels. Target contrast increments (drifting gratings) were applied to either the entire 20% contrast pedestal (a full fixed-size drifting grating), or in the configuration of a checkerboard pattern in which the target increment was applied to every alternate check region. These checked stimuli are known as "Battenberg patterns" and the sizes of the checks were varied (within a fixed overall area), across conditions, to measure summation behavior. Results showed that sensitivity to an increment covering the full pedestal was significantly higher than that for the Battenberg patterns (areal summation). Two observers showed strong summation across all check sizes (0.71°-3.33°), and for two other observers the summation ratio dropped to levels consistent with probability summation once check size reached 2.00°. Therefore, areal summation with moving targets does operate at high contrast, and is subserved by relatively large receptive fields covering a square area extending up to at least 3.33° × 3.33° for some observers. Previous studies in which the spatial structure of the pedestal and target covaried were unable to demonstrate spatial summation, potentially due to increasing amounts of suppression from gain-control mechanisms which increases as pedestal size increases. This study shows that when this is controlled, by keeping the pedestal the same across all conditions, extensive summation can be demonstrated.
Binaural Loudness Summation in the Hearing Impaired.
ERIC Educational Resources Information Center
Hawkins, David B.; And Others
1987-01-01
Binaural loudness summation was measured using three different paradigms with 10 normally hearing and 20 bilaterally symmetrical high-frequency sensorineural hearing loss subjects. Binaural summation increased with presentation level using the loudness matching procedure, with values in the 6-10 dB range. Summation decreased with level using the…
Perceptions and attitudes of formative assessments in middle-school science classes
NASA Astrophysics Data System (ADS)
Chauncey, Penny Denyse
No Child Left Behind mandates utilizing summative assessment to measure schools' effectiveness. The problem is that summative assessment measures students' knowledge without depth of understanding. The goal of public education, however, is to prepare students to think critically at higher levels. The purpose of this study was to examine any difference between formative assessment incorporated in instruction as opposed to the usual, more summative methods in terms of attitudes and academic achievement of middle-school science students. Maslow's theory emphasizes that individuals must have basic needs met before they can advance to higher levels. Formative assessment enables students to master one level at a time. The research questions focused on whether statistically significant differences existed between classrooms using these two types of assessments on academic tests and an attitude survey. Using a quantitative quasi-experimental control-group design, data were obtained from a sample of 430 middle-school science students in 6 classes. One control and 2 experimental classes were assigned to each teacher. Results of the independent t tests revealed academic achievement was significantly greater for groups that utilized formative assessment. No significant difference in attitudes was noted. Recommendations include incorporating formative assessment results with the summative results. Findings from this study could contribute to positive social change by prompting educational stakeholders to examine local and state policies on curriculum as well as funding based on summative scores alone. Use of formative assessment can lead to improved academic success.
The Atomic Origin of the Reflection Law
ERIC Educational Resources Information Center
Prytz, Kjell
2016-01-01
It will be demonstrated how the reflection law may be derived on an atomic basis using the plane wave approximation together with Huygens' principle. The model utilized is based on the electric dipole character of matter originating from its molecular constituents. This approach is not new but has, since it was first introduced by Ewald and Oseen…
On Top of the World: Chevrolet Television Advertising 1955 to 1965.
ERIC Educational Resources Information Center
Wicks, Jan L.
Through a review of data from the Campbell-Ewald Advertising Agency and its creative director, the Chevrolet car company, and a review of the award-winning television commercials, this paper explores the successful relationship between Chevrolet and that agency from 1955 to 1965. Following an introduction and a list of the questions asked about both…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poplawski, L; Li, T; Chino, J
Purpose: In brachytherapy, structures surrounding the target have the potential to move between treatments and receive unknown dose. Deformable image registration could overcome this challenge through dose accumulation. This study uses two possible deformable dose summation techniques and compares the results to the point dose summation currently performed in clinic. Methods: Data for ten patients treated with a Syed template were imported into the MIM software (Cleveland, OH). The deformable registration was applied to structures by masking other image data to a single intensity. The registration flow consisted of the following steps: 1) mask CTs so that each of the structures-of-interest had one unique intensity; 2) perform applicator-based rigid registration; 3) perform deformable registration; 4) refine the registration by changing local alignments manually; 5) repeat steps 1 to 3 until the desired structure is adequately deformed; 6) transfer each deformed contour to the first CT. The deformed structure accuracy was determined by a dice similarity coefficient (DSC) comparison with the first fraction. Two dose summation techniques were investigated: a deformation and recalculation on the structure, and a dose deformation and accumulation method. Point doses were used as comparison values. Results: The Syed deformations have DSCs ranging from 0.53 to 0.97 and from 0.75 to 0.95 for the bladder and rectum, respectively. For the bladder, contour deformation addition ranged from -34.8% to 0.98% and dose deformation accumulation ranged from -35% to 29.3% difference from clinical calculations. For the rectum, contour deformation addition ranged from -5.2% to 16.9% and dose deformation accumulation ranged from -29.1% to 15.3% change. Conclusion: Deforming dose for summation leads to different volumetric doses than when dose is recalculated on deformed structures, raising concerns about the accuracy of the deformed dose. DSC alone cannot be used to establish the accuracy of a deformation for brachytherapy dose summation purposes.
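The Dice similarity coefficient used to grade the deformations has a one-line definition, sketched here for binary masks (illustrative; clinical systems such as MIM compute it on voxelized contours).

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|), ranging from 0 (disjoint) to 1."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

As the abstract's conclusion notes, a high DSC only certifies geometric overlap of the deformed contour, not that the dose mapped through the deformation is trustworthy.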
ERIC Educational Resources Information Center
Agboola, Oluwaseun Omowunmi; Hiatt, Anna C.
2017-01-01
Summative assessments are customarily used to evaluate ultimate student outcomes and typically occur less frequently during instruction than formative assessments. Few studies have examined how the use of summative assessments may influence student learning among at-risk groups of students. Summative assessments are typically used to evaluate how…
NASA Astrophysics Data System (ADS)
Takiguchi, Yu; Toyoda, Haruyoshi
2017-11-01
We report here an algorithm for calculating a hologram to be employed in a high-access speed microscope for observing sensory-driven synaptic activity across all inputs to single living neurons in an intact cerebral cortex. The system is based on holographic multi-beam generation using a two-dimensional phase-only spatial light modulator to excite multiple locations in three dimensions with a single hologram. The hologram was calculated with a three-dimensional weighted iterative Fourier transform method using the Ewald sphere restriction to increase the calculation speed. Our algorithm achieved good uniformity of three dimensionally generated excitation spots; the standard deviation of the spot intensities was reduced by a factor of two compared with a conventional algorithm.
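The weighted iterative Fourier transform at the core of the algorithm can be illustrated in 2D. This is a simplified sketch, not the authors' method: the Ewald-sphere restriction and the 3D spot placement are omitted, and a plain far-field (FFT) model with per-spot re-weighting stands in for them. All names and the grid/spot choices are illustrative.

```python
import numpy as np

def weighted_gs(shape, spots, iters=30):
    """Phase-only hologram for multiple far-field spots via a weighted
    Gerchberg-Saxton iteration: alternate between the phase-only constraint
    in the hologram plane and the spot-amplitude constraint in the far field,
    boosting weights of dim spots to equalize intensities."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, shape)
    weights = np.ones(len(spots))
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))
        amps = np.array([abs(far[s]) for s in spots])
        weights *= amps.mean() / amps            # boost the weak spots
        target = np.zeros(shape, complex)
        for w, s in zip(weights, spots):
            target[s] = w * far[s] / abs(far[s])  # keep phase, reset amplitude
        phase = np.angle(np.fft.ifft2(target))
    far = np.fft.fft2(np.exp(1j * phase))
    intensity = np.abs(far) ** 2
    return phase, np.array([intensity[s] for s in spots]) / intensity.sum()

phase, spot_fractions = weighted_gs((64, 64), [(10, 20), (40, 7), (30, 50)])
```

The re-weighting step is what delivers the uniformity improvement the abstract reports: without it, plain Gerchberg-Saxton concentrates energy in the spots but with noticeably unequal intensities.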
3D RISM theory with fast reciprocal-space electrostatics.
Heil, Jochen; Kast, Stefan M
2015-03-21
The calculation of electrostatic solute-solvent interactions in 3D RISM ("three-dimensional reference interaction site model") integral equation theory is recast in a form that allows for a computational treatment analogous to the "particle-mesh Ewald" formalism used for molecular simulations. In addition, relations that connect 3D RISM correlation functions and interaction potentials with thermodynamic quantities such as the chemical potential and the average solute-solvent interaction energy are reformulated in a way that completely avoids calculations of expensive real-space electrostatic terms on the 3D grid. These methodical enhancements allow for both a significant speedup, particularly for large solute systems, and a smoother convergence of predicted thermodynamic quantities with respect to box size, as illustrated for several benchmark systems.
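The particle-mesh Ewald treatment referenced above rests on splitting the Coulomb kernel into a short-range part summed in real space and a smooth long-range part summed in reciprocal space. A minimal sketch of that splitting (not the authors' 3D RISM implementation; the splitting parameter value is arbitrary) checks the identity 1/r = erfc(αr)/r + erf(αr)/r numerically:

```python
import math

def coulomb_split(r, alpha):
    """Split 1/r into a short-range (real-space) and a smooth
    long-range (reciprocal-space) component, as in Ewald summation."""
    short = math.erfc(alpha * r) / r   # decays rapidly; truncated in real space
    long_ = math.erf(alpha * r) / r    # smooth at small r; handled in k-space
    return short, long_

alpha = 0.8  # illustrative splitting parameter
for r in (0.5, 1.0, 2.0, 5.0):
    s, l = coulomb_split(r, alpha)
    assert abs((s + l) - 1.0 / r) < 1e-12   # the split is exact
# beyond a few multiples of 1/alpha the short-range part is negligible,
# which is what makes a real-space cutoff safe
assert coulomb_split(5.0, alpha)[0] < 1e-6
```

The same decomposition underlies every Ewald-type method: only the smooth `erf` part needs the periodic (reciprocal-space) machinery.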
NASA Astrophysics Data System (ADS)
Zhan, W.; Sun, Y.
2015-12-01
High-frequency strong-motion data, especially near-field acceleration data, have been recorded widely by different observation networks around the world. Due to tilting and many other causes, recordings from these seismometers usually suffer baseline drift when a large earthquake occurs, and it is hard to obtain a reasonable and precise co-seismic displacement through simple double integration. Here we present a combined method using the wavelet transform and several simple linear procedures. Owing to the lack of dense high-rate GNSS data in most regions of the world, we do not include GNSS data in the method itself but use them as a benchmark for our results. This semi-automatic method decomposes a raw signal into two parts, a summation of high ranks and a summation of low ranks, using a cubic B-spline wavelet decomposition procedure. Independent linear corrections are applied to these two summations, which are then recombined to recover a usable and reasonable result. We use data from the 2008 Wenchuan earthquake and choose stations with a nearby GPS recording to validate the method. Nearly all of them yield compatible co-seismic displacements when compared with GPS stations or field surveys. Since seismometer stations and GNSS stations in the Chinese observation systems are sometimes quite far from each other, we also test the method on other earthquakes (the 1999 Chi-Chi earthquake and the 2011 Tohoku earthquake). For the 2011 Tohoku earthquake we introduce GPS recordings into the combined method, taking advantage of the dense GNSS network in Japan.
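The core difficulty described above, recovering displacement by double integration of a drifting acceleration record, can be illustrated with a much simpler sketch than the authors' wavelet method: estimate and subtract a constant baseline, then integrate twice with the trapezoid rule. The synthetic signal and drift value below are invented for illustration; real records need the piecewise/wavelet treatment the abstract describes.

```python
import numpy as np

# synthetic record: true displacement d(t) = 1 - cos(2*pi*t), two full periods
t = np.arange(0.0, 2.0, 0.001)
dt = 0.001
omega = 2.0 * np.pi
a_true = omega**2 * np.cos(omega * t)   # true acceleration
a_rec = a_true + 0.05                   # recorded trace with constant baseline offset

def cumtrapz(y, dx):
    """Cumulative trapezoid integral with zero initial condition."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1])) * dx
    return out

d_true = 1.0 - np.cos(omega * t)
disp_raw = cumtrapz(cumtrapz(a_rec, dt), dt)                 # drifts quadratically
disp_fix = cumtrapz(cumtrapz(a_rec - a_rec.mean(), dt), dt)  # baseline removed

assert np.max(np.abs(disp_raw - d_true)) > 0.05   # uncorrected: large drift
assert np.max(np.abs(disp_fix - d_true)) < 0.01   # corrected: close to truth
```

Even a tiny constant offset grows as t²/2 after double integration, which is why baseline correction dominates the error budget in co-seismic displacement estimates.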
Influences of gender role and anxiety on sex differences in temporal summation of pain.
Robinson, Michael E; Wise, Emily A; Gagnon, Christine; Fillingim, Roger B; Price, Donald D
2004-03-01
Previous research has consistently shown moderate to large differences between the pain reports of men and women undergoing experimental pain testing. These differences have been shown for a variety of types of stimulation; however, only recently have sex differences been demonstrated for temporal summation of second pain. This study examined sex differences in response to temporal summation of second pain elicited by thermal stimulation of the skin. The relative influences of state anxiety and gender role expectations on temporal summation were investigated. Asymptomatic undergraduates (37 women and 30 men) underwent thermal testing of the thenar surface of the hand in a temporal summation protocol. Our results replicated those of Fillingim et al., indicating that women showed increased temporal summation compared to men. We extended those findings to demonstrate that temporal summation is influenced by anxiety and gender role stereotypes about pain responding. When anxiety and gender role stereotypes are taken into account, sex is no longer a significant predictor of temporal summation. These findings highlight the contribution of social learning factors to sex differences in pain perception. Results of this study demonstrate that psychosocial variables influence pain mechanisms. Temporal summation was related to gender role expectations of pain and to anxiety. These variables explain a significant portion of the difference between men's and women's pain processing and may be related to differences in clinical presentation.
ERIC Educational Resources Information Center
Houston, Don; Thompson, James N.
2017-01-01
Discussions about the relationships between formative and summative assessment have come full circle after decades of debate. For some time formative assessment with its emphasis on feedback to students was promoted as better practice than traditional summative assessment. Summative assessment practices were broadly criticised as distanced from…
NASA Astrophysics Data System (ADS)
Lookadoo, Kathryn L.; Bostwick, Eryn N.; Ralston, Ryan; Elizondo, Francisco Javier; Wilson, Scott; Shaw, Tarren J.; Jensen, Matthew L.
2017-12-01
This study examined the role of formative and summative assessment in instructional video games on student learning and engagement. A 2 (formative feedback: present vs absent) × 2 (summative feedback: present vs absent) factorial design with an offset control (recorded lecture) was employed to explore the impacts of assessment in video games. A total of 172 undergraduates were randomly assigned to one of four instructional video game conditions or the control. Results found that knowledge significantly increased from the pretest for players in all game conditions. Participants in summative assessment conditions learned more than players without summative assessment. In terms of engagement, formative assessment conditions did not produce significantly better engagement outcomes than conditions without formative assessment. However, summative assessment conditions were associated with higher temporal disassociation than non-summative conditions. Implications for future instructional video game development and testing are discussed in the paper.
First-principles calculations of shear moduli for Monte Carlo-simulated Coulomb solids
NASA Technical Reports Server (NTRS)
Ogata, Shuji; Ichimaru, Setsuo
1990-01-01
The paper presents a first-principles study of the shear modulus tensor for perfect and imperfect Coulomb solids. Allowance is made for the effects of thermal fluctuations for temperatures up to the melting conditions. The present theory treats the cases of the long-range Coulomb interaction, where volume fluctuations should be avoided in the Ewald sums.
Discovery and development of x-ray diffraction
NASA Astrophysics Data System (ADS)
Jeong, Yeuncheol; Yin, Ming; Datta, Timir
2013-03-01
In 1912 Max Laue at the University of Munich reasoned that x-rays were short-wavelength electromagnetic waves and predicted that interference would occur when they scattered off crystals. Arnold Sommerfeld, W. Wien, Ewald and others raised objections to Laue's idea, but soon Walter Friedrich succeeded in recording x-ray interference patterns from copper sulfate crystals. The Laue-Ewald three-dimensional formula, however, predicted more spots than were observed. William Lawrence Bragg, then a 22-year-old student at Cambridge University, heard of the Munich results from his father William Henry Bragg, professor of physics at the University of Leeds. Lawrence reasoned that the spots arise from two-dimensional interference of x-ray wavelets reflecting off successive atomic planes and derived a simple eponymous equation, the Bragg equation 2d·sin(θ) = nλ. From 1913 onward the Braggs dominated crystallography. Max Laue was awarded the physics Nobel Prize in 1914 and the Braggs shared the same prize in 1915. Starting with Röntgen's first-ever prize in 1901, the importance of x-ray techniques is evident from the fact that they account for four of the 16 physics Nobel Prizes awarded between 1901 and 1917. We will outline the historical background and importance of x-ray diffraction, which gave rise to techniques that, even in 2013, remain workhorses in laboratories all over the globe.
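Bragg's law nλ = 2d·sin(θ) is easy to evaluate numerically; the wavelength and spacing below are illustrative choices (Cu Kα radiation and a typical d-spacing), not values from the abstract:

```python
import math

def bragg_angle_deg(d, wavelength, n=1):
    """Return the Bragg angle theta in degrees from n*lambda = 2*d*sin(theta)."""
    s = n * wavelength / (2.0 * d)
    if s > 1.0:
        raise ValueError("no diffraction possible: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Cu K-alpha wavelength 1.5406 angstrom, lattice plane spacing 2.0 angstrom
theta = bragg_angle_deg(d=2.0, wavelength=1.5406, n=1)
assert 22.6 < theta < 22.7   # first-order reflection near 22.7 degrees
```

The guard clause reflects the physical cutoff that puzzled early workers: planes with spacing below λ/2 produce no reflection at all.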
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.
Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short baseline reactor experiments using highly enriched uranium fuel.
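The summation method referenced above builds the aggregate antineutrino spectrum as a fission-yield-weighted sum over individual branch spectra. The sketch below uses entirely made-up yields and a crude spectral shape, purely to show the bookkeeping; it is not real ENDF/B or JEFF data:

```python
import numpy as np

energies = np.linspace(0.0, 8.0, 801)   # MeV grid

def trapz(y, x):
    """Simple trapezoid integral (avoids version-specific numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def toy_branch_spectrum(q_value):
    """Crude stand-in for a beta-branch antineutrino spectrum: a bump that
    vanishes at zero and at the endpoint q_value, normalized to unit area."""
    s = np.where(energies < q_value, energies**2 * (q_value - energies)**2, 0.0)
    return s / trapz(s, energies)

# hypothetical cumulative fission yields (per fission) and endpoints: made up
yields = {"fragment A": 0.06, "fragment B": 0.03, "fragment C": 0.01}
endpoints = {"fragment A": 3.0, "fragment B": 5.5, "fragment C": 8.0}

# summation method: aggregate spectrum = yield-weighted sum of branch spectra
total = sum(y * toy_branch_spectrum(endpoints[f]) for f, y in yields.items())

assert np.all(total >= 0.0)
# total area equals the sum of the yields, since each branch has unit area
assert abs(trapz(total, energies) - sum(yields.values())) < 1e-9
```

An erroneous yield for a single high-endpoint fragment shifts the high-energy tail of `total`, which is exactly the 5–7 MeV sensitivity the abstract describes.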
A fast summation method for oscillatory lattice sums
NASA Astrophysics Data System (ADS)
Denlinger, Ryan; Gimbutas, Zydrunas; Greengard, Leslie; Rokhlin, Vladimir
2017-02-01
We present a fast summation method for lattice sums of the type which arise when solving wave scattering problems with periodic boundary conditions. While there are a variety of effective algorithms in the literature for such calculations, the approach presented here is new and leads to a rigorous analysis of Wood's anomalies. These arise when illuminating a grating at specific combinations of the angle of incidence and the frequency of the wave, for which the lattice sums diverge. They were discovered by Wood in 1902 as singularities in the spectral response. The primary tools in our approach are the Euler-Maclaurin formula and a steepest descent argument. The resulting algorithm has super-algebraic convergence and requires only milliseconds of CPU time.
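The Euler-Maclaurin formula, one of the two main tools cited above, can be demonstrated on a slowly convergent sum: a few correction terms applied to a short partial sum of Σ 1/n² reproduce ζ(2) = π²/6 to high accuracy. This toy sum is our illustration, not the paper's lattice sums:

```python
import math

def zeta2_euler_maclaurin(N):
    """Approximate sum_{n>=1} 1/n^2: truncate at N-1, then estimate the tail
    sum_{n>=N} 1/n^2 with Euler-Maclaurin corrections for f(x) = x^-2."""
    partial = sum(1.0 / n**2 for n in range(1, N))
    tail = 1.0 / N             # integral from N to infinity of x^-2 dx
    tail += 1.0 / (2 * N**2)   # + f(N)/2
    tail += 1.0 / (6 * N**3)   # - f'(N)/12,  f'(x) = -2 x^-3
    tail -= 1.0 / (30 * N**5)  # + f'''(N)/720,  f'''(x) = -24 x^-5
    return partial + tail

# only nine explicit terms, yet far better than a million-term direct sum
approx = zeta2_euler_maclaurin(10)
assert abs(approx - math.pi**2 / 6) < 1e-6
```

Direct truncation at n = 10 would err by about 0.1; the corrections shrink the error by many orders of magnitude, which is the essence of using Euler-Maclaurin for lattice sums.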
Reflections on Academics' Assessment Literacy
ERIC Educational Resources Information Center
Lees, Rebecca; Anderson, Deborah
2015-01-01
This small-scale, mixed-methods study aims to investigate academics' understanding of formative and summative assessment methods and how assessment literacy impacts on their teaching methods. Six semi-structured interviews and a scrutiny of assessments provided the data and results suggest that while these academics understand summative…
Revisiting and Extending Interface Penalties for Multi-Domain Summation-by-Parts Operators
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Nordstrom, Jan; Gottlieb, David
2007-01-01
General interface coupling conditions are presented for multi-domain collocation methods, which satisfy the summation-by-parts (SBP) spatial discretization convention. The combined interior/interface operators are proven to be L2 stable, pointwise stable, and conservative, while maintaining the underlying accuracy of the interior SBP operator. The new interface conditions resemble (and were motivated by) those used in the discontinuous Galerkin finite element community, and maintain many of the same properties. Extensive validation studies are presented using two classes of high-order SBP operators: 1) central finite difference, and 2) Legendre spectral collocation.
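A summation-by-parts operator of the kind discussed above can be written down explicitly. Below is the textbook second-order SBP first-derivative pair (D, H) on a uniform grid (a standard low-order construction, not the paper's high-order operators), with a check of the SBP property Q + Qᵀ = B that discretely mimics integration by parts:

```python
import numpy as np

def sbp_2nd_order(n, h):
    """Second-order SBP first derivative D = H^{-1} Q: central differences
    in the interior, one-sided closures and halved quadrature weights at
    the boundaries."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0           # boundary quadrature weights
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h     # one-sided boundary closure
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    return D, H

n, h = 11, 0.1
D, H = sbp_2nd_order(n, h)
Q = H @ D
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)   # the summation-by-parts property

# sanity check: exact differentiation of a linear function
x = np.linspace(0.0, 1.0, n)
assert np.allclose(D @ x, np.ones(n))
```

The property Q + Qᵀ = B is what makes the energy-stability proofs for the interface penalties go through: boundary terms are the only place energy can enter or leave.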
Visual summation in night-flying sweat bees: a theoretical study.
Theobald, Jamie Carroll; Greiner, Birgit; Wcislo, William T; Warrant, Eric J
2006-07-01
Bees are predominantly diurnal; only a few groups fly at night. An evolutionary limitation that bees must overcome to inhabit dim environments is their eye type: bees possess apposition compound eyes, which are poorly suited to vision in dim light. Here, we theoretically examine how nocturnal bees Megalopta genalis fly at light levels usually reserved for insects bearing more sensitive superposition eyes. We find that neural summation should greatly increase M. genalis's visual reliability. Predicted spatial summation closely matches the morphology of laminal neurons believed to mediate such summation. Improved reliability costs acuity, but dark adapted bees already suffer optical blurring, and summation further degrades vision only slightly.
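The reliability gain from neural summation discussed above follows the familiar √N averaging law: pooling N independent noisy photoreceptor signals cuts the noise standard deviation by √N, at the cost of spatial acuity. A synthetic-noise sketch with toy numbers of ours, not the bee model's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0        # mean photoreceptor response (arbitrary units)
noise_sd = 0.5      # per-receptor noise at low light
N = 16              # receptors pooled by a summing lamina neuron
trials = 20000

single = signal + noise_sd * rng.standard_normal(trials)
pooled = signal + noise_sd * rng.standard_normal((trials, N)).mean(axis=1)

# noise reduction from summation: expected ratio ~ sqrt(N) = 4
ratio = single.std() / pooled.std()
assert 3.5 < ratio < 4.5
```

The trade-off in the paper is exactly this: each factor-of-two gain in reliability requires pooling four times as many receptors, blurring the image accordingly.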
Deane, Richard P; Joyce, Pauline; Murphy, Deirdre J
2015-10-09
Team Objective Structured Bedside Assessment (TOSBA) is a learning approach in which a team of medical students undertake a set of structured clinical tasks with real patients in order to reach a diagnosis and formulate a management plan and receive immediate feedback on their performance from a facilitator. TOSBA was introduced as formative assessment to an 8-week undergraduate teaching programme in Obstetrics and Gynaecology (O&G) in 2013/14. Each student completed 5 TOSBA sessions during the rotation. The aim of the study was to evaluate TOSBA as a teaching method to provide formative assessment for medical students during their clinical rotation. The research questions were: Does TOSBA improve clinical, communication and/or reasoning skills? Does TOSBA provide quality feedback? A prospective cohort study was conducted over a full academic year (2013/14). The study used 2 methods to evaluate TOSBA as a teaching method to provide formative assessment: (1) an online survey of TOSBA at the end of the rotation and (2) a comparison of the student performance in TOSBA with their performance in the final summative examination. During the 2013/14 academic year, 157 students completed the O&G programme and the final summative examination . Each student completed the required 5 TOSBA tasks. The response rate to the student survey was 68 % (n = 107/157). Students reported that TOSBA was a beneficial learning experience with a positive impact on clinical, communication and reasoning skills. Students rated the quality of feedback provided by TOSBA as high. Students identified the observation of the performance and feedback of other students within their TOSBA team as key features. High achieving students performed well in both TOSBA and summative assessments. The majority of students who performed poorly in TOSBA subsequently passed the summative assessments (n = 20/21, 95 %). 
Conversely, the majority of students who failed the summative assessments had satisfactory scores in TOSBA (n = 6/7, 86 %). TOSBA has a positive impact on the clinical, communication and reasoning skills of medical students through the provision of high-quality feedback. The use of structured pre-defined tasks, the observation of the performance and feedback of other students and the use of real patients are key elements of TOSBA. Avoiding student complacency and providing accurate feedback from TOSBA are on-going challenges.
TakeTwo: an indexing algorithm suited to still images with known crystal parameters
Ginn, Helen Mary; Roedig, Philip; Kuo, Anling; Evans, Gwyndaf; Sauter, Nicholas K.; Ernst, Oliver; Meents, Alke; Mueller-Werkmeister, Henrike; Miller, R. J. Dwayne; Stuart, David Ian
2016-01-01
The indexing methods currently used for serial femtosecond crystallography were originally developed for experiments in which crystals are rotated in the X-ray beam, providing significant three-dimensional information. On the other hand, shots from both X-ray free-electron lasers and serial synchrotron crystallography experiments are still images, in which the few three-dimensional data available arise only from the curvature of the Ewald sphere. Traditional synchrotron crystallography methods are thus less well suited to still image data processing. Here, a new indexing method is presented with the aim of maximizing information use from a still image given the known unit-cell dimensions and space group. Efficacy for cubic, hexagonal and orthorhombic space groups is shown, and for those showing some evidence of diffraction the indexing rate ranged from 90% (hexagonal space group) to 151% (cubic space group). Here, the indexing rate refers to the number of lattices indexed per image. PMID:27487826
Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulz, Roland; Lindner, Benjamin; Petridis, Loukas
2009-01-01
A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
Near-peer education: a novel teaching program
Premnath, Daphne
2016-01-01
Objectives This study aims to: 1) Evaluate whether a near-peer program improves perceived OSCE performance; 2) Identify factors motivating students to teach; 3) Evaluate role of near-peer teaching in medical education. Methods A near-peer OSCE teaching program was implemented at Monash University’s Peninsula Clinical School over the 2013 academic year. Forty 3rd-year and thirty final-year medical students were recruited as near-peer learners and educators, respectively. A post-program questionnaire was completed by learners prior to summative OSCEs (n=31), followed by post-OSCE focus groups (n=10). Near-peer teachers were interviewed at the program’s conclusion (n=10). Qualitative data was analysed for emerging themes to assess the perceived value of the program. Results Learners felt peer-led teaching was more relevant to assessment, at an appropriate level of difficulty and delivered in a less threatening environment than other methods of teaching. They valued consistent practice and felt confident approaching their summative OSCEs. Educators enjoyed the opportunity to develop their teaching skills, citing mutual benefit and gratitude to past peer-educators as strong motivators to teach others. Conclusions Near-peer education, valued by near-peer learners and teachers alike, was a useful method to improve preparation and perceived performance in summative examinations. In particular, a novel year-long, student-run initiative was regarded as a valuable and feasible adjunct to faculty teaching. PMID:27239951
Nanometric summation architecture based on optical near-field interaction between quantum dots.
Naruse, Makoto; Miyazaki, Tetsuya; Kubota, Fumito; Kawazoe, Tadashi; Kobayashi, Kiyoshi; Sangu, Suguru; Ohtsu, Motoichi
2005-01-15
A nanoscale data summation architecture is proposed and experimentally demonstrated based on the optical near-field interaction between quantum dots. Through local electromagnetic interactions between a few nanometric elements mediated by optical near fields, multiple excitations can be combined at a single quantum dot, which allows the construction of a summation architecture. Summation plays a key role in content-addressable memory, which is one of the most important functions in optical networks.
Protein-membrane electrostatic interactions: Application of the Lekner summation technique
NASA Astrophysics Data System (ADS)
Juffer, André H.; Shepherd, Craig M.; Vogel, Hans J.
2001-01-01
A model has been developed to calculate the electrostatic interaction between biomolecules and lipid bilayers. The effect of ionic strength is included by means of explicit ions, while water is described as a background continuum. The bilayer is treated at the atomic level. The Lekner summation technique is employed to calculate the long-range electrostatic interactions. The new method is used, with thermodynamic integration, to estimate the electrostatic contribution to the free energy of binding to lipid bilayers of sandostatin, a cyclic eight-residue analogue of the peptide hormone somatostatin. Monte Carlo simulation techniques were employed to determine ion distributions and peptide orientations. Both neutral and negatively charged lipid bilayers were used. An error analysis to judge the quality of the computation is also presented. The applicability of the Lekner summation technique in combination with computer simulation models of the adsorption of peptides (and proteins) into the interfacial region of lipid bilayers is discussed.
Further summation formulae related to generalized harmonic numbers
NASA Astrophysics Data System (ADS)
Zheng, De-Yin
2007-11-01
By employing the univariate series expansion of classical hypergeometric series formulae, Shen [L.-C. Shen, Remarks on some integrals and series involving the Stirling numbers and ζ(n), Trans. Amer. Math. Soc. 347 (1995) 1391-1399] and Choi and Srivastava [J. Choi, H.M. Srivastava, Certain classes of infinite series, Monatsh. Math. 127 (1999) 15-25; J. Choi, H.M. Srivastava, Explicit evaluation of Euler and related sums, Ramanujan J. 10 (2005) 51-70] investigated the evaluation of infinite series related to generalized harmonic numbers. More summation formulae have systematically been derived by Chu [W. Chu, Hypergeometric series and the Riemann Zeta function, Acta Arith. 82 (1997) 103-118], who developed fully this approach to the multivariate case. The present paper will explore the hypergeometric series method further and establish numerous summation formulae expressing infinite series related to generalized harmonic numbers in terms of the Riemann Zeta function ζ(m) with m=5,6,7, including several known ones as examples.
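A classical example of the Euler-type sums these papers evaluate is Σ_{n≥1} H_n/n² = 2ζ(3), where H_n is the n-th harmonic number. It can be checked numerically with a simple tail estimate (our verification sketch, not the paper's derivation):

```python
import math

ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)
GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

N = 100_000
H = 0.0       # running harmonic number H_n
total = 0.0   # partial sum of H_n / n^2
for n in range(1, N + 1):
    H += 1.0 / n
    total += H / n**2

# tail estimate: sum_{n>N} (ln n + gamma)/n^2 ~ (ln N + 1 + gamma)/N
total += (math.log(N) + 1.0 + GAMMA) / N

assert abs(total - 2.0 * ZETA3) < 1e-6   # matches 2*zeta(3)
```

The hypergeometric machinery in the paper turns such numerically observed identities into closed-form evaluations in ζ(m).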
NASA Astrophysics Data System (ADS)
Ingersoll, A. P.; Nakajima, M.; Ewald, S.; Gao, P.
2015-12-01
Postberg et al (2009) argued that the observed plume activity requires large vapor chambers above the evaporating liquid (left figure). Here we argue that large vapor chambers are unnecessary, and that a liquid-filled crack 1 meter wide extending along the 500 km length of the tiger stripes would be an adequate source (right figure). We consider controlled boiling (companion paper by Nakajima and Ingersoll 2015AGU) regulated by friction between the gas and the walls. Postberg et al use formulas from Rayleigh-Bénard convection, which we argue do not apply when bubbles are transferring their latent heat across the liquid-gas interface. We show that modest convection currents in the liquid (a few cm/s) can supply energy to the boiling zone and prevent it from freezing. Hedman et al (2013) reported brightness variations with orbital phase, but they also reported that their 2005 observations were roughly 50% higher than the 2009 observations. Here we extend the observation period to 2015 (Ingersoll and Ewald 2015). Our analysis relies on ISS images, whereas Hedman et al rely on VIMS near-IR images, which have 40 times lower resolution. We successfully separate the brightness of the plume from the E-ring background. Our earlier analysis of the particle size distribution (Ingersoll and Ewald 2011) allows us to correct for differences in scattering angle. We confirm a general decline in activity over the 10-year period, but we find hints of fluctuations on shorter time scales. Kempf (Cassini project science meeting, Jan 22, 2015) reported that the mass of particles in the plumes could be an order of magnitude less than that reported by Ingersoll and Ewald (2011). Kempf used in situ particle measurements by CDA, whereas Ingersoll and Ewald used brightness observations and the assumption that the particles are solid ice. Here we show (Gao et al 2015AGU) that fractal aggregates fit the brightness data just as well as solid ice, and are consistent with the lower mass reported by Kempf.
Fast integral methods for integrated optical systems simulations: a review
NASA Astrophysics Data System (ADS)
Kleemann, Bernd H.
2015-09-01
Boundary integral equation methods (BIM), or simply integral methods (IM), in the context of optical design and simulation are rigorous electromagnetic methods that solve the Helmholtz or Maxwell equations on the boundary (the surface or interface between two materials) for scattering and/or diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods were among the first electromagnetic methods investigated for grating diffraction. Their development started in the mid-1960s for gratings with infinite conductivity, driven mainly by the good convergence of the integral methods, especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods by D. Maystre at the Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods, such as differential and modal methods, suffered from unstable behaviour and slow convergence compared to BIMs for metallic gratings in TM polarization from the beginning until the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at the Karl Weierstrass Institute in Berlin under a contract with the Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author.
Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: profiles with edges, overhanging non-functional profiles, very deep profiles, profiles very large compared to the wavelength, or simple smooth profiles. This integral method with either trigonometric or spline collocation, and an iterative solver with O(N²) complexity, named IESMP, was significantly improved by an efficient mesh refinement, matrix preconditioning, an Ewald summation method, and an exponentially convergent quadrature in 2006 by G. Schmidt and A. Rathsfeld from the Weierstrass Institute (WIAS) Berlin. The so-called modified integral method (MIM) is a modification of the IEM of D. Maystre and was introduced by L. Goray in 1995. It was improved for weak convergence problems in 2001 and was for a long time the only commercially available integral method, known as PCGRATE. All integral methods referenced so far are for in-plane diffraction only; no conical diffraction was possible. The first integral method for gratings in conical mounting was developed, and proven under very weak conditions, by G. Schmidt (WIAS) in 2010. It works for separated interfaces and for inclusions, as well as for interpenetrating interfaces and for a large number of thin and thick layers, in the same stable way. This very fast method has since been implemented for parallel processing under Unix and Windows operating systems. This work gives an overview of the most important BIMs for grating diffraction. It starts by presenting the historical evolution of the methods, highlights their advantages and differences, and gives insight into new approaches and their achievements. It addresses future open challenges at the end.
Holographic 3D multi-spot two-photon excitation for fast optical stimulation in brain
NASA Astrophysics Data System (ADS)
Takiguchi, Yu; Toyoda, Haruyoshi
2017-04-01
We report here a holographic high-speed-access microscope for observing sensory-driven synaptic activity across all inputs to single living neurons in the context of the intact cerebral cortex. The system is based on holographic multi-beam generation with a spatial light modulator, and we have demonstrated the holographic excitation efficiency in several in vitro prototype systems. A 3D weighted iterative Fourier transform method using the Ewald sphere restriction has been adopted for calculation speed; multiple locations can be patterned in 3D with a single hologram. Although the standard deviation of the spot intensities is still large, due to aberrations in the system and/or the hologram calculation, we successfully excited multiple locations in neurons of living mouse brain to monitor the calcium signals.
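The weighted iterative Fourier transform idea above can be sketched in 2D with plain FFTs: iterate between the SLM plane (where only phase is free) and the focal plane (where target spot amplitudes are imposed), reweighting each spot by its shortfall so the spot intensities equalize. Grid size, spot positions, and iteration count below are arbitrary choices of ours, not the authors' 3D Ewald-sphere implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
spots = [(10, 20), (40, 45), (55, 12)]       # target focal-plane spots
w = np.ones(len(spots))                      # per-spot weights
phase = 2 * np.pi * rng.random((N, N))       # random initial SLM phase

for _ in range(50):
    far = np.fft.fft2(np.exp(1j * phase))    # propagate SLM -> focal plane
    amps = np.array([abs(far[y, x]) for y, x in spots])
    w *= amps.mean() / (amps + 1e-12)        # boost the dim spots (weighting)
    target = np.zeros((N, N), dtype=complex)
    for (y, x), wk in zip(spots, w):
        target[y, x] = wk * np.exp(1j * np.angle(far[y, x]))
    phase = np.angle(np.fft.ifft2(target))   # keep only phase in the SLM plane

far = np.fft.fft2(np.exp(1j * phase))
amps = np.array([abs(far[y, x]) for y, x in spots])
assert amps.max() / amps.min() < 1.2   # spots nearly uniform after weighting
assert amps.min() > 1000.0             # spots far above the background level
```

The weighting step is what distinguishes this from plain Gerchberg-Saxton iteration: without it, some spots converge systematically brighter than others, which is the non-uniformity the abstract reports reducing by a factor of two.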
Marsili, Simone; Signorini, Giorgio Federico; Chelli, Riccardo; Marchi, Massimo; Procacci, Piero
2010-04-15
We present the new release of the ORAC engine (Procacci et al., Comput Chem 1997, 18, 1834), a FORTRAN suite to simulate complex biosystems at the atomistic level. The previous release of the ORAC code included multiple time-step integration, the smooth particle mesh Ewald method, and constant pressure and constant temperature simulations. The present release has been supplemented with the most advanced techniques for enhanced sampling in atomistic systems, including replica exchange with solute tempering, metadynamics and steered molecular dynamics. All these computational technologies have been implemented for parallel architectures using the standard MPI communication protocol. ORAC is an open-source program distributed free of charge under the GNU general public license (GPL) at http://www.chim.unifi.it/orac.
OSCE as a Summative Assessment Tool for Undergraduate Students of Surgery-Our Experience.
Joshi, M K; Srivastava, A K; Ranjan, P; Singhal, M; Dhar, A; Chumber, S; Parshad, R; Seenu, V
2017-12-01
Traditional examination has inherent deficiencies. The Objective Structured Clinical Examination (OSCE) is considered a method of assessment that may overcome many of these deficiencies. OSCE is being increasingly used worldwide in various medical specialities for formative and summative assessment. Although it is used in various disciplines in our country as well, its use in the stream of general surgery is scarce. We report our experience of assessing undergraduate students appearing in their pre-professional examination in the subject of general surgery by conducting an OSCE. In our experience, the OSCE was considered a better assessment tool than the traditional method of examination by both faculty and students, and it is acceptable to students and faculty alike. Conducting an OSCE for the assessment of students of general surgery is feasible.
NASA Technical Reports Server (NTRS)
Miller, R. D.; Anderson, L. R.
1979-01-01
The LOADS program L218, a digital computer program that calculates dynamic load coefficient matrices utilizing the force summation method, is described. The load equations are derived for a flight vehicle in straight and level flight and excited by gusts and/or control motions. In addition, sensor equations are calculated for use with an active control system. The load coefficient matrices are calculated for the following types of loads: translational and rotational accelerations, velocities, and displacements; panel aerodynamic forces; net panel forces; shears and moments. Program usage and a brief description of the analysis used are presented. A description of the design and structure of the program to aid those who will maintain and/or modify the program in the future is included.
Phase measurement error in summation of electron holography series.
McLeod, Robert A; Bergen, Michael; Malac, Marek
2014-06-01
Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in the measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of a hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double-biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and a Brownian random walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and of phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of the phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and is compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution below 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of the phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
The solution of transcendental equations
NASA Technical Reports Server (NTRS)
Agrawal, K. M.; Outlaw, R.
1973-01-01
Some existing methods for globally approximating the roots of transcendental equations are studied: Graeffe's method, summation of the reciprocated roots, the Whittaker-Bernoulli method, and the extension of Bernoulli's method via Koenig's theorem. Aitken's delta-squared process is used to accelerate convergence. Finally, the suitability of these methods in various cases is discussed.
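The Aitken delta-squared acceleration mentioned above is simple to state concretely. A minimal sketch (the function name and the cos(x) fixed-point example are illustrative, not from the report):

```python
import math

def aitken_delta_squared(seq):
    """Aitken's delta-squared process: from consecutive terms x_n, x_{n+1},
    x_{n+2}, form x_n - (x_{n+1} - x_n)^2 / (x_{n+2} - 2 x_{n+1} + x_n),
    which converges faster for linearly convergent sequences."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2.0 * x1 + x0
        # Guard against cancellation once the sequence has converged.
        out.append(x2 if denom == 0.0 else x0 - (x1 - x0) ** 2 / denom)
    return out

# Example: the fixed-point iteration x <- cos(x), which converges linearly
# to the root of cos(x) = x near 0.7390851.
x, seq = 1.0, [1.0]
for _ in range(6):
    x = math.cos(x)
    seq.append(x)
accelerated = aitken_delta_squared(seq)
```

After six iterations the accelerated sequence is roughly two orders of magnitude closer to the root than the raw iterates.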
Endo, Yuka; Maddukuri, Prasad V; Vieira, Marcelo L C; Pandian, Natesa G; Patel, Ayan R
2006-11-01
Measurement of right ventricular (RV) volumes and right ventricular ejection fraction (RVEF) by the three-dimensional echocardiographic (3DE) short-axis disc summation method has been validated in multiple studies. However, in some patients, short-axis images are of insufficient quality for accurate tracing of the RV endocardial border. This study examined the accuracy of long-axis analysis in multiple planes (longitudinal axial plane method) for assessment of RV volumes and RVEF. 3DE images were analyzed in 40 subjects with a broad range of RV function. RV end-diastolic (RVEDV) and end-systolic (RVESV) volumes and RVEF were calculated by both the short-axis disc summation method and the longitudinal axial plane method. Excellent correlation was obtained between the two methods for RVEDV, RVESV, and RVEF (r = 0.99, 0.99, and 0.94, respectively; P < 0.0001 for all comparisons). 3DE longitudinal-axis analysis is a promising technique for the evaluation of RV function and may provide an alternative method of assessment in patients with suboptimal short-axis images.
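The disc summation method amounts to summing the traced cross-sectional areas of stacked slices and multiplying by the slice thickness. A minimal sketch with hypothetical function names, using a sphere purely as a geometric sanity check (not clinical data):

```python
import math

def disc_summation_volume(areas, slice_thickness):
    """Method-of-discs volume: sum the traced cross-sectional areas of the
    stacked short-axis slices and multiply by the slice thickness."""
    return sum(areas) * slice_thickness

def ejection_fraction(edv, esv):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

# Geometric sanity check: slicing a sphere of radius 5 into 0.1-thick discs
# should recover the analytic volume 4/3 * pi * r^3 very closely.
r, dz = 5.0, 0.1
midpoints = [-r + dz * (k + 0.5) for k in range(int(round(2 * r / dz)))]
areas = [math.pi * (r * r - z * z) for z in midpoints]
volume = disc_summation_volume(areas, dz)
```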
Time-stable overset grid method for hyperbolic problems using summation-by-parts operators
NASA Astrophysics Data System (ADS)
Sharan, Nek; Pantano, Carlos; Bodony, Daniel J.
2018-05-01
A provably time-stable method for solving hyperbolic partial differential equations arising in fluid dynamics on overset grids is presented in this paper. The method uses interface treatments based on the simultaneous approximation term (SAT) penalty method and derivative approximations that satisfy the summation-by-parts (SBP) property. Time-stability is proven using energy arguments in a norm that naturally relaxes to the standard diagonal norm when the overlap reduces to a traditional multiblock arrangement. The proposed overset interface closures are time-stable for arbitrary overlap arrangements. The information between grids is transferred using Lagrangian interpolation applied to the incoming characteristics, although other interpolation schemes could also be used. The conservation properties of the method are analyzed. Several one-, two-, and three-dimensional, linear and non-linear numerical examples are presented to confirm the stability and accuracy of the method. A performance comparison between the proposed SAT-based interface treatment and the commonly-used approach of injecting the interpolated data onto each grid is performed to highlight the efficacy of the SAT method.
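The SBP property that the energy-method stability proof rests on can be verified numerically. A minimal sketch of a standard second-order SBP first-derivative pair (D, H), not the paper's actual overset operators, checking the discrete integration-by-parts identity:

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Second-order SBP first-derivative operator D = H^{-1} Q on n grid
    points with spacing h, where H is the diagonal norm matrix and
    Q + Q^T = B = diag(-1, 0, ..., 0, 1)."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0
    Q = np.zeros((n, n))
    for i in range(1, n - 1):          # central differences in the interior
        Q[i, i - 1], Q[i, i + 1] = -0.5, 0.5
    Q[0, 0], Q[0, 1] = -0.5, 0.5       # one-sided boundary closures
    Q[-1, -2], Q[-1, -1] = -0.5, 0.5
    return np.linalg.inv(H) @ Q, H

# Check the discrete integration-by-parts identity
#   u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0
# for arbitrary grid functions u and v.
n, h = 20, 0.1
D, H = sbp_first_derivative(n, h)
rng = np.random.default_rng(1)
u, v = rng.standard_normal(n), rng.standard_normal(n)
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
```

Because Q + Qᵀ telescopes to the boundary matrix B, the identity mimics ∫u v' + ∫u' v = [uv] exactly, which is what makes energy estimates (and hence time-stability proofs) carry over to the discrete level.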
Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.; ...
2016-04-01
Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast-neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short-baseline reactor experiments using highly enriched uranium fuel.
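At its core, the summation method is a fission-yield-weighted sum of individual fission-product spectra. A schematic sketch; the nuclide labels, yields, and per-nuclide spectra below are invented placeholders, not evaluated nuclear data:

```python
def summation_spectrum(energies, yields, branch_spectra):
    """Summation-method spectrum: the total antineutrino spectrum is the
    fission-yield-weighted sum of the individual fission-product spectra,
    S(E) = sum_i Y_i * S_i(E), evaluated here on a common energy grid."""
    total = [0.0] * len(energies)
    for nuclide, Y in yields.items():
        spectrum = branch_spectra[nuclide]
        for j in range(len(energies)):
            total[j] += Y * spectrum[j]
    return total

# Illustrative two-nuclide toy input (placeholder numbers only).
energies = [1.0, 3.0, 5.0, 7.0]          # MeV bin centers
yields = {"A": 0.048, "B": 0.034}        # cumulative fission yields
branch_spectra = {
    "A": [0.9, 0.5, 0.2, 0.05],          # antineutrinos / fission / MeV
    "B": [0.7, 0.6, 0.4, 0.1],
}
total = summation_spectrum(energies, yields, branch_spectra)
```

Because the result is linear in the yields, correcting a single anomalous yield (as done for 86Ge in the abstract above) shifts the predicted spectrum only in the energy bins where that nuclide's branch spectrum is significant.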
A brief simulation intervention increasing basic science and clinical knowledge.
Sheakley, Maria L; Gilbert, Gregory E; Leighton, Kim; Hall, Maureen; Callender, Diana; Pederson, David
2016-01-01
Background: The United States Medical Licensing Examination (USMLE) is increasing clinical content on the Step 1 exam; thus, inclusion of clinical applications within the basic science curriculum is crucial. Including simulation activities during the basic science years bridges the knowledge gap between basic science content and clinical application. Purpose: To evaluate the effects of a one-off, 1-hour cardiovascular simulation intervention on a summative assessment after adjusting for relevant demographic and academic predictors. Methods: This was a non-randomized study using historical controls to evaluate curricular change. The control group received lecture (n_l = 515) and the intervention group received lecture plus a simulation exercise (n_l+s = 1,066). Assessment included summative exam questions (n = 4) that were scored as pass/fail (≥75%). USMLE-style assessment questions were identical for both cohorts. Descriptive statistics for variables are presented and odds of passage calculated using logistic regression. Results: Undergraduate grade point ratio, MCAT-BS, MCAT-PS, age, attendance at an academic review program, and gender were significant predictors of summative exam passage. Students receiving the intervention were significantly more likely to pass the summative exam than students receiving lecture only (P = 0.0003). Discussion: Simulation plus lecture increases short-term understanding as tested by a written exam. A longitudinal study is needed to assess the effect of a brief simulation intervention on long-term retention of clinical concepts in a basic science curriculum.
Krasne, Sally; Wimmers, Paul F; Relan, Anju; Drake, Thomas A
2006-05-01
Formative assessments are systematically designed instructional interventions to assess and provide feedback on students' strengths and weaknesses in the course of teaching and learning. Despite their known benefits to student attitudes and learning, medical school curricula have been slow to integrate such assessments into the curriculum. This study investigates how performance on two different modes of formative assessment relate to each other and to performance on summative assessments in an integrated, medical-school environment. Two types of formative assessment were administered to 146 first-year medical students each week over 8 weeks: a timed, closed-book component to assess factual recall and image recognition, and an un-timed, open-book component to assess higher order reasoning including the ability to identify and access appropriate resources and to integrate and apply knowledge. Analogous summative assessments were administered in the ninth week. Models relating formative and summative assessment performance were tested using Structural Equation Modeling. Two latent variables underlying achievement on formative and summative assessments could be identified; a "formative-assessment factor" and a "summative-assessment factor," with the former predicting the latter. A latent variable underlying achievement on open-book formative assessments was highly predictive of achievement on both open- and closed-book summative assessments, whereas a latent variable underlying closed-book assessments only predicted performance on the closed-book summative assessment. Formative assessments can be used as effective predictive tools of summative performance in medical school. Open-book, un-timed assessments of higher order processes appeared to be better predictors of overall summative performance than closed-book, timed assessments of factual recall and image recognition.
Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.
1957-10-01
The electronic digital computer is designed to solve systems involving many simultaneous linear equations. The computer can solve a system that converges rather rapidly under the Gauss-Seidel method of approximation, performing the summations required to solve for the unknown terms by a method of successive approximations.
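The successive-approximation scheme described (today usually called the Gauss-Seidel method) can be sketched in a few lines; the function name and the 3×3 test system are illustrative, not from the patent:

```python
def gauss_seidel(A, b, iterations=50):
    """Solve A x = b by successive approximation: each sweep updates every
    unknown in turn, using the newest available values of the others.
    Converges for diagonally dominant (and symmetric positive definite)
    systems."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# A small diagonally dominant test system; its exact solution is [1, 1, 1].
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = gauss_seidel(A, b)
```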
Summation of power series in particle physics
NASA Astrophysics Data System (ADS)
Fischer, Jan
1999-04-01
The large-order behaviour of power series used in quantum theory (perturbation series and the operator-product expansion) is discussed and relevant summation methods are reviewed. It is emphasised that, in most physically interesting situations, the mere knowledge of the expansion coefficients is not sufficient for a unique determination of the function expanded, and the necessity of some additional, extra-perturbative, input is pointed out. Several possible nonperturbative inputs are suggested. Applications to various problems of quantum chromodynamics are considered. This lecture was presented on the special Memorial Day dedicated to Professor Ryszard Rączka at this Workshop. The last section is devoted to my personal recollections of this remarkable personality.
Fractional discrete-time consensus models for single- and double-summator dynamics
NASA Astrophysics Data System (ADS)
Wyrwas, Małgorzata; Mozyrska, Dorota; Girejko, Ewa
2018-04-01
The leader-following consensus problem of fractional-order multi-agent discrete-time systems is considered. In these systems, interactions between opinions are defined as in the Krause and Cucker-Smale models, but memory is included by taking the fractional-order discrete-time operator on the left-hand side of the nonlinear systems. In this paper, we investigate fractional-order models of opinions for single- and double-summator dynamics in discrete time, by analytical methods as well as by computer simulations. The necessary and sufficient conditions for leader-following consensus are formulated by proposing a consensus control law for tracking the virtual leader.
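A common concrete realization of such a memory-including fractional-order discrete operator is the Grünwald-Letnikov backward difference. A minimal sketch under that assumption (not necessarily the exact operator used in the paper):

```python
def gl_coefficients(alpha, n):
    """Grunwald-Letnikov weights c_j = (-1)^j * binom(alpha, j), via the
    recursion c_0 = 1, c_j = c_{j-1} * (j - 1 - alpha) / j."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (j - 1 - alpha) / j)
    return c

def fractional_difference(x, alpha):
    """Fractional-order backward difference of a sequence x (zero history
    before n = 0): (Delta^alpha x)(n) = sum_{j=0}^{n} c_j * x(n - j).
    The slowly decaying weights c_j are what give the operator memory."""
    c = gl_coefficients(alpha, len(x) - 1)
    return [sum(c[j] * x[n - j] for j in range(n + 1))
            for n in range(len(x))]

# For alpha = 1 the weights are (1, -1, 0, 0, ...), so the operator reduces
# to the ordinary first difference x(n) - x(n-1); fractional alpha mixes in
# the whole past trajectory.
x = [1.0, 3.0, 6.0, 10.0]
first_diff = fractional_difference(x, 1.0)
half_diff = fractional_difference(x, 0.5)
```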
Formative and Summative Evaluation: Related Issues in Performance Measurement.
ERIC Educational Resources Information Center
Wholey, Joseph S.
1996-01-01
Performance measurement can serve both formative and summative evaluation functions. Formative evaluation is typically more useful for government purposes, whereas performance measurement is more useful than one-shot evaluations of either a formative or summative nature. Evaluators should study performance measurement through case studies and…
Rutherford, Alexandra
2003-11-01
Behaviorist B.F. Skinner is not typically associated with the fields of personality assessment or projective testing. However, early in his career Skinner developed an instrument he named the verbal summator, which, at one point, he referred to as a device for "snaring out complexes," much like an auditory analogue of the Rorschach inkblots. Skinner's interest in the projective potential of his technique was relatively short lived, but whereas he used the verbal summator to generate experimental data for his theory of verbal behavior, several other clinicians and researchers exploited this potential and adapted the verbal summator technique for both research and applied purposes. The idea of an auditory inkblot struck many as a useful innovation, and the verbal summator spawned the tautophone test, the auditory apperception test, and the Azzageddi test, among others. This article traces the origin, development, and eventual demise of the verbal summator as an auditory projective technique.
Media Teleconference: NOAA climate forecaster to discuss status of El Niño
Media contacts: John Ewald, NOAA HQ, 240-429-6127; Katy Matthews, NOAA NCEI, 828-257-3136; Michael Cabbage / Leslie McCarthy, NASA GISS, 212-678-5516 / 5507; Steve Cole, NASA HQ, 202-358-0918. Wednesday: Experts from NOAA and NASA will announce new data on 2015 global temperatures during a media teleconference.
A Methodology Inventory for Composition Education.
ERIC Educational Resources Information Center
Donlan, Dan
1979-01-01
Illustrates one method for describing changes in classroom behavior of composition teachers: a methodology inventory that may be used as an indicator of expertise, a needs assessment, and a summative self-evaluation. (DD)
MOLSIM: A modular molecular simulation software
Jurij, Reščič
2015-01-01
The modular software MOLSIM for all-atom molecular and coarse-grained simulations is presented, with focus on the underlying concepts used. The software possesses four unique features: (1) it is an integrated software for molecular dynamics, Monte Carlo, and Brownian dynamics simulations; (2) simulated objects are constructed in a hierarchical fashion representing atoms, rigid molecules and colloids, flexible chains, hierarchical polymers, and cross-linked networks; (3) long-range interactions involving charges, dipoles, and/or anisotropic dipole polarizabilities are handled either with the standard Ewald sum, the smooth particle mesh Ewald sum, or the reaction-field technique; (4) statistical uncertainties are provided for all calculated observables. In addition, MOLSIM supports various statistical ensembles, and several types of simulation cells and boundary conditions are available. Intermolecular interactions comprise tabulated pairwise potentials for speed and uniformity, and many-body interactions involve anisotropic polarizabilities. Intramolecular interactions include bond, angle, and crosslink potentials. A very large set of analyses of static and dynamic properties is provided. The capability of MOLSIM can be extended by user-provided routines controlling, for example, start conditions, intermolecular potentials, and analyses. An extensive set of case studies in the field of soft matter is presented, covering colloids, polymers, and crosslinked networks. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:25994597
NASA Astrophysics Data System (ADS)
Earle, Sarah
2014-05-01
Background: Since the discontinuation of Standard Attainment Tests (SATs) in science at age 11 in England, pupil performance data in science reported to the UK government by each primary school has relied largely on teacher assessment undertaken in the classroom. Purpose: The process by which teachers are making these judgements has been unclear, so this study made use of the extensive Primary Science Quality Mark (PSQM) database to obtain a 'snapshot' (as of March 2013) of the approaches taken by 91 English primary schools to the formative and summative assessment of pupils' learning in science.
Correlation between safety assessments in the driver-car interaction design process.
Broström, Robert; Bengtsson, Peter; Axelsson, Jakob
2011-05-01
With the functional revolution in modern cars, evaluation methods to be used in all phases of driver-car interaction design have gained importance. It is crucial for car manufacturers to discover and solve safety issues early in the interaction design process. A current problem is thus to find a correlation between the formative methods that are used during development and the summative methods that are used when the product has reached the customer. This paper investigates the correlation between efficiency metrics from summative and formative evaluations, where the results of two studies on sound and navigation system tasks are compared. The first, an analysis of the J.D. Power and Associates APEAL survey, consists of answers given by about two thousand customers. The second, an expert evaluation study, was done by six evaluators who assessed the layouts by task completion time, TLX and Nielsen heuristics. The results show a high degree of correlation between the studies in terms of task efficiency, i.e. between customer ratings and task completion time, and customer ratings and TLX. However, no correlation was observed between Nielsen heuristics and customer ratings, task completion time or TLX. The results of the studies introduce a possibility to develop a usability evaluation framework that includes both formative and summative approaches, as the results show a high degree of consistency between the different methodologies. Hence, combining a quantitative approach with the expert evaluation method, such as task completion time, should be more useful for driver-car interaction design. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Proposal of an environmental performance index to assess solid waste treatment technologies.
Coelho, Hosmanny Mauro Goulart; Lange, Liséte Celina; Coelho, Lineker Max Goulart
2012-07-01
Although concern for sustainable development and environmental protection has grown considerably in recent years, the majority of decision-making models and tools are still either excessively tied to economic aspects or geared to the production process. Moreover, existing models focus on the priority steps of solid waste management, beyond waste energy recovery and disposal. To address the lack of models and tools aimed at waste treatment and final disposal, a new concept is proposed: Cleaner Treatment, based on the Cleaner Production principles. This paper focuses on the development and validation of the Cleaner Treatment Index (CTI), which assesses the environmental performance of waste treatment technologies based on the Cleaner Treatment concept. The index is formed by aggregation (summation or product) of several indicators consisting of operational parameters. The weights of the indicators were established by the Delphi method and Brazilian environmental laws. In addition, sensitivity analyses were carried out comparing both aggregation methods. Finally, index validation was carried out by applying the CTI to data from 10 waste-to-energy plants. The sensitivity analysis and validation results indicate that the summation model is the more suitable aggregation method. With the summation method, CTI results were above 0.5 (on a scale from 0 to 1) for most facilities evaluated. This study demonstrates that the CTI is a simple and robust tool to assess and compare the environmental performance of different treatment plants, and an excellent quantitative tool to support Cleaner Treatment implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.
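The two aggregation schemes compared (weighted summation versus weighted product) can be sketched generically. The indicator scores and weights below are invented for illustration; in the paper the real weights come from the Delphi method and Brazilian environmental laws:

```python
def cti_summation(indicators, weights):
    """Aggregate normalized indicators (each in [0, 1]) by weighted
    summation; the weights are assumed to sum to 1."""
    return sum(w * s for w, s in zip(weights, indicators))

def cti_product(indicators, weights):
    """Aggregate by weighted geometric product. A single near-zero
    indicator drags the whole index down, so this form penalizes one bad
    operational parameter much more strongly than summation does."""
    result = 1.0
    for w, s in zip(weights, indicators):
        result *= s ** w
    return result

# Invented indicator scores and weights for three operational parameters.
scores = [0.8, 0.6, 0.9]
weights = [0.5, 0.3, 0.2]
cti_sum = cti_summation(scores, weights)
cti_prod = cti_product(scores, weights)
```

By the weighted AM-GM inequality the product form never exceeds the summation form, which is one reason the two aggregations rank facilities differently in a sensitivity analysis.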
Supplementing Summative Findings with Formative Data.
ERIC Educational Resources Information Center
Noggle, Nelson L.
This paper attempts to provide evaluators, administrators, and policy makers with the advantages of and methodology of merging formative and summative data to enhance summative evaluations. It draws on RMC Research Corporation's 1980-81 California Statewide Evaluation of Migrant Education. The concern that evaluations typically fail to obtain the…
NASA Astrophysics Data System (ADS)
Sun, Jun-Wei; Shen, Yi; Zhang, Guo-Dong; Wang, Yan-Feng; Cui, Guang-Zhao
2013-04-01
According to the Lyapunov stability theorem, a new general hybrid projective complete dislocated synchronization scheme with non-derivative and derivative coupling, based on parameter identification, is proposed under the framework of drive-response systems. In previous hybrid synchronization schemes, every state variable of the response system equals the summation of the hybrid drive systems. In our method, by contrast, every state variable of the drive system equals the summation of the hybrid response systems as they evolve with time. Complete synchronization, hybrid dislocated synchronization, projective synchronization, non-derivative and derivative coupling, and parameter identification are included as special cases. The Lorenz chaotic system, the Rössler chaotic system, a memristor chaotic oscillator system, and the hyperchaotic Lü system are discussed to show the effectiveness of the proposed methods.
Group Learning Assessment: Developing a Theory-Informed Analytics
ERIC Educational Resources Information Center
Xing, Wanli; Wadholm, Robert; Petakovic, Eva; Goggins, Sean
2015-01-01
Assessment in Computer Supported Collaborative Learning (CSCL) is an implicit issue, and most assessments are summative in nature. Process-oriented methods of assessment can vary significantly in their indicators and typically only partially address the complexity of group learning. Moreover, the majority of these assessment methods require…
Third Summative Report of the Delaware PLATO Project.
ERIC Educational Resources Information Center
Hofstetter, Fred T.
Descriptions of new developments in the areas of facilities, applications, user services, support staff, research, evaluation, and courseware production since the Second Summative Report (1977) are provided, as well as a summative overview of PLATO applications at the University of Delaware. Through the purchase of its own PLATO system, this…
Khan, Asaduzzaman; Chien, Chi-Wen; Bagraith, Karl S
2015-04-01
To investigate whether using a parametric statistic in comparing groups leads to different conclusions when using summative scores from rating scales compared with using their corresponding Rasch-based measures. A Monte Carlo simulation study was designed to examine between-group differences in the change scores derived from summative scores from rating scales, and those derived from their corresponding Rasch-based measures, using 1-way analysis of variance. The degree of inconsistency between the 2 scoring approaches (i.e. summative and Rasch-based) was examined, using varying sample sizes, scale difficulties and person ability conditions. This simulation study revealed scaling artefacts that could arise from using summative scores rather than Rasch-based measures for determining the changes between groups. The group differences in the change scores were statistically significant for summative scores under all test conditions and sample size scenarios. However, none of the group differences in the change scores were significant when using the corresponding Rasch-based measures. This study raises questions about the validity of the inference on group differences of summative score changes in parametric analyses. Moreover, it provides a rationale for the use of Rasch-based measures, which can allow valid parametric analyses of rating scale data.
Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.
Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James
2016-03-21
Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Ben-Jacob, Marion G.; Ben-Jacob, Tyler E.
2014-01-01
This paper explores alternative assessment methods from the perspective of categorizations. It addresses the technologies that support assessment. It discusses initial, formative, and summative assessment, as well as objective and subjective assessment, and formal and informal assessment. It approaches each category of assessment from the…
ERIC Educational Resources Information Center
Puddy, Richard W.; Boles, Richard E.; Dreyer, Meredith L.; Maikranz, Julie; Roberts, Michael C.; Vernberg, Eric M.
2008-01-01
We illustrate the use of formative and summative assessment in evaluating a therapeutic classroom program for children with serious emotional disturbances. Information was analyzed based on data gathered for clinical decision-making during treatment (formative assessment) and measurement of outcomes at discharge (summative assessment) from a…
The Use of Teacher Judgement for Summative Assessment in the USA
ERIC Educational Resources Information Center
Brookhart, Susan M.
2013-01-01
Studies of the use of teacher judgement for summative assessment in the USA are considered in two general categories. (1) Studies of teacher classroom summative assessment, that is, teacher grading practices, have historically and currently emphasised the lack of validity and reliability of these judgements. (2) Studies of how teacher judgement…
Validity in Teachers' Summative Assessments
ERIC Educational Resources Information Center
Black, Paul; Harrison, Christine; Hodgen, Jeremy; Marshall, Bethan; Serret, Natasha
2010-01-01
This paper describes some of the findings of a project which set out to explore and develop teachers' understanding and practices in their summative assessments. The focus was on those summative assessments that are used on a regular basis within schools for guiding the progress of pupils and for internal accountability. The project combined both…
Summative Evaluation on the Hospital Wards. What Do Faculty Say to Learners?
ERIC Educational Resources Information Center
Hasley, Peggy B.; Arnold, Robert M.
2009-01-01
No previous studies have described how faculty give summative evaluations to learners on the medical wards. The aim of this study was to describe summative evaluations on the medical wards. Participants were students, house staff and faculty at the University of Pittsburgh. Ward rotation evaluative sessions were tape recorded. Feedback was…
10 CFR 20.1202 - Compliance with requirements for summation of external and internal doses.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 1 2011-01-01 2011-01-01 false Compliance with requirements for summation of external and internal doses. 20.1202 Section 20.1202 Energy NUCLEAR REGULATORY COMMISSION STANDARDS FOR PROTECTION AGAINST RADIATION Occupational Dose Limits § 20.1202 Compliance with requirements for summation of...
Summation in Autoshaping with Compounds Formed by the Rapid Alternation of Elements
ERIC Educational Resources Information Center
Gomez-Sancho, Luis E.; Fernandez-Serra, Francisco; Arias, M. Francisca
2013-01-01
Summation is the usual result in composition procedures with excitatory stimuli. However, summation is difficult to obtain in autoshaping with pigeons. The problems with this preparation have been related to the stimuli used: combinations of intramodal conditioned stimuli (CSs). During the perceptual processing of this type of stimuli, some mutual…
First-principles simulations of electrostatic interactions between dust grains
NASA Astrophysics Data System (ADS)
Itou, H.; Amano, T.; Hoshino, M.
2014-12-01
We investigated the electrostatic interaction between two identical dust grains of infinite mass immersed in a homogeneous plasma by employing first-principles N-body simulations combined with the Ewald method. We specifically tested the possibility of an attractive force due to overlapping Debye spheres (ODSs), as was suggested by Resendes et al. [Phys. Lett. A 239, 181-186 (1998)]. Our simulation results demonstrate that the electrostatic interaction is repulsive and even stronger than the standard Yukawa potential. We showed that the measured electric field acting on the grain is highly consistent with a model electrostatic potential around a single isolated grain that takes into account a correction due to the orbital motion limited theory. Our result is qualitatively consistent with the counterargument suggested by Markes and Williams [Phys. Lett. A 278, 152-158 (2000)], indicating the absence of the ODS attractive force.
Sampling the energy landscape of Pt13 with metadynamics
NASA Astrophysics Data System (ADS)
Pavan, Luca; Di Paola, Cono; Baletto, Francesca
2013-02-01
The potential energy surface of a metallic nanoparticle formed by 13 atoms of platinum is efficiently explored using metadynamics in combination with empirical-potential molecular dynamics. The scenario obtained is wider and more complex than what was previously reported: more than thirty independent energy basins are found, corresponding to different local minima of Pt13. It is demonstrated that in almost all cases these motifs are local minima even at the ab initio level, proving the effectiveness of the method for sampling the energy landscape. A classification of the minima into structural families is proposed, and a discussion of the shape of, and connections between, energy basins is reported. ISSPIC 16 - 16th International Symposium on Small Particles and Inorganic Clusters, edited by Kristiaan Temst, Margriet J. Van Bael, Ewald Janssens, H.-G. Boyen and Françoise Remacle.
ERIC Educational Resources Information Center
Ali, Usama S.; Walker, Michael E.
2014-01-01
Two methods are currently in use at Educational Testing Service (ETS) for equating observed item difficulty statistics. The first method involves the linear equating of item statistics in an observed sample to reference statistics on the same items. The second method, or the item response curve (IRC) method, involves the summation of conditional…
Neutron camera employing row and column summations
Clonts, Lloyd G.; Diawara, Yacouba; Donahue, Jr, Cornelius; Montcalm, Christopher A.; Riedel, Richard A.; Visscher, Theodore
2016-06-14
For each photomultiplier tube in an Anger camera, an R×S array of preamplifiers is provided to detect electrons generated within the photomultiplier tube. The outputs of the preamplifiers are digitized to measure the magnitude of the signals from each preamplifier. For each photomultiplier tube, corresponding summation circuitry including R row summation circuits and S column summation circuits numerically adds the magnitudes of the signals from the preamplifiers for each row and for each column to generate histograms. For a P×Q array of photomultiplier tubes, P×Q summation circuitries generate P×Q row histograms including R entries and P×Q column histograms including S entries. The total set of histograms includes P×Q×(R+S) entries, which can be analyzed by a position calculation circuit to determine the locations of events (detection of a neutron).
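The row-and-column reduction described in this patent can be sketched in a few lines. The function name below is ours, not the patent's, and the 2×3 array is purely illustrative:

```python
import numpy as np

def row_column_histograms(signals):
    """Reduce an R x S array of digitized preamplifier magnitudes for one
    photomultiplier tube to R row sums and S column sums, as the patent's
    summation circuitry does in hardware (function name is ours)."""
    signals = np.asarray(signals, dtype=float)
    return signals.sum(axis=1), signals.sum(axis=0)

# A 2 x 3 preamplifier array yields R + S = 5 histogram entries in total.
rows, cols = row_column_histograms([[1.0, 2.0, 3.0],
                                    [4.0, 5.0, 6.0]])
print(rows.tolist())  # [6.0, 15.0]
print(cols.tolist())  # [5.0, 7.0, 9.0]
```

The payoff of this scheme is data reduction: R+S numbers per tube instead of R×S, while still localizing the event from the peaks of the row and column histograms.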
Altered quantitative sensory testing outcome in subjects with opioid therapy.
Chen, Lucy; Malarick, Charlene; Seefeld, Lindsey; Wang, Shuxing; Houghton, Mary; Mao, Jianren
2009-05-01
Preclinical studies have suggested that opioid exposure may induce a paradoxical decrease in the nociceptive threshold, commonly referred to as opioid-induced hyperalgesia (OIH). While OIH may have implications in acute and chronic pain management, its clinical features remain unclear. Using an office-based quantitative sensory testing (QST) method, we compared pain threshold, pain tolerance, and the degree of temporal summation of the second pain in response to thermal stimulation among three groups of subjects: those with neither pain nor opioid therapy (group 1), those with chronic pain but without opioid therapy (group 2), and those with both chronic pain and opioid therapy (group 3). We also examined the possible correlation between QST responses to thermal stimulation and opioid dose, opioid treatment duration, opioid analgesic type, pain duration, or gender in group 3 subjects. As compared with both group 1 (n=41) and group 2 (n=41) subjects, group 3 subjects (n=58) displayed a decreased heat pain threshold and exacerbated temporal summation of the second pain to thermal stimulation. In contrast, there were no differences in cold or warm sensation among the three groups. Among clinical factors, daily opioid dose consistently correlated with the decreased heat pain threshold and exacerbated temporal summation of the second pain in group 3 subjects. These results indicate that a decreased heat pain threshold and exacerbated temporal summation of the second pain may be characteristic QST changes in subjects with opioid therapy. The data suggest that QST may be a useful tool in the clinical assessment of OIH.
Warburton, William K.; Momayezi, Michael
2006-06-20
A method and apparatus for processing step-like output signals (primary signals) generated by non-ideal, for example, nominally single-pole ("N-1P ") devices. An exemplary method includes creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point, summing the secondary signals reaching the signal summation point after propagating along the signal paths to provide a summed signal, performing a filtering or delaying operation in at least one of said signal paths so that the secondary signals reaching said summing point have a defined time correlation with respect to one another, applying a set of weighting coefficients to the secondary signals propagating along said signal paths, and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.
Walsh, Jason L; Harris, Benjamin H L; Denny, Paul; Smith, Phil
2018-01-01
Purpose of the study There are few studies on the value of authoring questions as a study method, the quality of the questions produced by students and student perceptions of student-authored question banks. Here we evaluate PeerWise, a widely used and free online resource that allows students to author, answer and discuss multiple-choice questions. Study design We introduced two undergraduate medical student cohorts to PeerWise (n=603). We looked at their patterns of PeerWise usage; identified associations between student engagement and summative exam performance; and used focus groups to assess student perceptions of the value of PeerWise for learning. We undertook item analysis to assess question difficulty and quality. Results Over two academic years, the two cohorts wrote 4671 questions, answered questions 606,658 times and posted 7735 comments. Question-writing frequency correlated most strongly with summative performance (Spearman's rank: 0.24, p<0.001). Student focus groups found that: (1) students valued curriculum specificity; and (2) students were concerned about student-authored question quality. Only two of the 300 'most-answered' questions analysed had an unacceptable discriminatory value (point-biserial correlation <0.2). Conclusions Item analysis suggested acceptable question quality despite student concerns. Quantitative and qualitative methods indicated that PeerWise is a valuable study tool. PMID:28866607
Near-peer education: a novel teaching program.
de Menezes, Sara; Premnath, Daphne
2016-05-30
This study aims to: 1) evaluate whether a near-peer program improves perceived OSCE performance; 2) identify factors motivating students to teach; and 3) evaluate the role of near-peer teaching in medical education. A near-peer OSCE teaching program was implemented at Monash University's Peninsula Clinical School over the 2013 academic year. Forty third-year and thirty final-year medical students were recruited as near-peer learners and educators, respectively. A post-program questionnaire was completed by learners prior to summative OSCEs (n=31), followed by post-OSCE focus groups (n=10). Near-peer teachers were interviewed at the program's conclusion (n=10). Qualitative data were analysed for emerging themes to assess the perceived value of the program. Learners felt peer-led teaching was more relevant to assessment, at an appropriate level of difficulty and delivered in a less threatening environment than other methods of teaching. They valued consistent practice and felt confident approaching their summative OSCEs. Educators enjoyed the opportunity to develop their teaching skills, citing mutual benefit and gratitude to past peer-educators as strong motivators to teach others. Near-peer education, valued by near-peer learners and teachers alike, was a useful method to improve preparation and perceived performance in summative examinations. In particular, a novel year-long, student-run initiative was regarded as a valuable and feasible adjunct to faculty teaching.
Don't Tell It Like It Is: Preserving Collegiality in the Summative Peer Review of Teaching
ERIC Educational Resources Information Center
Iqbal, Isabeau A.
2014-01-01
While much literature has considered feedback and professional growth in formative peer reviews of teaching, there has been little empirical research conducted on these issues in the context of summative peer reviews. This article explores faculty members' perceptions of feedback practices in the summative peer review of teaching and reports on…
Group Peer Assessment for Summative Evaluation in a Graduate-Level Statistics Course for Ecologists
ERIC Educational Resources Information Center
ArchMiller, Althea; Fieberg, John; Walker, J.D.; Holm, Noah
2017-01-01
Peer assessment is often used for formative learning, but few studies have examined the validity of group-based peer assessment for the summative evaluation of course assignments. The present study contributes to the literature by using online technology (the course management system Moodle™) to implement structured, summative peer review based on…
Barriers to the Uptake and Use of Feedback in the Context of Summative Assessment
ERIC Educational Resources Information Center
Harrison, Christopher J.; Könings, Karen D.; Schuwirth, Lambert; Wass, Valerie; van der Vleuten, Cees
2015-01-01
Despite calls for feedback to be incorporated in all assessments, a dichotomy exists between formative and summative assessments. When feedback is provided in a summative context, it is not always used effectively by learners. In this study we explored the reasons for this. We conducted individual interviews with 17 students who had recently…
Efficient calculation of the polarizability: a simplified effective-energy technique
NASA Astrophysics Data System (ADS)
Berger, J. A.; Reining, L.; Sottile, F.
2012-09-01
In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-states methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.
Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks
NASA Astrophysics Data System (ADS)
Roth, Holger; Oda, Masahiro; Shimizu, Natsuki; Oda, Hirohisa; Hayashi, Yuichiro; Kitasaka, Takayuki; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku
2018-03-01
Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-built 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture: one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast-enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 +/- 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.
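The difference between the two skip-connection variants compared in this abstract is how encoder features reach the decoder: concatenation stacks them as extra channels, summation adds them element-wise. A minimal NumPy sketch of the two operations (ours, not the authors' network code):

```python
import numpy as np

def skip_concat(enc, dec):
    """Concatenation skip: stack encoder and decoder features along the
    channel axis, doubling the channels the decoder must process."""
    return np.concatenate([enc, dec], axis=0)

def skip_sum(enc, dec):
    """Summation skip: element-wise addition; the channel count is
    unchanged, keeping the decoder cheaper."""
    return enc + dec

# Toy 3D feature maps, laid out as (channels, depth, height, width).
enc = np.ones((4, 8, 8, 8))
dec = np.full((4, 8, 8, 8), 2.0)
print(skip_concat(enc, dec).shape)  # (8, 8, 8, 8)
print(skip_sum(enc, dec).shape)     # (4, 8, 8, 8)
```

Both variants require the encoder and decoder feature maps to share spatial dimensions; summation additionally requires matching channel counts.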
Bridge, P D; Gallagher, R E; Berry-Bobovski, L C
2000-01-01
Fundamental to the development of educational programs and curricula is the evaluation of processes and outcomes. Unfortunately, many otherwise well-designed programs do not incorporate stringent evaluation methods and are limited in measuring program development and effectiveness. Using an advertising lesson in a school-based tobacco-use prevention curriculum as a case study, the authors examine the role of evaluation in the development, implementation, and enhancement of the curricular lesson. A four-phase formative and summative evaluation design was developed to divide the program-evaluation continuum into a structured process that would aid in the management of the evaluation, as well as assess curricular components. Formative and summative evaluation can provide important guidance in the development, implementation, and enhancement of educational curricula. Evaluation strategies identified unexpected barriers and allowed the project team to make necessary "time-relevant" curricular adjustments during each stage of the process.
Mitra, Nilesh Kumar; Barua, Ankur
2015-03-03
The impact of web-based formative assessment practices on the performance of undergraduate medical students in summative assessments is not widely studied. This study was conducted among third-year undergraduate medical students of a designated university in Malaysia to compare the effect on summative assessment performance of repeated computer-based formative assessment with automated feedback versus a single paper-based formative assessment with face-to-face feedback. This quasi-randomized trial was conducted among two groups of undergraduate medical students selected by a stratified random technique from a cohort undertaking the Musculoskeletal module. The control group C (n = 102) was given a paper-based formative MCQ test. The experimental group E (n = 65) was provided three online formative MCQ tests with automated feedback. The summative MCQ test scores for both groups were collected after the completion of the module. In this study, no significant difference was observed between the mean summative scores of the two groups. However, Band 1 students from group E with higher entry qualifications showed a higher mean score in the summative assessment. A weak but significant positive correlation (r(2) = +0.328) was observed between the online formative test scores and summative assessment scores of group E. The proportionate increase in performance in group E was almost double that of group C. The use of computer-based formative tests with automated feedback improved the performance of students with a stronger academic background in the summative assessment. Computer-based formative tests can be explored as an optional addition to the curriculum of pre-clinical integrated medical programs to improve the performance of students with higher academic ability.
NASA Astrophysics Data System (ADS)
Kruglova, T. V.
2004-01-01
Detailed spectroscopic information about highly excited molecules and radicals such as H3+, H2, HI, H2O and CH2 is needed for a number of applications in laser physics, astrophysics and chemistry. Studies of highly excited molecular vibration-rotation states face several problems connected with the slow convergence, or even divergence, of perturbation expansions. The physical reason for the divergence of a perturbation expansion is large-amplitude motion and strong vibration-rotation coupling. In this case one needs to use a special method of series summation. A number of papers have been devoted to this problem; papers 1-10 in the reference list are only examples of studies on this topic. The present report is aimed at the application of the GET method (Generalized Euler Transformation) to the diatomic molecule. The energy levels of a diatomic molecule are usually represented as a Dunham series in the rotational J(J+1) and vibrational (V+1/2) quantum numbers (within the perturbation approach). However, perturbation theory is not applicable to highly excited vibration-rotation states because the perturbation expansion becomes divergent. As a consequence, one needs a special method for the summation of the series. The Generalized Euler Transformation (GET) is known to be an efficient method for summing slowly convergent series, and it has already been used for solving several quantum problems (Refs. 13 and 14). In this report the results of the Euler transformation of the diatomic-molecule Dunham series are presented. It is shown that the Dunham power series can be represented as a functional series, which is equivalent to its partial summation. It is also shown that the transformed series has better convergence properties than the initial series.
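The classical Euler transformation that GET generalizes can be illustrated on a simple alternating series. The sketch below is ours (it is the classical transform, not the paper's generalized version) and accelerates the notoriously slow series for ln 2:

```python
import math

def euler_transform_sum(a, terms):
    """Sum the alternating series sum_{n>=0} (-1)^n * a[n] via the
    classical Euler transformation: S = sum_k (D^k a)(0) / 2^(k+1),
    where (D a)(n) = a(n) - a(n+1) is a forward difference.
    Illustrative sketch, not the report's GET method."""
    diffs = list(a)
    total = 0.0
    for k in range(terms):
        total += diffs[0] / 2.0 ** (k + 1)
        # build the next difference level D^(k+1) a
        diffs = [diffs[i] - diffs[i + 1] for i in range(len(diffs) - 1)]
    return total

# ln 2 = 1 - 1/2 + 1/3 - ... converges very slowly term by term, but
# 15 transformed terms already give about six correct digits.
a = [1.0 / (n + 1) for n in range(40)]
approx = euler_transform_sum(a, 15)
print(abs(approx - math.log(2)) < 1e-5)  # True
```

The transformed terms decay roughly like 2^(-k), so a handful of them replaces thousands of terms of the raw alternating series.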
NASA Astrophysics Data System (ADS)
Kim, Yong Sup; Rathie, Arjun K.
2008-02-01
In a recent paper, Miller (2005 J. Phys. A: Math. Gen. 38 3541-5) obtained a new summation formula for the Clausen series 3F2(1). The aim of this comment is to point out that the summation formula obtained by Miller is not a new one.
ERIC Educational Resources Information Center
Harlen, Wynne
2005-01-01
This paper summarizes the findings of a systematic review of research on the reliability and validity of teachers' assessment used for summative purposes. In addition to the main question, the review also addressed the question "What conditions affect the reliability and validity of teachers' summative assessment?" The initial search for studies…
Mahmoodabadi, M. J.; Taherkhorsandi, M.; Bagheri, A.
2014-01-01
An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of such controllers are usually determined by a tedious trial-and-error process. To eliminate this process and design the parameters of the proposed controller, multiobjective evolutionary algorithms (the proposed method, modified NSGA-II, the Sigma method, and MATLAB's MOGA toolbox) are employed in this study. Among the evolutionary optimization algorithms used to design controllers for biped robots, the proposed method performs better since it provides ample opportunity for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions: the normalized summation of angle errors and the normalized summation of control effort. The obtained results elucidate the efficiency of the proposed controller in controlling a biped robot. PMID:24616619
Summation and subtraction using a modified autoshaping procedure in pigeons.
Ploog, Bertram O
2008-06-01
A modified autoshaping paradigm (significantly different from those previously reported in the summation literature) was employed to allow for the simultaneous assessment of stimulus summation and subtraction in pigeons. The response requirements and the probability of food delivery were adjusted such that towards the end of training 12 of 48 trials ended in food delivery, the same proportion as under testing. Stimuli (outlines of squares of three sizes and colors: A, B, and C) were used that could be presented separately or in any combination of two or three stimuli. Twelve of the pigeons (summation groups) were trained with either A, B, and C or with AB, BC, and CA, and tested with ABC. The remaining 12 pigeons (subtraction groups) received training with ABC but were tested with A, B, and C or with AB, BC, and CA. These groups were further subdivided according to whether stimulus elements were presented either in a concentric or dispersed manner. Summation did not occur; subtraction occurred in the two concentric groups. For interpretation of the results, configural theory, the Rescorla-Wagner model, and the composite-stimulus control model were considered. The results suggest different mechanisms responsible for summation and subtraction.
GRAVIDY, a GPU modular, parallel direct-summation N-body integrator: dynamics with softening
NASA Astrophysics Data System (ADS)
Maureira-Fredes, Cristián; Amaro-Seoane, Pau
2018-01-01
A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and sources of gravitational radiation. The direct-summation of N gravitational forces is a complex problem with no analytical solution and can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations. But these methods tend to be computationally slow and cumbersome to work with. We present a new graphics processing unit (GPU), direct-summation N-body integrator written from scratch and based on this scheme, which includes relativistic corrections for sources of gravitational radiation. GRAVIDY has high modularity, allowing users to readily introduce new physics, it exploits available computational resources and will be maintained by regular updates. GRAVIDY can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version. A test run using four GPUs in parallel shows a speed-up factor of about 3 as compared to the single-GPU version. The conception and design of this first release is aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
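The direct-summation kernel that integrators like the one above are built on can be sketched in a few lines. This is an illustrative O(N^2) Python version with Plummer softening, ours rather than GRAVIDY's GPU/Hermite implementation (which also needs the time derivative of the acceleration for the predictor-corrector step):

```python
import numpy as np

def direct_summation_acc(pos, mass, G=1.0, eps=1.0e-3):
    """O(N^2) direct summation of gravitational accelerations with
    Plummer softening eps (illustrative sketch; names are ours)."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[j] - pos[i]
            r2 = r @ r + eps * eps      # softened squared separation
            acc[i] += G * mass[j] * r / r2**1.5
    return acc

# Two equal masses: by Newton's third law the accelerations cancel pairwise.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = direct_summation_acc(pos, mass)
print(np.allclose(acc[0] + acc[1], 0.0))  # True
```

The inner double loop is exactly the part that maps well onto GPUs: each particle's force sum is independent, so the N sums can be evaluated in parallel threads.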
Rees, Charlotte; Sheard, Charlotte; McPherson, Amy
2002-09-01
Despite the wealth of literature surrounding communication curricula within medical education, there is a lack of in-depth research into medical students' perceptions of communication skills assessment. This study aims to address this gap in the research literature. Five focus group discussions were conducted with 32 students, with representatives from each of the 5 years of the medical degree course at Nottingham University. Audiotapes of the discussions were transcribed in full and the transcripts were thematically analysed independently by two analysts. Two assessment-related themes emerged from the analysis: namely, students' perceptions of formative assessment and students' perceptions of summative assessment. While students seemed to value formative methods of assessing their communication skills, they did not appear to value summative methods like objective structured clinical examinations (OSCEs). Students had mixed views about who should assess their oral communication skills. Some students preferred self-assessment while others preferred peer assessment. Although students appeared to value medical educators assessing their communication skills, other students preferred feedback from patients. Although summative methods like OSCEs were criticized widely, students suggested that examinations were essential to motivate students' learning of communication skills. This study begins to illustrate medical students' perceptions of communication skills assessment. However, further research using large-scale surveys is required to validate these findings. Medical educators should provide students with feedback on their communication skills wherever possible. This feedback should ideally come from a combination of different assessors. Over-assessment in other subject areas should be minimized to prevent students being discouraged from learning communication skills.
McNulty, John A; Espiritu, Baltazar R; Hoyt, Amy E; Ensminger, David C; Chandrasekhar, Arcot J
2015-01-01
Formative practice quizzes have become common resources for self-evaluation and focused reviews of course content in the medical curriculum. We conducted two separate studies to (1) compare the effects of a single or multiple voluntary practice quizzes on subsequent summative examinations and (2) examine when students are most likely to use practice quizzes relative to the summative examinations. In the first study, providing a single on-line practice quiz followed by instructor feedback had no effect on examination average grades compared to the previous year or student performances on similar questions. However, there were significant correlations between student performance on each practice quiz and each summative examination (r = 0.42 and r = 0.24). When students were provided multiple practice quizzes with feedback (second study), there were weak correlations between the frequency of use and performance on each summative examination (r = 0.17 and r = 0.07). The frequency with which students accessed the practice quizzes was greatest the day before each examination. In both studies, there was a decline in the level of student utilization of practice quizzes over time. We conclude that practice quizzes provide some predictive value for performances on summative examinations. Second, making practice quizzes available for longer periods prior to summative examinations does not promote the use of the quizzes as a study strategy because students appear to use them mostly to assess knowledge one to two days prior to examinations. © 2014 American Association of Anatomists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.
Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry, and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for coarse-grained models of electrolyte solution. Here, we provide rigorous definitions of the various types of single-ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation (DFT-MD) and isolate the effects of charge and cavitation, comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to highly unphysical values for the solvation free energy and that charging free energies for cations are approximately linear as a function of charge, but that there is a small non-linearity for small anions. The charge hydration asymmetry (CHA) for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. This suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation. We would like to thank Thomas Beck, Shawn Kathmann, Richard Remsing and John Weeks for helpful discussions. Computing resources were generously allocated by PNNL's Institutional Computing program. This research also used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. TTD, GKS, and CJM were supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences.
MDB was supported by the MS3 (Materials Synthesis and Simulation Across Scales) Initiative, a Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated by Battelle for the U.S. Department of Energy.
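The Born (linear response) model used as the comparison point in the abstract above has a closed form: the solvation free energy of a charged sphere of radius a in a dielectric of relative permittivity eps_r is dG = -q^2/(8*pi*eps0*a) * (1 - 1/eps_r). A minimal sketch; the constants are standard CODATA values, but the 1.4 Angstrom Born radius is an illustrative choice, not a value from the paper:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
NA = 6.02214076e23          # Avogadro constant, 1/mol

def born_solvation_energy(q, radius, eps_r):
    """Born (linear-response) estimate of the single-ion solvation
    free energy in joules, for charge q (C) and cavity radius (m)."""
    return -q**2 / (8 * math.pi * EPS0 * radius) * (1 - 1 / eps_r)

# Monovalent ion, 1.4 Angstrom Born radius, water (eps_r ~ 78), per mole:
dG = born_solvation_energy(E_CHARGE, 1.4e-10, 78.0) * NA / 1000.0  # kJ/mol
print(round(dG))  # -490
```

Because the model is strictly linear in the charge, it cannot by itself reproduce the charge hydration asymmetry the paper discusses; that asymmetry is precisely a deviation from this linear-response baseline.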
Fast, adaptive summation of point forces in the two-dimensional Poisson equation
NASA Technical Reports Server (NTRS)
Van Dommelen, Leon; Rundensteiner, Elke A.
1989-01-01
A comparatively simple procedure is presented for the direct summation of the velocity field introduced by point vortices which significantly reduces the required number of operations by replacing selected partial sums by asymptotic series. Tables are presented which demonstrate the speed of this algorithm in terms of the mere doubling of computational time in dealing with a doubling of the number of vortices; current methods involve a computational time extension by a factor of 4. This procedure need not be restricted to the solution of the Poisson equation, and may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between groups is sufficiently great.
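The O(N*M) direct summation that this algorithm accelerates can be written down compactly: each point vortex of circulation Gamma induces a velocity (Gamma/(2*pi*r^2)) * (-(y - y_j), x - x_j) at distance r. The sketch below (ours; the far-field asymptotic-series acceleration is not shown) is the naive baseline:

```python
import numpy as np

def vortex_velocity(targets, vortices, gamma):
    """Direct summation of the 2D velocity field induced by point
    vortices at an array of target points. O(N*M) cost; the paper's
    method replaces selected far-field partial sums with asymptotic
    series to reduce this (acceleration not implemented here)."""
    vel = np.zeros_like(targets)
    for (xj, yj), g in zip(vortices, gamma):
        dx = targets[:, 0] - xj
        dy = targets[:, 1] - yj
        r2 = dx * dx + dy * dy
        r2[r2 == 0.0] = np.inf          # no self-induced velocity
        vel[:, 0] += -g * dy / (2.0 * np.pi * r2)
        vel[:, 1] += g * dx / (2.0 * np.pi * r2)
    return vel

# A unit vortex at the origin induces speed 1/(2*pi*r) at radius r.
v = vortex_velocity(np.array([[1.0, 0.0]]), [(0.0, 0.0)], [1.0])
print(np.isclose(v[0, 1], 1.0 / (2.0 * np.pi)))  # True
```

Doubling the number of vortices quadruples the work in this baseline, which is exactly the factor-of-4 scaling the paper's adaptive summation reduces to roughly a factor of 2.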
Sukumar, Subash; Waugh, Sarah J
2007-03-01
We estimated spatial summation areas for the detection of luminance-modulated (LM) and contrast-modulated (CM) blobs at the fovea, 2.5, 5 and 10 deg eccentrically. Gaussian profiles were added or multiplied to binary white noise to create LM and CM blob stimuli and these were used to psychophysically estimate detection thresholds and spatial summation areas. The results reveal significantly larger summation areas for detecting CM than LM blobs across eccentricity. These differences are comparable to receptive field size estimates made in V1 and V2. They support the notion that separate spatial processing occurs for the detection of LM and CM stimuli.
Anderson, Ursula S; Stoinski, Tara S; Bloomsmith, Mollie A; Maple, Terry L
2007-02-01
The ability to select the larger of two quantities ranging from 1 to 5 (relative numerousness judgment [RNJ]) and the ability to select the larger of two pairs of quantities with each pair ranging from 1 to 8 (summation) were evaluated in young, middle-aged, and older adult orangutans (7 Pongo pygmaeus abelii and 2 Pongo pygmaeus pygmaeus). Summation accuracy and RNJ were similar to those of previous reports in apes; however, the pattern of age-related differences with regard to these tasks was different from that previously reported in gorillas. Older orangutans were less accurate than the young and middle-aged for RNJ, and summation accuracy was equivalent among age groups. Evidence was found to suggest that the young and middle-aged based their selection of the largest quantity pair on both quantities within each pair during the summation task. These results show a relationship between subject age and the quantitative abilities of adult orangutans.
How we give personalised audio feedback after summative OSCEs.
Harrison, Christopher J; Molyneux, Adrian J; Blackwell, Sara; Wass, Valerie J
2015-04-01
Students often receive little feedback after summative objective structured clinical examinations (OSCEs) to enable them to improve their performance. Electronic audio feedback has shown promise in other educational areas. We investigated the feasibility of electronic audio feedback in OSCEs. An electronic OSCE system was designed, comprising (1) an application for iPads allowing examiners to mark in the key consultation skill domains, provide "tick-box" feedback identifying strengths and difficulties, and record voice feedback; (2) a feedback website giving students the opportunity to view/listen in multiple ways to the feedback. Acceptability of the audio feedback was investigated, using focus groups with students and questionnaires with both examiners and students. 87 (95%) students accessed the examiners' audio comments; 83 (90%) found the comments useful and 63 (68%) reported changing the way they perform a skill as a result of the audio feedback. They valued its highly personalised, relevant nature and found it much more useful than written feedback. Eighty-nine per cent of examiners gave audio feedback to all students on their stations. Although many found the method easy, lack of time was a factor. Electronic audio feedback provides timely, personalised feedback to students after a summative OSCE provided enough time is allocated to the process.
Multilevel Summation of Electrostatic Potentials Using Graphics Processing Units*
Hardy, David J.; Stone, John E.; Schulten, Klaus
2009-01-01
Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3-D lattice of “weights” over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12 seconds. PMID:20161132
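The long-range kernel described above, a fixed lattice of weights convolved over a charge lattice, can be sketched serially in NumPy (toy sizes, non-periodic zero padding; the paper's GPU implementation instead streams optimally sized sub-cubes through the multiprocessors):

```python
import numpy as np

n, cutoff = 9, 3.0                       # toy lattice side and cutoff (lattice units)
q = np.zeros((n, n, n))
q[4, 4, 4] = 1.0                         # a single unit charge at the lattice centre

# Fixed 7x7x7 stencil of weights: a truncated 1/r pair potential, zero self term.
r = np.arange(-3, 4)
dx, dy, dz = np.meshgrid(r, r, r, indexing="ij")
dist = np.sqrt(dx**2 + dy**2 + dz**2)
with np.errstate(divide="ignore"):
    w = np.where((dist > 0) & (dist <= cutoff), 1.0 / dist, 0.0)

# Convolve the fixed stencil over every point of the zero-padded charge lattice.
qp = np.pad(q, 3)
phi = np.zeros_like(q)
for i, j, k in np.ndindex(n, n, n):
    phi[i, j, k] = np.sum(w * qp[i:i + 7, j:j + 7, k:k + 7])
```

With the unit charge at the centre, phi reproduces 1/r at lattice points within the cutoff and is zero beyond it, which is the behaviour the GPU kernel accelerates at scale.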
Kiley, Kasey B.; Haywood, Carlton; Bediako, Shawn M.; Lanzkron, Sophie; Carroll, C. Patrick; Buenaver, Luis F.; Pejsa, Megan; Edwards, Robert R.; Haythornthwaite, Jennifer A.; Campbell, Claudia M.
2016-01-01
Objective: People living with sickle cell disease (SCD) experience severe episodic and chronic pain and frequently report poor interpersonal treatment within health-care settings. In this particularly relevant context, we examined the relationship between perceived discrimination and both clinical and laboratory pain. Methods: Seventy-one individuals with SCD provided self-reports of experiences with discrimination in health-care settings and clinical pain severity, and completed a psychophysical pain testing battery in the laboratory. Results: Discrimination in health-care settings was correlated with greater clinical pain severity and enhanced sensitivity to multiple laboratory-induced pain measures, as well as stress, depression, and sleep. After controlling for relevant covariates, discrimination remained a significant predictor of mechanical temporal summation (a marker of central pain facilitation), but not clinical pain severity or suprathreshold heat pain response. Furthermore, a significant interaction between experience with discrimination and clinical pain severity was associated with mechanical temporal summation; increased experience with discrimination was associated with an increased correlation between clinical pain severity and temporal summation of pain. Discussion: Perceived discrimination within health-care settings was associated with pain facilitation. These findings suggest that discrimination may be related to increased central sensitization among SCD patients, and more broadly that health-care social environments may interact with pain pathophysiology. PMID:26889615
Can formative quizzes predict or improve summative exam performance?*
Zhang, Niu; Henderson, Charles N.R.
2015-01-01
Objective Despite wide use, the value of formative exams remains unclear. We evaluated the possible benefits of formative assessments in a physical examination course at our chiropractic college. Methods Three hypotheses were examined: (1) receiving formative quizzes (FQs) will increase summative exam (SX) scores, (2) writing FQ questions will further increase SX scores, and (3) FQs can predict SX scores. Hypotheses were tested across three separate iterations of the class. Results The SX scores for the control group (Class 3) were significantly lower than those of Classes 1 and 2, but writing quiz questions and taking FQs (Class 1) did not produce significantly higher SX scores than only taking FQs (Class 2). The FQ scores were significant predictors of SX scores, accounting for 52% of the SX score variance. Sex, age, academic degrees, and ethnicity were not significant copredictors. Conclusion Our results support the assertion that FQs can improve written SX performance, but producing quiz questions did not further increase SX scores. We conclude that nonthreatening FQs may be used to enhance student learning and suggest that they may also serve to identify students who, without additional remediation, will perform poorly on subsequent summative written exams. PMID:25517737
ERIC Educational Resources Information Center
Lookadoo, Kathryn L.; Bostwick, Eryn N.; Ralston, Ryan; Elizondo, Francisco Javier; Wilson, Scott; Shaw, Tarren J.; Jensen, Matthew L.
2017-01-01
This study examined the role of formative and summative assessment in instructional video games on student learning and engagement. A 2 (formative feedback: present vs absent) × 2 (summative feedback: present vs absent) factorial design with an offset control (recorded lecture) was conducted to explore the impacts of assessment in video games. A…
Multi-party quantum summation without a trusted third party based on single particles
NASA Astrophysics Data System (ADS)
Zhang, Cai; Situ, Haozhen; Huang, Qiong; Yang, Pingle
We propose multi-party quantum summation protocols based on single particles, in which participants are allowed to compute the summation of their inputs without the help of a trusted third party and preserve the privacy of their inputs. Only one participant who generates the source particles needs to perform unitary operations and only single particles are needed in the beginning of the protocols.
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin
2009-10-01
Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed and its optimisation parameters and bandwidth limitations are studied.
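A minimal sketch of SPGD phase control for coherent beam summation (hypothetical gains and channel count; a real system would measure the combining metric from a photodetector rather than compute it):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                    # number of fibre laser channels (hypothetical)
theta = rng.uniform(-np.pi, np.pi, n)    # unknown path-length phase errors
u = np.zeros(n)                          # controller phases

def metric(u):
    """Normalised on-axis intensity of the coherently summed beams (1.0 = locked)."""
    return np.abs(np.exp(1j * (theta + u)).sum()) ** 2 / n**2

gain, amp = 1.5, 0.1                     # SPGD gain and dither amplitude
for _ in range(5000):
    delta = amp * rng.choice([-1.0, 1.0], n)     # random bipolar perturbation
    dj = metric(u + delta) - metric(u - delta)   # two-sided metric difference
    u += gain * dj * delta                       # stochastic parallel gradient step

print(metric(u))                         # final combining efficiency
```

The update needs only a single scalar metric per dither, which is why SPGD scales to a relatively large number of elements; its bandwidth is limited by the iteration rate needed to track phase noise.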
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
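The defining SBP property can be checked numerically with the classical second-order operator (this is the standard non-optimised construction, not the dispersion-optimised operators of the paper):

```python
import numpy as np

m, h = 11, 0.1                           # grid points and spacing (hypothetical)
# Classical second-order SBP first-derivative operator D = H^{-1} Q.
H = h * np.diag([0.5] + [1.0] * (m - 2) + [0.5])       # diagonal norm (quadrature)
Q = 0.5 * (np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5           # so that Q + Q^T = diag(-1, 0, ..., 0, 1)
D = np.linalg.solve(H, Q)

x = np.linspace(0.0, 1.0, m)
u, v = np.sin(x), np.cos(x)

# Discrete integration by parts: u^T H (Dv) + (Du)^T H v = u_N v_N - u_0 v_0.
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
```

The identity holds to machine precision for any grid functions u, v, which is what makes SBP operators (combined with suitable interface treatment) provably stable; the optimisation in the paper tunes the free coefficients of higher-order versions to minimise dispersion error.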
Schmidtmann, Gunnar; Jennings, Ben J; Bell, Jason; Kingdom, Frederick A A
2015-01-01
Previous studies investigating signal integration in circular Glass patterns have concluded that the information in these patterns is linearly summed across the entire display for detection. Here we test whether an alternative form of summation, probability summation (PS), modeled under the assumptions of Signal Detection Theory (SDT), can be rejected as a model of Glass pattern detection. PS under SDT alone predicts that the exponent β of the Quick- (or Weibull-) fitted psychometric function should decrease with increasing signal area. We measured spatial integration in circular, radial, spiral, and parallel Glass patterns, as well as comparable patterns composed of Gabors instead of dot pairs. We measured the signal-to-noise ratio required for detection as a function of the size of the area containing signal, with the remaining area containing dot-pair or Gabor-orientation noise. Contrary to some previous studies, we found that the strength of summation never reached values close to linear summation for any stimuli. More importantly, the exponent β systematically decreased with signal area, as predicted by PS under SDT. We applied a model for PS under SDT and found that it gave a good account of the data. We conclude that probability summation is the most likely basis for the detection of circular, radial, spiral, and parallel orientation-defined textures.
Evaluation of the Michigan Public School Academy Initiative: Final Report [and] Executive Summary.
ERIC Educational Resources Information Center
Horn, Jerry; Miron, Gary
This is the final report of a one-year evaluation of the Michigan Public School Academy (PSA) initiative. The evaluation involved both formative and summative evaluations and used both qualitative and quantitative methods. The study was conducted between October 1997 and December 1998. Data-collection methods included a charter-school survey and a…
Program Evaluation of a Competency-Based Online Model in Higher Education
ERIC Educational Resources Information Center
DiGiacomo, Karen
2017-01-01
In order to serve its nontraditional students, a university piloted a competency-based program as alternative method for its students to earn college credit. The purpose of this mixed-methods study was to conduct a summative program evaluation to determine if the program was successful in order to make decisions about program revision and…
Rhudy, Jamie L; Martin, Satin L; Terry, Ellen L; Delventura, Jennifer L; Kerr, Kara L; Palit, Shreela
2012-11-01
Emotion can modulate pain and spinal nociception, and correlational data suggest that cognitive-emotional processes can facilitate wind-up-like phenomena (ie, temporal summation of pain). However, there have been no experimental studies that manipulated emotion to determine whether within-subject changes in emotion influence temporal summation of pain (TS-pain) and the nociceptive flexion reflex (TS-NFR, a physiological measure of spinal nociception). The present study presented a series of emotionally charged pictures (mutilation, neutral, erotic) during which electric stimuli at 2 Hz were delivered to the sural nerve to evoke TS-pain and TS-NFR. Participants (n=46 healthy; 32 female) were asked to rate their emotional reactions to pictures as a manipulation check. Pain outcomes were analyzed using statistically powerful multilevel growth curve models. Results indicated that emotional state was effectively manipulated. Further, emotion modulated the overall level of pain and NFR; pain and NFR were highest during mutilation and lowest during erotic pictures. Although pain and NFR both summated in response to the 2-Hz stimulation series, the magnitude of pain summation (TS-pain) and NFR summation (TS-NFR) was not modulated by picture-viewing. These results imply that, at least in healthy humans, within-subject changes in emotions do not promote central sensitization via amplification of temporal summation. However, future studies are needed to determine whether these findings generalize to clinical populations (eg, chronic pain). Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Anderson, Henry A; Imm, Pamela; Knobeloch, Lynda; Turyk, Mary; Mathew, John; Buelow, Carol; Persky, Victoria
2008-09-01
Polybrominated diphenyl ethers (PBDEs) have been used as flame retardants in foams, fabrics and plastics; they are common contaminants of household air and dust, bioaccumulate in wildlife, and are detectable in human tissues and in fish and animal food products. In the Great Lakes Basin, sport fish consumption has been demonstrated to be an important source of PCB and DDE exposure. PBDEs are present in the same sport fish, but prior to our study the contribution to human PBDE body burdens from Great Lakes sport fish consumption had not been investigated. This study was designed to assess PBDE, PCB and 1,1-bis(4-chlorophenyl)-2,2-dichloroethene (DDE) serum concentrations in an existing cohort of 508 frequent and infrequent consumers of sport-caught fish living in five Great Lakes states. BDE congeners 47 and 99 were identified in the majority of blood samples, 98% and 62% respectively. ΣPBDE levels were positively associated with age, hours spent outdoors, DDE, ΣPCB, years of sportfish consumption, and catfish and shellfish intake, and negatively associated with income and recent weight loss. Other dietary components collected were not predictive of measured ΣPBDE levels. In multivariate models, ΣPBDE levels were positively associated with age, years consuming sport fish, shellfish meals, and computer use and negatively associated with recent weight loss. Having ΣPBDE levels in the highest quintile was independently associated with older age, male gender, consumption of catfish and shellfish, computer use and spending less time indoors. ΣPCB and DDE were strongly associated, suggesting common exposure routes. The association between ΣPBDE and ΣPCB or DDE was much weaker, and modeling suggested more diverse PBDE sources with few identified multi-contaminant-shared exposure routes.
In our cohort Great Lakes sport fish consumption does not contribute strongly to PBDE exposure.
Rohrbach, Helene; Korpivaara, Toni; Schatzmann, Urs; Spadavecchia, Claudia
2009-07-01
To evaluate and compare the antinociceptive effects of the three alpha-2 agonists, detomidine, romifidine and xylazine at doses considered equipotent for sedation, using the nociceptive withdrawal reflex (NWR) and temporal summation model in standing horses. Prospective, blinded, randomized cross-over study. Ten healthy adult horses weighing 527-645 kg and aged 11-21 years old. Electrical stimulation was applied to the digital nerves to evoke NWR and temporal summation in the left thoracic limb and pelvic limb of each horse. Electromyographic reflex activity was recorded from the common digital extensor and the cranial tibial muscles. After baseline measurements a single bolus dose of detomidine, 0.02 mg kg⁻¹, romifidine 0.08 mg kg⁻¹, or xylazine, 1 mg kg⁻¹, was administered intravenously (IV). Determinations of NWR and temporal summation thresholds were repeated at 10, 20, 30, 40, 60, 70, 90, 100, 120 and 130 minutes after test-drug administration alternating the thoracic limb and the pelvic limb. Depth of sedation was assessed before measurements at each time point. Behavioural reaction was observed and recorded following each stimulation. The administration of detomidine, romifidine and xylazine significantly increased the current intensities necessary to evoke NWR and temporal summation in thoracic limbs and pelvic limbs of all horses compared with baseline. Xylazine increased NWR thresholds over baseline values for 60 minutes, while detomidine and romifidine increased NWR thresholds over baseline for 100 and 120 minutes, respectively. Temporal summation thresholds were significantly increased for 40, 70 and 130 minutes after xylazine, detomidine and romifidine, respectively. Detomidine, romifidine and xylazine, administered IV at doses considered equipotent for sedation, significantly increased NWR and temporal summation thresholds, used as a measure of antinociceptive activity.
The extent of maximal increase of NWR and temporal summation thresholds was comparable, while the duration of action was drug-specific.
Walsh, Jason L; Harris, Benjamin H L; Denny, Paul; Smith, Phil
2018-02-01
There are few studies on the value of authoring questions as a study method, the quality of the questions produced by students and student perceptions of student-authored question banks. Here we evaluate PeerWise, a widely used and free online resource that allows students to author, answer and discuss multiple-choice questions. We introduced two undergraduate medical student cohorts to PeerWise (n=603). We looked at their patterns of PeerWise usage; identified associations between student engagement and summative exam performance; and used focus groups to assess student perceptions of the value of PeerWise for learning. We undertook item analysis to assess question difficulty and quality. Over two academic years, the two cohorts wrote 4671 questions, answered questions 606 658 times and posted 7735 comments. Question writing frequency correlated most strongly with summative performance (Spearman's rank: 0.24, p<0.001). Student focus groups found that: (1) students valued curriculum specificity; and (2) students were concerned about student-authored question quality. Only two questions of the 300 'most-answered' questions analysed had an unacceptable discriminatory value (point-biserial correlation <0.2). Item analysis suggested acceptable question quality despite student concerns. Quantitative and qualitative methods indicated that PeerWise is a valuable study tool. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
An Interactive Computer Aided Design and Analysis Package.
1986-03-01
Thesis by Terrence L. Ewald, Naval Postgraduate School, Monterey, California, March 1986 (report AD-A167 114, unclassified).
Proposal of an environmental performance index to assess solid waste treatment technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulart Coelho, Hosmanny Mauro, E-mail: hosmanny@hotmail.com; Lange, Lisete Celina; Coelho, Lineker Max Goulart
2012-07-15
Highlights: Proposal of a new concept in waste management: Cleaner Treatment. Development of an index to quantitatively assess waste treatment technologies. The Delphi Method was carried out to define environmental indicators. Environmental performance evaluation of waste-to-energy plants. - Abstract: Although concern with sustainable development and environmental protection has grown considerably in recent years, the majority of decision-making models and tools are still either excessively tied to economic aspects or geared to the production process. Moreover, existing models focus on the priority steps of solid waste management, beyond waste energy recovery and disposal. To address the lack of models and tools aimed at waste treatment and final disposal, a new concept is proposed: Cleaner Treatment, which is based on the Cleaner Production principles. This paper focuses on the development and validation of the Cleaner Treatment Index (CTI) to assess the environmental performance of waste treatment technologies based on the Cleaner Treatment concept. The index is formed by aggregation (summation or product) of several indicators that consist of operational parameters. The weights of the indicators were established by the Delphi Method and Brazilian environmental laws. In addition, sensitivity analyses were carried out comparing both aggregation methods. Finally, index validation was carried out by applying the CTI to data from 10 waste-to-energy plants. From the sensitivity analysis and validation results it is possible to infer that the summation model is the most suitable aggregation method. For the summation method, CTI results were above 0.5 (on a scale from 0 to 1) for most facilities evaluated. This study therefore demonstrates that the CTI is a simple and robust tool to assess and compare the environmental performance of different treatment plants, and an excellent quantitative tool to support Cleaner Treatment implementation.
NASA Astrophysics Data System (ADS)
Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang
2017-12-01
Research in various fields generally investigates systems that involve latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured with attitude-scale questionnaires yield data in the form of scores, which should be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained by the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variance are said to be more efficient, so the transformation method that produces scale data yielding path coefficients with smaller variance is the better one. Analysis of real data shows that for the influence of the Attitude variable on Entrepreneurship Intention the relative efficiency is ER = 1, indicating that analyses using MSI- and SRS-transformed data are equally efficient. For simulated data with high correlation between items (0.7-0.9), however, the MSI method is about 1.3 times more efficient than the SRS method.
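A sketch of the MSI transformation for a single item (hypothetical response proportions; SRS, by contrast, simply sums the raw item scores): each category's scale value is the mean of a standard normal variable restricted to that category's interval on the latent scale.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
# Observed response proportions for one 5-point Likert item (hypothetical data).
p = np.array([0.05, 0.15, 0.40, 0.30, 0.10])

# Category boundaries on the latent standard-normal scale.
z = np.array([nd.inv_cdf(c) for c in np.cumsum(p)[:-1]])

# MSI scale value of category k: (pdf(z_{k-1}) - pdf(z_k)) / p_k, i.e. the mean
# of the latent normal restricted to the category's interval.
pdf = np.array([0.0] + [nd.pdf(v) for v in z] + [0.0])
scale = (pdf[:-1] - pdf[1:]) / p
```

The resulting scale values are strictly increasing and, weighted by the proportions, average to zero; in practice they are usually shifted so the smallest value is a convenient constant before being substituted for the raw scores.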
Creative Approaches to Parenting Education.
ERIC Educational Resources Information Center
DeBord, Karen; Roseboro, Jacqueline D.; Wicker, Karen M.
1998-01-01
Two North Carolina projects used methods from the National Network for Family Resiliency's Parenting Evaluation Decision Framework. Parenting for Success for Hispanic Parents used focus group interviews and summative evaluation. Individualized education for Head Start parents used pre/posttests of parental self-esteem and child development…
NASA Astrophysics Data System (ADS)
Capineri, L.; Bulletti, A.; Calzolai, M.; Giannelli, P.
2016-12-01
This paper describes the design and fabrication of a 16-element transducer array for airborne ultrasonic imaging operating at 150 kHz, which can operate both at close range (50 mm), in the near field of the synthetic aperture, and up to 250 mm. The proposed imaging technique is based on a modified version of the delay-and-sum algorithm implemented with a synthetic aperture, where each pixel amplitude is determined by integrating the signal obtained from the coherent summation of the acquired signals over a delayed window of fixed length. Image reconstruction from raw data makes it possible to detect targets with feature sizes on the order of one wavelength, because the signals are summed coherently over the selected window length, whereas reconstruction from the summation of enveloped signals increases the amplitude response at the expense of lower spatial resolution. For the implementation of this system it is important to design compact airborne transducers with a large field of view, which is obtained with a new design of hemi-cylindrical polyvinylidene fluoride film transducers mounted directly on a printed circuit board. This new method is low cost and yields repeatable transducer characteristics. The complete system is compact, with a modular architecture in which eight boards with dual ultrasonic channels are mounted on a motherboard. Each daughter board hosts a microcontroller unit and can operate with transducers in the 40-200 kHz bandwidth, with on-board data acquisition, pre-processing and transfer over a dedicated bus.
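The pixel computation described above, coherent summation of per-channel signals sampled at their round-trip delays, can be sketched as follows (hypothetical array geometry and pulse model; the paper integrates over a delayed window of fixed length rather than taking a single sample):

```python
import numpy as np

c, fs, f0 = 343.0, 1.0e6, 150.0e3        # speed of sound (m/s), sample rate, centre freq
elems = np.stack([np.linspace(-0.02, 0.02, 16), np.zeros(16)], axis=1)
target = np.array([0.005, 0.10])         # point target ~100 mm from the array

# Simulated pulse-echo channel data: Gaussian-windowed tone at each round-trip delay.
t = np.arange(0.0, 1.5e-3, 1.0 / fs)
sig = np.zeros((16, t.size))
for k, e in enumerate(elems):
    tau = 2.0 * np.linalg.norm(target - e) / c
    sig[k] = np.cos(2 * np.pi * f0 * (t - tau)) * np.exp(-(((t - tau) / 5e-6) ** 2))

def das_pixel(p):
    """Delay-and-sum: coherently sum each channel sampled at its round-trip delay."""
    acc = 0.0
    for k, e in enumerate(elems):
        tau = 2.0 * np.linalg.norm(p - e) / c
        acc += sig[k, int(round(tau * fs))]
    return abs(acc)

on = das_pixel(target)                          # in-phase at the true target
off = das_pixel(target + np.array([0.0, 0.02])) # incoherent 20 mm away
```

At the true target location the per-channel samples add in phase, while at other pixels the delays do not match the echoes, which is what produces the image contrast.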
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
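The inversion scheme, an L2 waveform misfit with L1 regularization solved by preconditioned gradient steps, can be sketched with a generic linear operator standing in for the Gaussian beam modeling operator (an ISTA-style soft-thresholding iteration; everything here is a stand-in, not the paper's operator or preconditioner):

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.standard_normal((80, 40)) / np.sqrt(80)  # stand-in linear modeling operator
m_true = np.zeros(40)
m_true[[5, 17, 30]] = [1.0, -0.7, 0.5]           # sparse "reflectivity" model
d = L @ m_true + 0.01 * rng.standard_normal(80)  # recorded data with noise

lam = 0.01                                       # L1 regularization weight
step = 1.0 / np.linalg.norm(L.T @ L, 2)          # scalar stand-in preconditioner
m = np.zeros(40)
for _ in range(500):
    g = L.T @ (L @ m - d)                        # gradient of the L2 misfit (adjoint op)
    m = m - step * g
    m = np.sign(m) * np.maximum(np.abs(m) - step * lam, 0.0)   # L1 soft threshold
```

A single application of the adjoint, L.T @ d, is the analogue of conventional migration; iterating with the gradient and the sparsity-promoting threshold is what recovers balanced amplitudes from the unbalanced forward operator.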
New method of extracting information of arterial oxygen saturation based on ∑|Δ|
NASA Astrophysics Data System (ADS)
Dai, Wenting; Lin, Ling; Li, Gang
2017-04-01
Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. In order to further enhance its detection precision and reliability, this paper proposes a method of time-domain absolute difference summation (∑|Δ|) based on a dynamic spectrum. In this method, the ratio of the absolute differences between two differential sampling points at the same moment on the logarithmic photoplethysmography signals of red and infrared light is obtained in turn, yielding a ratio sequence that is screened with a statistical method. Finally, the summation of the screened ratio sequence is used as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and compared the results of two methods, ∑|Δ| and peak-peak. The average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in 20 randomly selected cases. In addition, the average variance of Q for the 10 samples obtained by the new method was reduced to 22.77% of that obtained by the peak-peak method. Compared with the commercial product, the new method gives more accurate results. Theoretical and experimental analysis indicates that application of the ∑|Δ| method could enhance the precision and reliability of oxygen saturation detection in real time.
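A sketch of the ∑|Δ| computation on synthetic log-PPG signals (the statistical screening rule below is a hypothetical stand-in, since the abstract does not specify it):

```python
import numpy as np

rng = np.random.default_rng(4)
fs, f = 100.0, 1.2                       # sample rate (Hz) and pulse rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
pulse = 0.05 * (1.0 + np.sin(2 * np.pi * f * t))        # arterial volume pulsation

# Logarithmic PPG signals; the red/infrared absorbance ratio is set to 0.5 here.
ln_red = -0.5 * pulse + 1e-5 * rng.standard_normal(t.size)
ln_ir = -1.0 * pulse + 1e-5 * rng.standard_normal(t.size)

# Ratio of absolute differences between consecutive sampling points, in turn.
ratios = np.abs(np.diff(ln_red)) / np.abs(np.diff(ln_ir))

# Statistical screening: keep ratios near the sequence median (hypothetical rule).
med = np.median(ratios)
keep = np.abs(ratios - med) < 0.2 * med
Q = ratios[keep].sum()                   # oxygen saturation coefficient
```

Using many per-sample differences instead of the single peak-to-peak excursion is what averages down noise; the screening discards the unstable ratios that occur where the infrared difference passes through zero.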
Measuring temporal summation in visual detection with a single-photon source.
Holmes, Rebecca; Victora, Michelle; Wang, Ranxiao Frances; Kwiat, Paul G
2017-11-01
Temporal summation is an important feature of the visual system which combines visual signals that arrive at different times. Previous research estimated complete summation to last for 100 ms for stimuli judged "just detectable." We measured the full range of temporal summation for much weaker stimuli using a new paradigm and a novel light source, developed in the field of quantum optics for generating small numbers of photons with precise timing characteristics and reduced variance in photon number. Dark-adapted participants judged whether a light was presented to the left or right of their fixation in each trial. In Experiment 1, stimuli contained a stream of photons delivered at a constant rate while the duration was systematically varied. Accuracy should increase with duration as long as the later photons can be integrated with the preceding ones into a single signal. The temporal integration window was estimated as the point at which performance no longer improved, and was found to be 650 ms on average. In Experiment 2, the duration of the visual stimuli was kept short (100 ms or <30 ms) while the number of photons was varied to explore the efficiency of summation over the integration window compared to Experiment 1. There was some indication that temporal summation remains efficient over the integration window, although there is variation between individuals. The relatively long integration window measured in this study may be relevant to studies of the absolute visual threshold, i.e., tests of single-photon vision, where "single" photons should be separated by greater than the integration window to avoid summation. Copyright © 2017 Elsevier Ltd. All rights reserved.
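The estimate "the point at which performance no longer improves" can be sketched as a simple ramp-then-plateau fit over the accuracy-versus-duration data. This is an illustrative assumption, not the authors' fitting procedure, and the function name is hypothetical:

```python
import numpy as np

def integration_window(durations, accuracy):
    """Estimate the temporal integration window as the duration beyond
    which accuracy stops improving (sketch): for each candidate
    breakpoint, fit a rising line before it and a flat line after it,
    and keep the breakpoint with the smallest total squared error.
    """
    d = np.asarray(durations, dtype=float)
    a = np.asarray(accuracy, dtype=float)
    best_sse, best_bp = np.inf, d[-1]
    for i in range(2, len(d) - 1):  # breakpoint needs points on both sides
        coef = np.polyfit(d[:i + 1], a[:i + 1], 1)                 # rising segment
        sse = np.sum((np.polyval(coef, d[:i + 1]) - a[:i + 1]) ** 2)
        sse += np.sum((a[i:] - a[i:].mean()) ** 2)                 # flat segment
        if sse < best_sse:
            best_sse, best_bp = sse, d[i]
    return best_bp
```

With accuracy rising over short durations and flat thereafter, the returned breakpoint approximates where summation ends.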
SINDA, Systems Improved Numerical Differencing Analyzer
NASA Technical Reports Server (NTRS)
Fink, L. C.; Pan, H. M. Y.; Ishimoto, T.
1972-01-01
Computer program has been written to analyze group of 100-node areas and then provide for summation of any number of 100-node areas to obtain temperature profile. SINDA program options offer user variety of methods for solution of thermal analog modes presented in network format.
Aligning Assessments for COSMA Accreditation
ERIC Educational Resources Information Center
Laird, Curt; Johnson, Dennis A.; Alderman, Heather
2015-01-01
Many higher education sport management programs are currently in the process of seeking accreditation from the Commission on Sport Management Accreditation (COSMA). This article provides a best-practice method for aligning student learning outcomes with a sport management program's mission and goals. Formative and summative assessment procedures…
Bergadano, Alessandra; Andersen, Ole K; Arendt-Nielsen, Lars; Spadavecchia, Claudia
2007-08-01
To investigate the facilitation of the nociceptive withdrawal reflex (NWR) by repeated electrical stimuli and the associated behavioral response scores in conscious, nonmedicated dogs as a measure of temporal summation and analyze the influence of stimulus intensity and frequency on temporal summation responses. 8 adult Beagles. Surface electromyographic responses evoked by transcutaneous constant-current electrical stimulation of ulnaris and digital plantar nerves were recorded from the deltoideus, cleidobrachialis, biceps femoris, and cranial tibial muscles. A repeated stimulus was given at 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, and 1.1 x I(t) (the individual NWR threshold intensity) at 2, 5, and 20 Hz. Threshold intensity and relative amplitude and latency of the reflex were analyzed for each stimulus configuration. Behavioral reactions were subjectively scored. Repeated sub-I(t) stimuli summated and facilitated the NWR. To elicit temporal summation, significantly lower intensities were needed for the hind limb, compared with the forelimb. Stimulus frequency did not influence temporal summation, whereas increasing intensity resulted in significantly stronger electromyographic responses and nociception (determined via behavioral response scoring) among the dogs. In dogs, it is possible to elicit nociceptive temporal summation that correlates with behavioral reactions. These data suggest that this experimental technique can be used to evaluate nociceptive system excitability and efficacy of analgesics in canids.
ERIC Educational Resources Information Center
Roberts, Laura; Mancuso, Steven V.
2014-01-01
This mixed-methods study of 84 job advertisements for international school leaders on six continents from 2006 to 2012 entailed both qualitative and quantitative research methods. Job advertisements were obtained from the most active recruiting agency for school leaders worldwide. Conventional and summative content analysis procedures were used to…
Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace
NASA Astrophysics Data System (ADS)
Hou, Z.; Chen, Y.; Tan, K.; Du, P.
2018-04-01
Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of the spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace-based with Outlier Removal Anomaly Detector (UNRSORAD) and Local Summation UNRSORAD (LSUNRSORAD), are proposed, based on the concept that each background pixel can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, each test pixel is approximated by a linear combination of the surrounding data. Because outliers in the dual window degrade detection accuracy, the proposed detectors remove outlier pixels that differ significantly from the majority of pixels. To make full use of the local spatial distribution information in the pixels neighboring the pixel under test, we adopt a local-summation dual-window sliding strategy. The residual image is obtained by subtracting the predicted background from the original hyperspectral imagery, and anomalies are detected in the residual image. Experimental results show that the proposed methods greatly improve detection accuracy compared with other traditional detection methods.
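A simplified sketch of the dual-window background-representation idea with outlier removal. This is not the authors' exact UNRSORAD/LSUNRSORAD formulation; the window sizes, the distance-based trimming rule, and the function name are assumptions:

```python
import numpy as np

def dual_window_anomaly(cube, inner=1, outer=3, trim=0.2):
    """Dual-window anomaly scores for a (rows, cols, bands) cube.

    For each interior pixel, neighbors inside the outer window but
    outside the inner window form the background dictionary. The
    fraction `trim` of neighbors farthest from the local mean is
    discarded as outliers, the pixel is approximated by a least-squares
    combination of the rest, and the residual norm is the score.
    """
    rows, cols, _ = cube.shape
    scores = np.zeros((rows, cols))
    for r in range(outer, rows - outer):
        for c in range(outer, cols - outer):
            neigh = [cube[r + dr, c + dc]
                     for dr in range(-outer, outer + 1)
                     for dc in range(-outer, outer + 1)
                     if max(abs(dr), abs(dc)) > inner]
            A = np.array(neigh)                      # (n_neighbors, bands)
            dist = np.linalg.norm(A - A.mean(axis=0), axis=1)
            keep = np.argsort(dist)[: int(len(A) * (1 - trim))]
            A = A[keep]                              # outliers removed
            x = cube[r, c]
            w, *_ = np.linalg.lstsq(A.T, x, rcond=None)
            scores[r, c] = np.linalg.norm(x - A.T @ w)
    return scores
```

Background pixels are well represented by their neighborhoods and score near zero, while a spectrally distinct pixel leaves a large residual.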
Talamonti, Walter; Tijerina, Louis; Blommer, Mike; Swaminathan, Radhakrishnan; Curry, Reates; Ellis, R Darin
2017-11-01
This paper describes a new method, a 'mirage scenario,' to support formative evaluation of driver alerting or warning displays for manual and automated driving. This method provides driving contexts (e.g., various Times-To-Collision (TTCs) to a lead vehicle) briefly presented and then removed. In the present study, during each mirage event, a haptic steering display was evaluated. This haptic display indicated a steering response may be initiated to drive around an obstacle ahead. A motion-base simulator was used in a 32-participant study to present vehicle motion cues similar to the actual application. Surprise was neither present nor of concern, as it would be for a summative evaluation of a forward collision warning system. Furthermore, no collision avoidance maneuvers were performed, thereby reducing the risk of simulator sickness. This paper illustrates the mirage scenario procedures, the rating methods and definitions used with the mirage scenario, and analysis of the ratings obtained, together with a multi-attribute utility theory (MAUT) approach to evaluate and select among alternative designs for future summative evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Figure-ground segregation by motion contrast and by luminance contrast.
Regan, D; Beverley, K I
1984-05-01
Some naturally camouflaged objects are invisible unless they move; their boundaries are then defined by motion contrast between object and background. We compared the visual detection of such camouflaged objects with the detection of objects whose boundaries were defined by luminance contrast. The summation field area is 0.16 deg², and the summation time constant is 750 msec for parafoveally viewed objects whose boundaries are defined by motion contrast; these values are, respectively, about 5 and 12 times larger than the corresponding values for objects defined by luminance contrast. The log detection threshold is proportional to the eccentricity for a camouflaged object of constant area. The effect of eccentricity on threshold is less for large objects than for small objects. The log summation field diameter for detecting camouflaged objects is roughly proportional to the eccentricity, increasing to about 20 deg at 32-deg eccentricity. In contrast to the 100:1 increase of summation area for detecting camouflaged objects, the temporal summation time constant changes by only 40% between eccentricities of 0 and 16 deg.
Open Resonator for Summation of Powers in Sub-Terahertz and Terahertz Frequencies
NASA Astrophysics Data System (ADS)
Kuz'michev, I. K.; Yeryomka, V. D.; May, A. V.; Troshchilo, A. S.
2017-03-01
Purpose: To study the excitation features of the first higher axial-asymmetric type of oscillations in an open resonator connected into a waveguide transmission line. Design/methodology/approach: To determine the efficiency of higher-oscillation excitation in the resonator using the highest wave of a rectangular waveguide, the antenna surface utilization coefficient is used. The coefficient of reflection from the open resonator is determined by the known method of summation of the partial reflection coefficients of the resonant system. Findings: The excitation efficiency of the first higher axial-asymmetric TEM10q oscillations in an open resonator connected into the waveguide transmission line, using the TE20-type wave, is considered, accounting for the vector nature of the electromagnetic field. It is shown that for certain sizes of the exciting coupler the excitation efficiency of the working oscillation is equal to 0.867. Moreover, this resonant system has a single frequency response within a wide band of frequencies, so it can be applied to summation of the powers of individual oscillation sources. Since this resonant system allows the field-matching and coupling functions to be separated, any prescribed coupling of the sources with the resonant volume can be provided; for this purpose, one-dimensional diffraction gratings (E-polarization) are used. Conclusions: With matched excitation of axially asymmetric modes of oscillation, the resonant system exhibits angular and frequency spectrum selection, which is of great practical importance for power summation. By placing one-dimensional diffraction gratings (E-polarization) in the apertures of the coupling elements, the active elements can be matched with the resonant volume.
Contact heat-evoked temporal summation: tonic versus repetitive-phasic stimulation.
Granot, Michal; Granovsky, Yelena; Sprecher, Elliot; Nir, Rony-Reuven; Yarnitsky, David
2006-06-01
Temporal summation (TS) is usually evoked by repetitive mechanical or electrical stimuli, and less commonly by tonic heat pain. The present study aimed to examine the TS induction by repetitive-phasic versus tonic heat pain stimuli. Using 27 normal volunteers, we compared the extent of summation by three calculation methods: start-to-end pain rating difference, percent change, and double-logarithmic regression of successive ratings along the stimulation. Subjects were tested twice, and the reliability of each of the paradigms was obtained. In addition, personality factors related to pain catastrophizing and anxiety level were also correlated with the psychophysical results. Both paradigms induced significant TS, with similar increases for the repetitive-phasic and the tonic paradigms, as measured on 0-100 numerical pain scale (from 52.9+/-11.7 to 80.2+/-15.5, p<0.001; and from 38.5+/-13.3 to 75.8+/-18.3, p<0.001, respectively). The extent of summation was significantly correlated between the two paradigms, when calculated by absolute change (r=0.543, p=0.004) and by regression (r=0.438, p=0.025). Session-to-session variability was similar for both paradigms, relatively large, yet not biased. As with other psychophysical parameters, this poses some limitations on TS assessment in individual patients over time. The extent of TS induced by both paradigms was found to be associated with anxiety level and pain catastrophizing. Despite some dissimilarity between the repetitive-phasic and the tonic paradigms, the many similarities suggest that the two represent a similar physiological process, even if not precisely the same. Future clinical applications of these tests will determine the clinical relevance of the TS paradigms presented in this study.
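The three calculation methods named in the abstract can be sketched as follows; the exact formulas are assumptions inferred from their names, and the function name is illustrative:

```python
import numpy as np

def ts_measures(ratings):
    """Three ways of quantifying temporal summation from successive
    0-100 pain ratings along a stimulation train (sketch):
    start-to-end difference, percent change, and the slope of a
    double-logarithmic regression of rating on stimulus index.
    """
    r = np.asarray(ratings, dtype=float)
    start_to_end = r[-1] - r[0]                        # absolute change
    percent_change = 100.0 * (r[-1] - r[0]) / r[0]     # percent change
    idx = np.arange(1, len(r) + 1)
    slope, _ = np.polyfit(np.log(idx), np.log(r), 1)   # double-log regression
    return start_to_end, percent_change, slope

# e.g. ratings climbing from 40 to 80 over five stimuli
delta, pct, slope = ts_measures([40, 50, 60, 70, 80])
```

A positive value on any of the three measures indicates summation; the correlations reported in the abstract compare these measures across paradigms.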
Salomon-Ferrer, Romelia; Götz, Andreas W; Poole, Duncan; Le Grand, Scott; Walker, Ross C
2013-09-10
We present an implementation of explicit solvent all atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs. First released publicly in April 2010 as part of version 11 of the AMBER MD package and further improved and optimized over the last two years, this implementation supports the three most widely used statistical mechanical ensembles (NVE, NVT, and NPT), uses particle mesh Ewald (PME) for the long-range electrostatics, and runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs), providing results that are statistically indistinguishable from the traditional CPU version of the software and with performance that exceeds that achievable by the CPU version of AMBER software running on all conventional CPU-based clusters and supercomputers. We briefly discuss three different precision models developed specifically for this work (SPDP, SPFP, and DPDP) and highlight the technical details of the approach as it extends beyond previously reported work [Götz et al., J. Chem. Theory Comput. 2012, DOI: 10.1021/ct200909j; Le Grand et al., Comp. Phys. Comm. 2013, DOI: 10.1016/j.cpc.2012.09.022]. We highlight the substantial improvements in performance that are seen over traditional CPU-only machines and provide validation of our implementation and precision models. We also provide evidence supporting our decision to deprecate the previously described fully single precision (SPSP) model from the latest release of the AMBER software package.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xujun; Li, Jiyuan; Jiang, Xikai
An efficient parallel Stokes’s solver is developed towards the complete inclusion of hydrodynamic interactions of Brownian particles in any geometry. A Langevin description of the particle dynamics is adopted, where the long-range interactions are included using a Green’s function formalism. We present a scalable parallel computational approach, where the general geometry Stokeslet is calculated following a matrix-free algorithm using the General geometry Ewald-like method. Our approach employs a highly-efficient iterative finite element Stokes’ solver for the accurate treatment of long-range hydrodynamic interactions within arbitrary confined geometries. A combination of mid-point time integration of the Brownian stochastic differential equation, the parallel Stokes’ solver, and a Chebyshev polynomial approximation for the fluctuation-dissipation theorem results in an O(N) parallel algorithm. We also illustrate the new algorithm in the context of the dynamics of confined polymer solutions in equilibrium and non-equilibrium conditions. Our method is extended to treat suspended finite size particles of arbitrary shape in any geometry using an Immersed Boundary approach.
Zhao, Xujun; Li, Jiyuan; Jiang, Xikai; ...
2017-06-29
The Logic of Summative Confidence
ERIC Educational Resources Information Center
Gugiu, P. Cristian
2007-01-01
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qingcheng, E-mail: qiy9@pitt.edu; To, Albert C., E-mail: albertto@pitt.edu
Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015)) is applied to capture surface effects for nanosized structures by designing a surface summation rule SR^S within the framework of MMM. Combined with the previously proposed bulk summation rule SR^B, the MMM summation rule SR^MMM is completed. SR^S and SR^B are consistently formulated within SR^MMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key idea behind the good performance of SR^MMM is that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position, and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface region differs from that of the bulk region; physically, the difference is due to the fact that surface atoms lack neighboring bonds. As such, SR^S and SR^B are employed for the surface and bulk domains, respectively. Two- and three-dimensional numerical examples using 4-node bilinear quadrilateral, 8-node quadratic quadrilateral, and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SR^MMM accurately captures corner, edge, and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SR^MMM with high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced by SR^MMM, analogous to the numerical integration error of quadrature rules in FEM, is very small.
- Highlights: • Surface effect captured by Multiresolution Molecular Mechanics (MMM) is presented. • A novel surface summation rule within the framework of MMM is proposed. • Surface, corner, and edge effects are accurately captured in two and three dimensions. • MMM with less than 0.3% of the atomistic degrees of freedom reproduces atomistic results.
Tackling student neurophobia in neurosciences block with team-based learning
Anwar, Khurshid; Shaikh, Abdul A.; Sajid, Muhammad R.; Cahusac, Peter; Alarifi, Norah A.; Al Shedoukhy, Ahlam
2015-01-01
Introduction: Traditionally, neurosciences is perceived as a difficult course in undergraduate medical education, with the literature suggesting use of the term “neurophobia” (fear of neurology among medical students). Instructional strategies employed for the teaching of neurosciences in undergraduate curricula traditionally include a combination of lectures, demonstrations, practical classes, problem-based learning, and clinico-pathological conferences. Recently, team-based learning (TBL), a student-centered instructional strategy, has increasingly been regarded in many undergraduate medical courses as an effective method to assist student learning. Methods: In this study, 156 students of a year-three neurosciences block were divided into seven male and seven female groups of 11–12 students each. TBL was introduced during the 6 weeks of this block, and a total of eight TBL sessions were conducted. We evaluated the effect of TBL on student learning and correlated it with students’ performance in the summative assessment. Moreover, the students’ perceptions of the TBL process were assessed by an online survey. Results: We found that students who attended TBL sessions performed better in the summative examinations than those who did not. Furthermore, students performed better in team activities than in individual testing, with male students performing better, with a more favorable impact on their grades in the summative examination. There was an increase in the number of students achieving higher grades (grade B and above) in this block compared to the previous block (51.7% vs. 25%). Moreover, the number of students at risk of lower grades (grade B- and below) decreased in this block compared to the previous block (30.6% vs. 55%).
Students generally responded favorably regarding the TBL process, expressed satisfaction with the content covered, and felt that such activities led to improvement in communication and interpersonal skills. Conclusion: We conclude that implementing the TBL strategy increased students’ responsibility for their own learning and helped them bridge gaps in their cognitive knowledge to tackle ‘neurophobia’ in a difficult neurosciences block, as evidenced by their improved performance in the summative assessment. PMID:26232115
NASA Technical Reports Server (NTRS)
Sawada, H.; Sakakibara, S.; Sato, M.; Kanda, H.; Karasawa, T.
1984-01-01
A quantitative evaluation method for the suction effect of a suction plate on the side walls is explained. Wind tunnel tests show that the wall interference is basically described by the summation of the two-dimensional wall interference and the side-wall interference.
ERIC Educational Resources Information Center
McLoughlin, M. Padraig M. M.
2012-01-01
P. R. Halmos recalled a conversation with R. L. Moore where Moore quoted a Chinese proverb. That proverb provides a summation of the justification of the methods employed in teaching students to do mathematics with a modified Moore method (MMM). It states, "I see, I forget; I hear, I remember; I do, I understand." In this paper we build…
Equity in Assistance? Usability of a U.S. Government Food Assistance Application
ERIC Educational Resources Information Center
Saal, Leah Katherine
2016-01-01
This article focuses on the quantitative phase of a multiphase mixed methods study investigating adults' and families' access to government food assistance. The research evaluates participants' comprehension of, and ability to, adequately complete authentic complex texts--national food assistance application documents. Summative usability testing…
Computer-Based Training for Library Staff: From Demonstration to Continuing Program.
ERIC Educational Resources Information Center
Bayne, Pauline S.
1993-01-01
Describes a demonstration project developed at the University of Tennessee (Knoxville) libraries to train nonprofessional library staff with computer-based training using HyperCard that was created by librarians rather than by computer programmers. Evaluation methods are discussed, including formative and summative evaluation; and modifications…
Assessment Intelligence in Small Group Learning
ERIC Educational Resources Information Center
Xing, Wanli; Wu, Yonghe
2014-01-01
Assessment of groups in CSCL context is a challenging task fraught with many confounding factors collected and measured. Previous documented studies are by and large summative in nature and some process-oriented methods require time-intensive coding of qualitative data. This study attempts to resolve these problems for teachers to assess groups…
Assessing Student Understanding: A Framework or Testing and Teaching
ERIC Educational Resources Information Center
Brendefur, Jonathan L.; Strother, Sam; Rich, Kelli; Appleton, Sarah
2016-01-01
Teachers use the word assessment to describe any method of gathering information about student learning. Whether it be formative assessment (intended to guide instructional decisions) or summative assessment (a reflection on the entirety of student learning from prior instruction), teachers are constantly working to identify what their students…
Using Longitudinal Scales Assessment for Instrumental Music Students
ERIC Educational Resources Information Center
Simon, Samuel H.
2014-01-01
In music education, current assessment trends emphasize student reflection, tracking progress over time, and formative as well as summative measures. This view of assessment requires instrumental music educators to modernize their approaches without interfering with methods that have proven to be successful. To this end, the Longitudinal Scales…
Assessing the Subsequent Effect of a Formative Evaluation on a Program.
ERIC Educational Resources Information Center
Brown, J. Lynne; Kiernan, Nancy Ellen
2001-01-01
Conducted a formative evaluation of an osteoporosis prevention health education program using several methods, including questionnaires completed by 256 women, and then compared formative evaluation results to those of a summative evaluation focusing on the same target group. Results show the usefulness of formative evaluation for strengthening…
Developments and Changes Resulting from Writing and Thinking Assessment
ERIC Educational Resources Information Center
Flateby, Teresa
2009-01-01
This article chronicles the evolution of a large research extensive institution's General Education writing assessment efforts from an initial summative focus to a formative, improvement focus. The methods of assessment, which changed as the assessment purpose evolved, are described. As more data were collected, the measurement tool was…
Orientation tuning of binocular summation: a comparison of colour to achromatic contrast
Gheiratmand, Mina; Cherniawsky, Avital S.; Mullen, Kathy T.
2016-01-01
A key function of the primary visual cortex is to combine the input from the two eyes into a unified binocular percept. At low, near threshold, contrasts a process of summation occurs if the visual inputs from the two eyes are similar. Here we measure the orientation tuning of binocular summation for chromatic and equivalent achromatic contrast. We derive estimates of orientation tuning by measuring binocular summation as a function of the orientation difference between two sinusoidal gratings presented dichoptically to different eyes. We then use a model to estimate the orientation bandwidth of the neural detectors underlying the binocular combination. We find that orientation bandwidths are similar for chromatic and achromatic stimuli at both low (0.375 c/deg) and mid (1.5 c/deg) spatial frequencies, with an overall average of 29 ± 3 degs (HWHH, s.e.m). This effect occurs despite the overall greater binocular summation found for the low spatial frequency chromatic stimuli. These results suggest that similar, oriented processes underlie both chromatic and achromatic binocular contrast combination. The non-oriented detection process found in colour vision at low spatial frequencies under monocular viewing is not evident at the binocular combination stage. PMID:27168119
Closed-form summations of Dowker's and related trigonometric sums
NASA Astrophysics Data System (ADS)
Cvijović, Djurdje; Srivastava, H. M.
2012-09-01
Through a unified and relatively simple approach which uses complex contour integrals, particularly convenient integration contours and calculus of residues, closed-form summation formulas for 12 very general families of trigonometric sums are deduced. One of them is a family of cosecant sums which was first summed in closed form in a series of papers by Dowker (1987 Phys. Rev. D 36 3095-101; 1989 J. Math. Phys. 30 770-3; 1992 J. Phys. A: Math. Gen. 25 2641-8), whose method has inspired our work in this area. All of the formulas derived here involve the higher-order Bernoulli polynomials. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’.
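The simplest classical member of the cosecant family gives a flavor of the closed forms involved. For integer n ≥ 2 (a standard identity, quoted here for illustration rather than taken from the paper):

```latex
\sum_{k=1}^{n-1} \csc^{2}\!\left(\frac{k\pi}{n}\right) \;=\; \frac{n^{2}-1}{3}
```

For n = 3 this reads csc²(π/3) + csc²(2π/3) = 4/3 + 4/3 = 8/3 = (9 − 1)/3; the higher-order generalizations treated in the paper replace the rational right-hand side with expressions in higher-order Bernoulli polynomials.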
Nonperturbative dynamics of scalar field theories through the Feynman-Schwinger representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetin Savkli; Franz Gross; John Tjon
2004-04-01
In this paper we present a summary of results obtained for scalar field theories using the Feynman-Schwinger representation (FSR) approach. Specifically, scalar QED and χ²φ theories are considered. The motivation behind the applications discussed in this paper is to use the FSR method as a rigorous tool for testing the quality of commonly used approximations in field theory. Exact calculations in a quenched theory are presented for one-, two-, and three-body bound states. The results obtained indicate that some commonly used approximations, such as the Bethe-Salpeter ladder summation for bound states and the rainbow summation for one-body problems, produce significantly different results from those obtained with the FSR approach. We find that more accurate results can be obtained using other, simpler, approximation schemes.
Izbicki, John A.; Groover, Krishangi D.
2018-03-22
This report describes (1) work done between January 2015 and May 2017 as part of the U.S. Geological Survey (USGS) hexavalent chromium, Cr(VI), background study and (2) the summative-scale approach to be used to estimate the extent of anthropogenic (man-made) Cr(VI) and background Cr(VI) concentrations near the Pacific Gas and Electric Company (PG&E) natural gas compressor station in Hinkley, California. Most of the field work for the study was completed by May 2017. The summative-scale approach and calculation of Cr(VI) background were not well-defined at the time the USGS proposal for the background Cr(VI) study was prepared but have since been refined as a result of data collected as part of this study. The proposed summative scale consists of multiple items, formulated as questions to be answered at each sampled well. Questions that compose the summative scale were developed to address geologic, hydrologic, and geochemical constraints on Cr(VI) within the study area. Each question requires a binary (yes or no) answer. A score of 1 will be assigned for an answer that represents data consistent with anthropogenic Cr(VI); a score of –1 will be assigned for an answer that represents data inconsistent with anthropogenic Cr(VI). The areal extent of anthropogenic Cr(VI) estimated from the summative-scale analyses will be compared with the areal extent of anthropogenic Cr(VI) estimated on the basis of numerical groundwater flow model results, along with particle-tracking analyses. On the basis of these combined results, background Cr(VI) values will be estimated for “Mojave-type” deposits, and other deposits, in different parts of the study area outside the summative-scale mapped extent of anthropogenic Cr(VI).
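The binary ±1 scoring described above can be sketched in a few lines; the function name and question labels are illustrative, not from the report:

```python
def summative_scale_score(answers):
    """Score one well on the summative scale described above.

    `answers` maps each geologic/hydrologic/geochemical question to
    True when the well's data are consistent with anthropogenic Cr(VI)
    and False when inconsistent; consistent answers score +1,
    inconsistent answers score -1, and the well's score is the sum.
    """
    return sum(1 if consistent else -1 for consistent in answers.values())

# a well with 4 of 5 indicators consistent with anthropogenic Cr(VI)
score = summative_scale_score({
    "q1": True, "q2": True, "q3": True, "q4": True, "q5": False,
})
```

A strongly positive total suggests anthropogenic Cr(VI) at that well, a strongly negative total suggests background; mapped over all sampled wells, the scores delineate the areal extent compared against the flow-model results.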
Hernick, Marcy
2015-09-25
Objective. To develop a series of active-learning modules that would improve pharmacy students' performance on summative assessments. Design. A series of optional online active-learning modules containing questions with multiple formats for topics in a first-year (P1) course was created using a test-enhanced learning approach. A subset of module questions was modified and included on summative assessments. Assessment. Student performance on module questions improved with repeated attempts and was predictive of student performance on summative assessments. Performance on examination questions was higher for students with access to modules than for those without access to modules. Module use appeared to have the most impact on low performing students. Conclusion. Test-enhanced learning modules with immediate feedback provide pharmacy students with a learning tool that improves student performance on summative assessments and also may improve metacognitive and test-taking skills.
Palmer, Edward J; Devitt, Peter G
2008-01-01
Background Teachers strive to motivate their students to be self-directed learners. One of the methods used is to provide online formative assessment material. The concept of formative assessment and the use of these processes are heavily promoted, despite limited evidence as to their efficacy. Methods Fourth year medical students, in their first year of clinical work, were divided into four groups. In addition to the usual clinical material, three of the groups were provided with some form of supplementary learning material. For two groups, this was provided as online formative assessment. The amount of time students spent on the supplementary material was measured, their opinion on learning methods was surveyed, and their performance in summative exams at the end of their surgical attachments was measured. Results The performance of students was independent of any educational intervention imposed by this study. Despite its ready availability and promotion, student use of the online formative tools was poor. Conclusion Formative learning is an ideal not necessarily embraced by students. If formative assessment is to work, students need to be encouraged to participate, probably by implementing some form of summative assessment. PMID:18471324
Structured assessment of microsurgery skills in the clinical setting.
Chan, WoanYi; Niranjan, Niri; Ramakrishnan, Venkat
2010-08-01
Microsurgery is an essential component in plastic surgery training. Competence has become an important issue in current surgical practice and training. The complexity of microsurgery requires detailed assessment of, and feedback on, skills components. This article proposes a method of Structured Assessment of Microsurgery Skills (SAMS) in a clinical setting. Three types of assessment (a modified Global Rating Score, an errors list, and a summative rating) were incorporated to develop the SAMS method. Clinical anastomoses were recorded on video using a digital microscope system and were rated by three consultants independently and in a blinded fashion. Fifteen clinical cases of microvascular anastomoses performed by trainees and a consultant microsurgeon were assessed using SAMS. The consultant consistently had the highest scores. Construct validity was also demonstrated by the improvement of SAMS scores of microsurgery trainees. The overall inter-rater reliability was strong (alpha=0.78). The SAMS method provides both formative and summative assessment of microsurgery skills. It is demonstrated to be a valid, reliable and feasible assessment tool of operating room performance to provide systematic and comprehensive feedback as part of the learning cycle. Copyright 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
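The inter-rater reliability quoted above (alpha=0.78) is a Cronbach's alpha; a minimal sketch of that statistic, treating each rater as an "item" scored across the same cases (the data here are made up for illustration, not the study's ratings):

```python
import statistics

def cronbach_alpha(ratings_by_rater):
    """Cronbach's alpha from per-rater score lists (one inner list per
    rater, aligned by case). Uses population variances throughout."""
    k = len(ratings_by_rater)
    item_var_sum = sum(statistics.pvariance(r) for r in ratings_by_rater)
    totals = [sum(case) for case in zip(*ratings_by_rater)]
    return k / (k - 1) * (1 - item_var_sum / statistics.pvariance(totals))

# Three perfectly agreeing raters give alpha = 1.0:
alpha = cronbach_alpha([[3, 4, 5], [3, 4, 5], [3, 4, 5]])
```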
Relationships between skin temperature and temporal summation of heat and cold pain.
Mauderli, Andre P; Vierck, Charles J; Cannon, Richard L; Rodrigues, Anthony; Shen, Chiayi
2003-07-01
Temporal summation of heat pain during repetitive stimulation is dependent on C nociceptor activation of central N-methyl-d-aspartate (NMDA) receptor mechanisms. Moderate temporal summation is produced by sequential triangular ramps of stimulation that control skin temperature between heat pulses but do not elicit distinct first and second pain sensations. Dramatic summation of second pain is produced by repeated contact of the skin with a preheated thermode, but skin temperature between taps is not controlled by this procedure. Therefore relationships between recordings of skin temperature and psychophysical ratings of heat pain were evaluated during series of repeated skin contacts. Surface and subcutaneous recordings of skin temperatures revealed efficient thermoregulatory compensation for heat stimulation at interstimulus intervals (ISIs) ranging from 2 to 8 s. Temporal summation of heat pain was strongly influenced by the ISIs and cannot be explained by small increases in skin temperature between taps or by heat storage throughout a stimulus series. Repetitive brief contact with a precooled thermode was utilized to evaluate whether temporal summation of cold pain occurs, and if so, whether it is influenced by skin temperature. Surface and subcutaneous recordings of skin temperature revealed a sluggish thermoregulatory compensation for repetitive cold stimulation. In contrast to heat stimulation, skin temperature did not recover between cold stimuli throughout ISIs of 3-8 s. Psychophysically, repetitive cold stimulation produced an aching pain sensation that progressed gradually and radiated beyond the site of stimulation. The magnitude of aching pain was well related to skin temperature and thus appeared to be established primarily by peripheral factors.
Examining the Use of Web-Based Reusable Learning Objects by Animal and Veterinary Nursing Students
ERIC Educational Resources Information Center
Chapman-Waterhouse, Emily; Silva-Fletcher, Ayona; Whittlestone, Kim David
2016-01-01
This intervention study examined the interaction of animal and veterinary nursing students with reusable learning objects (RLO) in the context of preparing for summative assessment. Data was collected from 199 undergraduates using quantitative and qualitative methods. Students accessed RLO via personal devices in order to reinforce taught…
Analysis and Validation of a Rubric to Assess Oral Presentation Skills in University Contexts
ERIC Educational Resources Information Center
Garcia-Ros, Rafael
2011-01-01
Introduction: The main objective of this study was to analyze users' perceptions and convergent validity of peer- and teacher summative assessment using a rubric for students' oral presentation skills in a university context. Method: Peer- and teacher-assessment convergence was analyzed from an analytical and holistic perspective. Students'…
ERIC Educational Resources Information Center
Deignan, Gerard M.; And Others
This report contains a comparative analysis of the differential effectiveness of computer-assisted instruction (CAI), programmed instructional text (PIT), and lecture methods of instruction in three medical courses--Medical Laboratory, Radiology, and Dental. The summative evaluation includes (1) multiple regression analyses conducted to predict…
Using Technology to Facilitate Effective Assessment for Learning and Feedback in Higher Education
ERIC Educational Resources Information Center
Deeley, Susan J.
2018-01-01
The aims of this paper are to examine and critically evaluate a selection of different technological methods that were specifically chosen for their alignment with, and potential to enhance, extant assessment for learning practice. The underpinning perspectives are that: (a) both formative and summative assessment are valuable opportunities for…
Training Programme for Secondary School Principals: Evaluating its Effectiveness and Impact
ERIC Educational Resources Information Center
Hutton, Disraeli M.
2013-01-01
The article presents the evaluation of the training programme for secondary school principals conducted in the period between 2006 and 2009. A mixed method approach was used to conduct the summative evaluation with 28 graduate participants. For the impact evaluation, 15 of the graduates were interviewed three years after the programme was…
Collateral Learning and Mathematical Education of Teachers
ERIC Educational Resources Information Center
Abramovich, Sergei
2012-01-01
This article explores the notion of collateral learning in the context of classic ideas about the summation of powers of the first "n" counting numbers. Proceeding from the well-known legend about young Gauss, this article demonstrates the value of reflection under the guidance of "the more knowledgeable other" as a pedagogical method of making…
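The legend has young Gauss pairing 1 with n, 2 with n-1, and so on, so that n/2 pairs each sum to n+1; a quick sketch of the resulting closed forms for the first two powers:

```python
def gauss_sum(n):
    """1 + 2 + ... + n = n(n+1)/2, from Gauss's pairing argument."""
    return n * (n + 1) // 2

def sum_of_squares(n):
    """1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6, the next power in the family."""
    return n * (n + 1) * (2 * n + 1) // 6

# The classroom anecdote: summing 1..100 without adding 100 terms.
print(gauss_sum(100))  # 5050
```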
Profiling Students for Remediation Using Latent Class Analysis
ERIC Educational Resources Information Center
Boscardin, Christy K.
2012-01-01
While clinical exams using SPs are used extensively across the medical schools for summative purposes and high-stakes decisions, the method of identifying students for remediation varies widely and there is a lack of consensus on the best methodological approach. The purpose of this study is to provide an alternative approach to identification of…
DNA Polymorphism: A Comparison of Force Fields for Nucleic Acids
Reddy, Swarnalatha Y.; Leclerc, Fabrice; Karplus, Martin
2003-01-01
The improvements of the force fields and the more accurate treatment of long-range interactions are providing more reliable molecular dynamics simulations of nucleic acids. The abilities of certain nucleic acid force fields to represent the structural and conformational properties of nucleic acids in solution are compared. The force fields are AMBER 4.1, BMS, CHARMM22, and CHARMM27; the comparison of the latter two is the primary focus of this paper. The performance of each force field is evaluated first on its ability to reproduce the B-DNA decamer d(CGATTAATCG)₂ in solution with simulations in which the long-range electrostatics were treated by the particle mesh Ewald method; the crystal structure determined by Quintana et al. (1992) is used as the starting point for all simulations. A detailed analysis of the structural and solvation properties shows how well the different force fields can reproduce sequence-specific features. The results are compared with data from experimental and previous theoretical studies. PMID:12609851
DOT National Transportation Integrated Search
1991-12-01
The objective of this summative evaluation of the Airway Science Curriculum Demonstration Project (ASCDP) was to compare the performance, job attitudes, retention rates, and perceived supervisory potential of graduates from recognized Airway Science ...
Two photon excitation of atomic oxygen
NASA Technical Reports Server (NTRS)
Pindzola, M. S.
1977-01-01
A standard perturbation expansion in the atom-radiation field interaction is used to calculate the two-photon excitation cross section for the 1s²2s²2p⁴ ³P to 1s²2s²2p³(⁴S)3p ³P transition in atomic oxygen. The summation over bound and continuum intermediate states is handled by solving the equivalent inhomogeneous differential equation. Exact summation results differ by a factor of 2 from a rough estimate obtained by limiting the intermediate-state summation to one bound state. Higher-order electron correlation effects are also examined.
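The implicit-summation step described above (commonly associated with the Dalgarno-Lewis method; both the identification and the notation below are ours, not the paper's) trades the infinite sum over intermediate states for the solution of a driven equation:

```latex
% Explicit sum over intermediate states |n> (bound and continuum):
S \;=\; \sum_{n} \frac{\langle f | d | n\rangle \langle n | d | i\rangle}{E_n - E_i - \hbar\omega}
% is traded for the inhomogeneous (driven) equation
(H - E_i - \hbar\omega)\,|\psi\rangle \;=\; d\,|i\rangle,
% whose solution encodes the entire sum at once, so that
S \;=\; \langle f | d | \psi\rangle .
```

Applying (H - E_i - ħω) to the spectral expansion of |ψ⟩ cancels each energy denominator, which is why solving the single differential equation reproduces the exact sum, continuum included.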
Multiresolution molecular mechanics: Surface effects in nanoscale materials
NASA Astrophysics Data System (ADS)
Yang, Qingcheng; To, Albert C.
2017-05-01
Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015), [57]) is applied to capture surface effects in nanosized structures by designing a surface summation rule SRS within the framework of MMM. Combined with the previously proposed bulk summation rule SRB, the MMM summation rule SRMMM is completed. SRS and SRB are consistently formed within SRMMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SRMMM is that the distribution of energy for the coarse-grained atomistic model is mathematically derived, so that the number, position and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface region is different from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonds. As such, SRS and SRB are employed for surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SRMMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SRMMM with high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced by SRMMM, analogous to the numerical integration error of quadrature rules in FEM, is very small.
Impact of teaching and assessment format on electrocardiogram interpretation skills.
Raupach, Tobias; Hanneforth, Nathalie; Anders, Sven; Pukrop, Tobias; ten Cate, Olle Th J; Harendza, Sigrid
2010-07-01
Interpretation of the electrocardiogram (ECG) is a core clinical skill that should be developed in undergraduate medical education. This study assessed whether small-group peer teaching is more effective than lectures in enhancing medical students' ECG interpretation skills. In addition, the impact of assessment format on study outcome was analysed. Two consecutive cohorts of Year 4 medical students (n=335) were randomised to receive either traditional ECG lectures or the same amount of small-group, near-peer teaching during a 6-week cardiorespiratory course. Before and after the course, written assessments of ECG interpretation skills were undertaken. Whereas this final assessment yielded a considerable amount of credit points for students in the first cohort, it was merely formative in nature for the second cohort. An unannounced retention test was applied 8 weeks after the end of the cardiovascular course. A significant advantage of near-peer teaching over lectures (effect size 0.33) was noted only in the second cohort, whereas, in the setting of a summative assessment, both teaching formats appeared to be equally effective. A summative instead of a formative assessment doubled the performance increase (Cohen's d 4.9 versus 2.4), mitigating any difference between teaching formats. Within the second cohort, the significant difference between the two teaching formats was maintained in the retention test (p=0.017). However, in both cohorts, a significant decrease in student performance was detected during the 8 weeks following the cardiovascular course. Assessment format appeared to be more powerful than choice of instructional method in enhancing student learning. The effect observed in the second cohort was masked by an overriding incentive generated by the summative assessment in the first cohort. This masking effect should be considered in studies assessing the effectiveness of different teaching methods.
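The effect sizes quoted above (Cohen's d) standardize a mean difference by the pooled standard deviation; a minimal sketch with made-up exam scores (not the study's data):

```python
import math
import statistics

def cohens_d(group1, group2):
    """Cohen's d: difference in means divided by the pooled sample SD."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

# Hypothetical post-test scores for two teaching formats:
d = cohens_d([72, 75, 78, 81], [70, 72, 74, 76])
```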
The assessment of a structured online formative assessment program: a randomised controlled trial
2014-01-01
Background Online formative assessment continues to be an important area of research and methods which actively engage the learner and provide useful learning outcomes are of particular interest. This study reports on the outcomes of a two year study of medical students using formative assessment tools. Method The study was conducted over two consecutive years using two different strategies for engaging students. The Year 1 strategy involved voluntary use of the formative assessment tool by 129 students. In Year 2, a second cohort of 130 students was encouraged to complete the formative assessment by incorporating summative assessment elements into it. Outcomes from pre and post testing students around the formative assessment intervention were used as measures of learning. To compare improvement scores between the two years a two-way Analysis of Variance (ANOVA) model was fitted to the data. Results The ANOVA model showed that there was a significant difference in improvement scores between students in the two years (mean improvement percentage 19% vs. 38.5%, p < 0.0001). Students were more likely to complete formative assessment items if they had a summative component. In Year 2, the time spent using the formative assessment tool had no impact on student improvement, nor did the number of assessment items completed. Conclusion The online medium is a valuable learning resource, capable of providing timely formative feedback and stimulating student-centered learning. However the production of quality content is a time-consuming task and careful consideration must be given to the strategies employed to ensure its efficacy. Course designers should consider the potential positive impact summative components to formative assessment may have on student engagement and outcomes. PMID:24400883
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prochnow, Bo; O'Reilly, Ossian; Dunham, Eric M.
In this paper, we develop a high-order finite difference scheme for axisymmetric wave propagation in a cylindrical conduit filled with a viscous fluid. The scheme is provably stable, and overcomes the difficulty of the polar coordinate singularity in the radial component of the diffusion operator. The finite difference approximation satisfies the principle of summation-by-parts (SBP), which is used to establish stability using the energy method. To treat the coordinate singularity without losing the SBP property of the scheme, a staggered grid is introduced and quadrature rules with weights set to zero at the endpoints are considered. Finally, the accuracy of the scheme is studied for a model problem with periodic boundary conditions at the ends of the conduit, and its practical utility is demonstrated by modeling acoustic-gravity waves in a magmatic conduit.
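The summation-by-parts property invoked above can be checked directly for the standard second-order SBP first-derivative pair (H, D): Q = HD satisfies Q + Qᵀ = diag(-1, 0, ..., 0, 1), the discrete analogue of integration by parts that drives the energy estimate. A sketch on a uniform collocated grid (not the authors' staggered, axisymmetric operator):

```python
import numpy as np

def sbp_operators(n, h):
    """Standard second-order SBP pair: norm matrix H and first derivative D."""
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
    D = np.zeros((n, n))
    D[0, :2] = [-1.0, 1.0]            # one-sided at the left boundary
    D[-1, -2:] = [-1.0, 1.0]          # one-sided at the right boundary
    for i in range(1, n - 1):         # central differences in the interior
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    return H, D / h

n, h = 8, 0.1
H, D = sbp_operators(n, h)
Q = H @ D
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)        # the SBP property used in energy estimates
```

The same identity is what makes the continuous energy argument carry over verbatim to the discretization.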
Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley
2013-01-01
Background Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. Methods In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. Results The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Conclusions Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. PMID:24006097
Enhanced Detectability of Community Structure in Multilayer Networks through Layer Aggregation.
Taylor, Dane; Shai, Saray; Stanley, Natalie; Mucha, Peter J
2016-06-03
Many systems are naturally represented by a multilayer network in which edges exist in multiple layers that encode different, but potentially related, types of interactions, and it is important to understand limitations on the detectability of community structure in these networks. Using random matrix theory, we analyze detectability limitations for multilayer (specifically, multiplex) stochastic block models (SBMs) in which L layers are derived from a common SBM. We study the effect of layer aggregation on detectability for several aggregation methods, including summation of the layers' adjacency matrices, for which we show the detectability limit vanishes as O(L^{-1/2}) with increasing number of layers, L. Importantly, we find a similar scaling behavior when the summation is thresholded at an optimal value, providing insight into the common, but not well understood, practice of thresholding pairwise-interaction data to obtain sparse network representations.
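The two aggregation schemes analyzed above (summing the layers' adjacency matrices, with or without thresholding) can be sketched as follows; the threshold value here is illustrative, not the paper's optimal choice:

```python
import numpy as np

def aggregate_layers(layers, threshold=None):
    """Sum multiplex layers' adjacency matrices; if a threshold is given,
    binarize by keeping only edges present in at least `threshold` layers."""
    summed = np.sum(layers, axis=0)
    if threshold is None:
        return summed
    return (summed >= threshold).astype(int)

# Two toy layers on three nodes, sharing one edge (0-1):
L1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
L2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A = aggregate_layers([L1, L2])                   # weighted: shared edge has weight 2
A_thr = aggregate_layers([L1, L2], threshold=2)  # sparse: only the shared edge survives
```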
Analysis of the Daya Bay Reactor Antineutrino Flux Changes with Fuel Burnup
NASA Astrophysics Data System (ADS)
Hayes, A. C.; Jungman, Gerard; McCutchan, E. A.; Sonzogni, A. A.; Garvey, G. T.; Wang, X. B.
2018-01-01
We investigate the recent Daya Bay results on the changes in the antineutrino flux and spectrum with the burnup of the reactor fuel. We find that the discrepancy between current model predictions and the Daya Bay results can be traced to the original measured
The Challenge of Evaluating Action Learning
ERIC Educational Resources Information Center
Edmonstone, John
2015-01-01
The paper examines the benefits claimed for action learning at individual, organisational and inter-organisational levels. It goes on to identify both generic difficulties in evaluating development programmes and action learning specifically. The distinction between formative and summative evaluation is considered and a summative evaluation…
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.
2013-01-01
Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.
Scanlon, Dennis P; Wolf, Laura J; Alexander, Jeffrey A; Christianson, Jon B; Greene, Jessica; Jean-Jacques, Muriel; McHugh, Megan; Shi, Yunfeng; Leitzell, Brigitt; Vanderbrink, Jocelyn M
2016-08-01
The Aligning Forces for Quality (AF4Q) initiative was the Robert Wood Johnson Foundation's (RWJF's) signature effort to increase the overall quality of healthcare in targeted communities throughout the country. In addition to sponsoring this 16-site complex program, RWJF funded an independent scientific evaluation to support objective research on the initiative's effectiveness and contributions to basic knowledge in 5 core programmatic areas. The research design, data, and challenges faced during the summative evaluation phase of this near decade-long program are discussed. A descriptive overview of the summative research design and its development for a multi-site, community-based, healthcare quality improvement initiative is provided. The summative research design employed by the evaluation team is discussed. The evaluation team's summative research design involved a data-driven assessment of the effectiveness of the AF4Q program at large, assessments of the impact of AF4Q in the specific programmatic areas, and an assessment of how the AF4Q alliances were positioned for the future at the end of the program. The AF4Q initiative was the largest privately funded community-based healthcare improvement initiative in the United States to date and was implemented at a time of rapid change in national healthcare policy. The implementation of large-scale, multi-site initiatives is becoming an increasingly common approach for addressing problems in healthcare. The summative evaluation research design for the AF4Q initiative, and the lessons learned from its approach, may be valuable to others tasked with evaluating similarly complex community-based initiatives.
NASA Astrophysics Data System (ADS)
Charbonneau, Jeremy
As the perceived quality of a product becomes more important in the manufacturing industry, more emphasis is being placed on accurately predicting the sound quality of everyday objects. This study was undertaken to improve upon current prediction techniques with regard to the psychoacoustic descriptor of loudness, through an improved binaural summation technique. The feasibility of this project was first investigated through a loudness matching experiment involving thirty-one subjects and pure tones of constant sound pressure level. A dependence of binaural summation on frequency was observed which had not previously been a subject of investigation in the reviewed literature. A follow-up investigation was carried out with forty-eight volunteers and pure tones of constant sensation level. Contrary to existing theories in the literature, the resulting loudness matches revealed an amplitude versus frequency relationship which confirmed the perceived increase in loudness when a signal was presented to both ears simultaneously as opposed to one ear alone. The resulting trend strongly indicated that the higher the frequency of the presented signal, the greater the increase in observed binaural summation. The results from each investigation were summarized into a single binaural summation algorithm and inserted into an improved time-varying loudness model. Using experimental techniques, it was demonstrated that the updated binaural summation algorithm was a considerable improvement over the state-of-the-art approach for predicting perceived binaural loudness. The improved function retained the ease of use of the original model while additionally providing accurate estimates of diotic listening conditions from monaural WAV files. A validation jury test clearly demonstrated that the revised time-varying loudness model was a significant improvement over the previously standardized approach.
Feizi, Sepehr; Delfazayebaher, Siamak; Ownagh, Vahid; Sadeghpour, Fatemeh
To evaluate the agreement between total corneal astigmatism calculated by vector summation of anterior and posterior corneal astigmatism (TCA_Vec) and total corneal astigmatism measured by ray tracing (TCA_Ray). This study enrolled a total of 204 right eyes of 204 normal subjects. The eyes were measured using a Galilei double Scheimpflug analyzer. The measured parameters included simulated keratometric astigmatism using the keratometric index, anterior corneal astigmatism using the corneal refractive index, posterior corneal astigmatism, and TCA_Ray. TCA_Vec was derived by vector summation of the astigmatism on the anterior and posterior corneal surfaces. The magnitudes and axes of TCA_Vec and TCA_Ray were compared. The Pearson correlation coefficient and Bland-Altman plots were used to assess the relationship and agreement between TCA_Vec and TCA_Ray, respectively. The mean TCA_Vec and TCA_Ray magnitudes were 0.76±0.57D and 1.00±0.78D, respectively (P<0.001). The mean axis orientations were 85.12±30.26° and 89.67±36.76°, respectively (P=0.02). Strong correlations were found between the TCA_Vec and TCA_Ray magnitudes (r=0.96, P<0.001). Moderate associations were observed between the TCA_Vec and TCA_Ray axes (r=0.75, P<0.001). Bland-Altman plots produced 95% limits of agreement for the TCA_Vec and TCA_Ray magnitudes from -0.33 to 0.82D. The 95% limits of agreement between the TCA_Vec and TCA_Ray axes were -43.0 to 52.1°. The magnitudes and axes of astigmatisms measured by the vector summation and ray tracing methods cannot be used interchangeably. There was a systematic error between the TCA_Vec and TCA_Ray magnitudes. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
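Vector summation of astigmatisms uses the double-angle representation (an axis of θ degrees repeats every 180°, so the angle is doubled before components are added); a sketch of the generic calculation, not the Galilei analyzer's internal algorithm:

```python
import math

def add_astigmatisms(components):
    """Vector-sum astigmatisms given as (magnitude_D, axis_deg) pairs.

    Each component maps to a double-angle vector (C*cos(2a), C*sin(2a));
    the vectors are summed, then mapped back to a magnitude and an axis
    in [0, 180).
    """
    x = sum(c * math.cos(math.radians(2 * a)) for c, a in components)
    y = sum(c * math.sin(math.radians(2 * a)) for c, a in components)
    magnitude = math.hypot(x, y)
    axis = (math.degrees(math.atan2(y, x)) / 2) % 180
    return magnitude, axis

# Equal-magnitude astigmatisms at perpendicular axes cancel exactly:
mag, _ = add_astigmatisms([(1.0, 0.0), (1.0, 90.0)])  # mag ~ 0
```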
High-energy laser-summator based on Raman scattering principle
NASA Astrophysics Data System (ADS)
Zemskov, Eugeniy Mikhalovich; Zarubin, Peter Vasilievich; Cook, Joung
2013-02-01
This paper is a summary of the history, theory, and development efforts, in the USSR from the early 1960s to the late 1970s, of the summator: an all-in-one device that coherently combines multiple high-power laser beams, lowers the beam divergence, and shifts the wavelength based on the stimulated Raman scattering principle. This was a part of the Terra-3 program, an umbrella program of highly classified high-energy laser weapons development efforts. Some parts of the Terra-3 program, specifically the terminal missile defense portion, were declassified recently, including the information on summator development efforts.
IRREVERSIBLE PROCESSES IN A PLASMA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.
1959-04-01
The characteristic divergences caused by long-range phenomena in gases can be eliminated in equilibrium situations by partial summations of terms individually divergent but whose sum converges. It is shown how the recently developed diagram technique enables treatment of non-equilibrium cases by a rigorous asymptotic method. The general ideas underlying the approach are briefly indicated. (T.R. H.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiNunzio, Camillo A.; Gupta, Abhinav; Golay, Michael
2002-11-30
This report presents a summation of the third and final year of a three-year investigation into methods and technologies for substantially reducing the capital costs and total schedule for future nuclear plants. In addition, this is the final technical report for the three-year period of studies.
ERIC Educational Resources Information Center
Mallavarapu, Aditi; Lyons, Leilah; Shelley, Tia; Minor, Emily; Slattery, Brian; Zellner, Moria
2015-01-01
Interactive learning environments can provide learners with opportunities to explore rich, real-world problem spaces, but the nature of these problem spaces can make assessing learner progress difficult. Such assessment can be useful for providing formative and summative feedback to the learners, to educators, and to the designers of the…
ERIC Educational Resources Information Center
Luce, Christine; Kirnan, Jean P.
2016-01-01
Contradictory results have been reported regarding the accuracy of various methods used to assess student learning in higher education. The current study examined student learning outcomes across a multi-section and multi-instructor psychology research course with both indirect and direct assessments in a sample of 67 undergraduate students. The…
Code of Federal Regulations, 2014 CFR
2014-07-01
... Study of Procedures Evaluated by the Federal Advisory Committee on Detection and Quantitation Approaches... placed in a heated ultrasonic bath for one hour to facilitate the extraction of Pb. Following... the summation of signal intensities for the isotopic masses 206, 207, and 208. In most cases, the...
Technology Experiences of Student Interns in a One to One Mobile Program
ERIC Educational Resources Information Center
Cullen, Theresa A.; Karademir, Tugra
2018-01-01
This article describes how a group of student intern teachers (n = 51) in a one to one teacher education iPad program were asked to reflect using Experience Sampling Method (ESM) on their use of technology in the classroom during internship. Interns also completed summative reflections and class discussions. Data collected both in online and…
Mathematics, Experiments, and Theoretical Physics: The Early Days of the Sommerfeld School
NASA Astrophysics Data System (ADS)
Eckert, Michael
1999-10-01
The names of his students read like a Who's Who of the pioneers in modern physics: Peter Debye, Peter Paul Ewald, Wolfgang Pauli, Werner Heisenberg, Hans A. Bethe - to name only the most prominent. In retrospect, the success of Sommerfeld's school of modern theoretical physics tends to overshadow its less glorious beginnings. A century ago, theoretical physics was not yet considered a distinct discipline. In this article I emphasize the haphazard beginnings rather than the later achievements of Sommerfeld's school, which mirrored the state of theoretical physics before it became an independent discipline.
ERIC Educational Resources Information Center
Schattschneider, Doris
1991-01-01
Provided are examples from many domains of mathematics that illustrate the Fubini Principle in its discrete version: the value of a summation over a rectangular array is independent of the order of summation. Included are: counting using partitions as in proof by pictures, combinatorial arguments, indirect counting as in the inclusion-exclusion…
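The discrete Fubini Principle the abstract describes - that a summation over a rectangular array is independent of the order of summation - can be checked directly in a few lines (the example array is, of course, an assumption for illustration):

```python
# A 2x3 rectangular array of values.
grid = [[1, 2, 3],
        [4, 5, 6]]

# Sum each row first, then add the row totals.
row_first = sum(sum(row) for row in grid)

# Sum each column first (zip(*grid) transposes), then add the column totals.
col_first = sum(sum(col) for col in zip(*grid))

# Both orders of summation give the same value.
assert row_first == col_first == 21
```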
Automated Formative Feedback and Summative Assessment Using Individualised Spreadsheet Assignments
ERIC Educational Resources Information Center
Blayney, Paul; Freeman, Mark
2004-01-01
This paper reports on the effects of automating formative feedback at the student's discretion and automating summative assessment with individualised spreadsheet assignments. Quality learning outcomes are achieved when students adopt deep approaches to learning (Ramsden, 2003). Learning environments designed to align assessment to learning…
Evaluation Techniques for Individualized Instruction: Revision and Summative Evaluation.
ERIC Educational Resources Information Center
Englert, DuWayne C.; And Others
1987-01-01
Describes the collection and analysis of summative data for a slide/tape self-instruction program on an introductory zoology course. Points of concern regarding curricular changes, teaching assistant training, student use, student performance evaluation, development, and revision are discussed from their unique perspectives within the summative…
New Proofs of Some q-Summation and q-Transformation Formulas
Liu, Xian-Fang; Bi, Ya-Qing; Luo, Qiu-Ming
2014-01-01
We obtain an expectation formula and give the probabilistic proofs of some summation and transformation formulas of q-series based on our expectation formula. Although these formulas in themselves are not the probability results, the proofs given are based on probabilistic concepts. PMID:24895675
Summation of visual motion across eye movements reflects a nonspatial decision mechanism.
Morris, Adam P; Liu, Charles C; Cropper, Simon J; Forte, Jason D; Krekelberg, Bart; Mattingley, Jason B
2010-07-21
Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., "spatiotopic" receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
Radjaeipour, G; Chambers, D W; Geissberger, M
2016-11-01
The study explored the effects of adding student-directed projects in a pre-clinical dental anatomy laboratory on improving the predictability of students' eventual performance on summative evaluation exercises, given the presence of intervening faculty-controlled, in-class practice. All students from four consecutive classes (n = 555) completed wax-added home projects (HP), spending as much or as little time as desired and receiving no faculty feedback; followed by similar laboratory projects (LP) with time limits and feedback; and then summative practical projects (PP) in a timed format but without faculty feedback. Path analysis was used to assess if the student-directed HP had any effect over and above the laboratory projects. Average scores were HP = 0.785 (SD = 0.089); LP = 0.736 (SD = 0.092); and PP = 0.743 (SD = 0.108). Path analysis was applied to show the effects of including a student-controlled home practice exercise on summative exercise performance. HP contributed a 57% direct effect and a 37% mediated effect through the LP condition. Student-directed home practice provided a measurable improvement in the ability to predict eventual performance on summative test cases over and above the predictive contribution of intervening faculty-controlled practice conditions. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Automating Formative and Summative Feedback for Individualised Assignments
ERIC Educational Resources Information Center
Hamilton, Ian Robert
2009-01-01
Purpose: The purpose of this paper is to report on the rationale behind the use of a unique paper-based individualised accounting assignment, which automated the provision to students of immediate formative and timely summative feedback. Design/methodology/approach: As students worked towards completing their assignment, the package provided…
42 CFR 422.1052 - Oral and written summation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 3 2010-10-01 2010-10-01 false Oral and written summation. 422.1052 Section 422.1052 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES.... Copies of any briefs or other written statements must be sent in accordance with 422.1016. ...
42 CFR 422.1052 - Oral and written summation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 3 2011-10-01 2011-10-01 false Oral and written summation. 422.1052 Section 422.1052 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES.... Copies of any briefs or other written statements must be sent in accordance with 422.1016. ...
"Formative Good, Summative Bad?"--A Review of the Dichotomy in Assessment Literature
ERIC Educational Resources Information Center
Lau, Alice Man Sze
2016-01-01
The debate between summative and formative assessment is creating a situation that increasingly calls to mind the famous slogan in George Orwell's (1945) "Animal Farm"--"Four legs good, two legs bad". Formative assessment is increasingly being portrayed in the literature as "good" assessment, which tutors should…
ERIC Educational Resources Information Center
Perera, Luckmika; Nguyen, Hoa; Watty, Kim
2014-01-01
This paper investigates the effectiveness (measured using assignment and examination performance) of an assessment design incorporating formative feedback through summative tutorial-based assessments to improve student performance, in a second-year Finance course at an Australian university. Data was collected for students who were enrolled in an…
Decisions and Tensions: Summative Assessments in PBL Advanced Placement Classes
ERIC Educational Resources Information Center
Cooper, Susan Elizabeth
2017-01-01
This study examines how teachers navigate tensions between communicating expectations for college level work and student motivation as they determined summative grades in an AP Environmental Science course. Semi-structured interviews and a think-aloud protocol were conducted while teachers in poverty-impacted urban high schools determined final…
Sea Ice Detection Based on Differential Delay-Doppler Maps from UK TechDemoSat-1
Zhu, Yongchao; Yu, Kegen; Zou, Jingui; Wickert, Jens
2017-01-01
Global Navigation Satellite System (GNSS) signals can be exploited to remotely sense the atmosphere and land and ocean surfaces to retrieve a range of geophysical parameters. This paper proposes two new methods, termed power-summation of differential Delay-Doppler Maps (PS-D) and pixel-number of differential Delay-Doppler Maps (PN-D), to distinguish between sea ice and sea water using differential Delay-Doppler Maps (dDDMs). PS-D and PN-D make use of the power-summation and pixel-number of dDDMs, respectively, to measure the degree of difference between two DDMs so as to determine the transition state (water-water, water-ice, ice-ice and ice-water), and hence ice and water are detected. Moreover, an adaptive incoherent averaging of DDMs is employed to improve the computational efficiency. A large number of DDMs recorded by UK TechDemoSat-1 (TDS-1) over the Arctic region are used to test the proposed sea ice detection methods. Evaluated against ground-truth measurements from the Ocean and Sea Ice SAF, the proposed PS-D and PN-D methods achieve a probability of detection of 99.72% and 99.69% respectively, while the probability of false detection is 0.28% and 0.31% respectively. PMID:28704948
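The abstract does not spell out the exact dDDM statistics, but the two quantities it names - the power-summation and the pixel-number of a differential Delay-Doppler Map - might be sketched along these lines (the function names, the absolute-difference form, and the threshold are assumptions, not the paper's definitions):

```python
import numpy as np

def differential_ddm(ddm1, ddm2):
    """Differential DDM: per-pixel absolute power difference of two
    consecutive Delay-Doppler Maps."""
    return np.abs(np.asarray(ddm1, dtype=float) - np.asarray(ddm2, dtype=float))

def power_summation(ddm1, ddm2):
    """PS-D-style statistic: total power of the differential DDM."""
    return float(differential_ddm(ddm1, ddm2).sum())

def pixel_number(ddm1, ddm2, threshold):
    """PN-D-style statistic: number of differential-DDM pixels whose power
    difference exceeds a threshold."""
    return int((differential_ddm(ddm1, ddm2) > threshold).sum())
```

A large statistic would then indicate a change of surface type (e.g. a water-ice transition) between consecutive DDMs, while a small one indicates no transition.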
Multi-input and binary reproducible, high bandwidth floating point adder in a collective network
Chen, Dong; Eisley, Noel A.; Heidelberger, Philip; Steinmacher-Burow, Burkhard
2016-11-15
To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers. The collective logic device adds the integer numbers and generates a summation of the integer numbers. The collective logic device converts the summation to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
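The convert-add-convert scheme described above is what makes the result binary reproducible: integer addition is exact, associative, and commutative, so the order in which node contributions arrive cannot change the answer. A minimal single-process sketch (the fixed-point scale factor is an assumption; the actual device aligns exponents in hardware):

```python
def reproducible_sum(values, scale=2**40):
    """Order-independent floating-point summation: map each float to a
    fixed-point integer, add exactly in integer arithmetic, then convert
    the integer total back to floating point."""
    total = sum(round(v * scale) for v in values)  # exact integer addition
    return total / scale
```

Unlike naive floating-point accumulation, reordering `values` cannot change the result, because no rounding occurs during the integer additions; rounding happens only once per input at the conversion step.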
Binocular summation and peripheral visual response time
NASA Technical Reports Server (NTRS)
Gilliland, K.; Haines, R. F.
1975-01-01
Six males were administered a peripheral visual response time test to the onset of brief small stimuli imaged in 10-deg arc separation intervals across the dark adapted horizontal retinal meridian under both binocular and monocular viewing conditions. This was done in an attempt to verify the existence of peripheral binocular summation using a response time measure. The results indicated that from 50-deg arc right to 50-deg arc left of the line of sight binocular summation is a reasonable explanation for the significantly faster binocular data. The stimulus position by viewing eye interaction was also significant. A discussion of these and other analyses is presented along with a review of related literature.
NASA Astrophysics Data System (ADS)
Saintillan, David; Darve, Eric; Shaqfeh, Eric S. G.
2005-03-01
Large-scale simulations of non-Brownian rigid fibers sedimenting under gravity at zero Reynolds number have been performed using a fast algorithm. The mathematical formulation follows the previous simulations by Butler and Shaqfeh ["Dynamic simulations of the inhomogeneous sedimentation of rigid fibres," J. Fluid Mech. 468, 205 (2002)]. The motion of the fibers is described using slender-body theory, and the line distribution of point forces along their lengths is approximated by a Legendre polynomial in which only the total force, torque, and particle stresslet are retained. Periodic boundary conditions are used to simulate an infinite suspension, and both far-field hydrodynamic interactions and short-range lubrication forces are considered in all simulations. The calculation of the hydrodynamic interactions, which is typically the bottleneck for large systems with periodic boundary conditions, is accelerated using a smooth particle-mesh Ewald (SPME) algorithm previously used in molecular dynamics simulations. In SPME the slowly decaying Green's function is split into two fast-converging sums: the first involves the distribution of point forces and accounts for the singular short-range part of the interactions, while the second is expressed in terms of the Fourier transform of the force distribution and accounts for the smooth and long-range part. Because of its smoothness, the second sum can be computed efficiently on an underlying grid using the fast Fourier transform algorithm, resulting in a significant speed-up of the calculations. Systems of up to 512 fibers were simulated on a single-processor workstation, providing a different insight into the formation, structure, and dynamics of the inhomogeneities that occur in sedimenting fiber suspensions.
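The core of the Ewald decomposition used in SPME is the splitting of the slowly decaying 1/r kernel into a rapidly decaying real-space part and a smooth part that is handled in Fourier space. The identity itself is easy to verify numerically (alpha is the usual splitting parameter; the function names are assumptions for illustration):

```python
import math

def ewald_split(r, alpha):
    """Split 1/r into a short-range part (erfc, summed directly over nearby
    images) and a smooth long-range part (erf, evaluated on a grid via FFT
    in SPME). The two parts add back to exactly 1/r."""
    short_range = math.erfc(alpha * r) / r  # decays rapidly with r
    long_range = math.erf(alpha * r) / r    # smooth everywhere, FFT-friendly
    return short_range, long_range
```

Increasing alpha pushes more of the interaction into the smooth Fourier-space part, which is the tuning knob behind the speed-up the abstract describes.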
The Effectiveness of Contextual Learning on Physics Achievement in Career Technical Education
NASA Astrophysics Data System (ADS)
Arcand, Scott Andrew
The purpose of this causal-comparative study was to determine if students being taught the Minnesota Science Physics Standards via contextual learning methods in Project Lead the Way (PLTW) Principles of Engineering or the PLTW Aerospace Engineering courses, taught by a Career Technical Education (CTE) teacher, achieve at the same rate as students in a physics course taught by a science teacher. The PLTW courses only cover the standards taught in the first trimester of physics. The PLTW courses are two periods long for one trimester. Students who successfully pass the PLTW Principles of Engineering course or the PLTW Aerospace Engineering course earn one-half credit in physics and one-half elective credit. The instrument used to measure student achievement was the district common summative assessment for physics. The Common Summative Assessment scores were pulled from the data warehouse from the first trimester of the 2013-2014 school year. Implications of the research address concepts of contextual learning, especially in the Career Technical Education space. The mean scores for Physics students (30.916) and PLTW Principles of Engineering students (32.333) were not statistically significantly different. Students in PLTW Principles of Engineering achieved at the same rate as students in physics. Due to the low rate of students participating in the Common Summative Assessment in PLTW Aerospace (four out of seven students), there is not enough data to determine if there is a significant difference in the Physics A scores and PLTW Aerospace Engineering scores.
ERIC Educational Resources Information Center
Edwards, Frances
2017-01-01
Teachers require specialised assessment knowledge and skills in order to effectively assess student learning. These knowledge and skills develop over time through ongoing teacher learning and experiences. The first part of this paper presents a Summative Assessment Literacy Rubric (SALRubric) constructed to track the development of secondary…
Summative Evaluation of the Manukau Family Literacy Project, 2004
ERIC Educational Resources Information Center
Benseman, John Robert; Sutton, Alison Joy
2005-01-01
This report covers a summative evaluation of a family literacy project in Auckland, New Zealand. The evaluation covered 70 adults and their children over a two year period. Outcomes for the program included literacy skill gains for both adults and children, increased levels of self-confidence and self-efficacy, greater parental involvement in…
The Mechanism of Impact of Summative Assessment on Medical Students' Learning
ERIC Educational Resources Information Center
Cilliers, Francois J.; Schuwirth, Lambert W.; Adendorff, Hanelie J.; Herman, Nicoline; van der Vleuten, Cees P.
2010-01-01
It has become axiomatic that assessment impacts powerfully on student learning, but there is a surprising dearth of research on how. This study explored the mechanism of impact of summative assessment on the process of learning of theory in higher education. Individual, in-depth interviews were conducted with medical students and analyzed…
A Model of the Pre-Assessment Learning Effects of Summative Assessment in Medical Education
ERIC Educational Resources Information Center
Cilliers, Francois J.; Schuwirth, Lambert W. T.; Herman, Nicoline; Adendorff, Hanelie J.; van der Vleuten, Cees P. M.
2012-01-01
It has become axiomatic that assessment impacts powerfully on student learning. However, surprisingly little research has been published emanating from authentic higher education settings about the nature and mechanism of the pre-assessment learning effects of summative assessment. Less still emanates from health sciences education settings. This…
Food...Your Choice. Levels 1, 2, and 3. Summative Evaluation. Technical Report No. 98.
ERIC Educational Resources Information Center
Illinois Univ., Chicago. Chicago Circle Campus.
A summative evaluation study of the National Dairy Council's learning system, "Food...Your Choice," was designed to address three concerns: (1) effectiveness of the learning system in enhancing elementary school students' understanding of nutrition; (2) effect of instruction in nutrition on student attitudes about nutrition and student behavior in…
Instructor Perspectives of Multiple-Choice Questions in Summative Assessment for Novice Programmers
ERIC Educational Resources Information Center
Shuhidan, Shuhaida; Hamilton, Margaret; D'Souza, Daryl
2010-01-01
Learning to program is known to be difficult for novices. High attrition and high failure rates in foundation-level programming courses undertaken at tertiary level in Computer Science programs, are commonly reported. A common approach to evaluating novice programming ability is through a combination of formative and summative assessments, with…
ERIC Educational Resources Information Center
Mottier Lopez, Lucie; Pasquini, Raphaël
2017-01-01
This article describes two collaborative research projects whose common goal was to explore the potential role of professional controversies in building teachers' summative assessment capacity. In the first project, upper primary teachers were encouraged to compare their practices through a form of social moderation, without prior instructor input…
ERIC Educational Resources Information Center
Bijsterbosch, Erik; van der Schee, Joop; Kuiper, Wilmad
2017-01-01
Enhancing meaningful learning is an important aim in geography education. Also, assessment should reflect this aim. Both formative and summative assessments contribute to meaningful learning when more complex knowledge and cognitive processes are assessed. The internal school-based geography examinations of the final exam in pre-vocational…
Technology-Supported Formative and Summative Assessment of Collaborative Scientific Inquiry.
ERIC Educational Resources Information Center
Hickey, Daniel T.; DeCuir, Jessica; Hand, Bryon; Kyser, Brandon; Laprocina, Simona; Mordica, Joy
This study defined and validated a new set of dimensions, new anchoring descriptions, and a new rubric format for assessing participation in collaboration. One strand of the research explored the use of analog video-technology to conduct summative assessment of collaborative inquiry. The second strand of the research explored the use of video…
42 CFR 423.1052 - Oral and written summation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 3 2010-10-01 2010-10-01 false Oral and written summation. 423.1052 Section 423.1052 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... conclusions of law. Copies of any briefs or other written statements must be sent in accordance with 423.1016. ...
42 CFR 423.1052 - Oral and written summation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 3 2011-10-01 2011-10-01 false Oral and written summation. 423.1052 Section 423.1052 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... conclusions of law. Copies of any briefs or other written statements must be sent in accordance with 423.1016. ...
A Formative and Summative Evaluation of Computer Integrated Instruction.
ERIC Educational Resources Information Center
Signer, Barbara
The purpose of this study was to conduct formative and summative evaluation for Computer Integrated Instruction (CII), an alternative use of computer-assisted instruction (CAI). The non-equivalent control group, pretest-posttest design was implemented with the class as the unit of analysis. Several of the instruments were adopted from existing CAI…
High School Students' Perceptions of Narrative Evaluations as Summative Assessment
ERIC Educational Resources Information Center
Bagley, Sylvia S.
2008-01-01
This study focuses on data collected at "Progressive Secondary School" in Southern California, a high school which uses narrative evaluations and other forms of alternative summative assessment on a school wide basis. Through a survey and personal interviews, students were asked to describe what they liked most and least about the use of…
Formative Assessment in a Test-Dominated Context: How Test Practice Can Become More Productive
ERIC Educational Resources Information Center
Xiao, Yangyu
2017-01-01
In recent years, increasing attention has been paid to the roles that assessment plays in promoting learning. Formative assessment is considered a powerful strategy for improving student learning; however, its learning potential has been less extensively explored in contexts where summative assessment dominates, because summative assessment is…
On Serving Two Masters: Formative and Summative Teacher Evaluation
ERIC Educational Resources Information Center
Popham, W. James
2013-01-01
This article begins by clarifying the distinction between formative and summative evaluation that was first drawn by Michael Scriven (1967) in an influential essay regarding education evaluation. Scriven supplied his analysis soon after the Elementary and Secondary Education Act (ESEA) was enacted in 1965--a time when almost no serious attention…
Summative Evaluation of the Foreign Credential Recognition Program. Final Report
ERIC Educational Resources Information Center
Human Resources and Skills Development Canada, 2010
2010-01-01
A summative evaluation of the Foreign Credential Recognition Program (FCRP) funded by Human Resources and Skills Development Canada (HRSDC) was conducted during the spring, summer and fall of 2008. The main objective of the evaluation was to measure the relevance, impacts, and cost-effectiveness of the program. Given the timing of the evaluation…
Using Summative and Formative Assessments to Evaluate EFL Teachers' Teaching Performance
ERIC Educational Resources Information Center
Wei, Wei
2015-01-01
Using classroom observations (formative) and student course experience survey results (summative) to evaluate English lecturers' teaching performances is not new in practice, but surprisingly only a few studies have investigated this issue in a higher education context. This study was conducted in an English department of a large university in…
Bento and Buffet: Two Approaches to Flexible Summative Assessment
ERIC Educational Resources Information Center
Didicher, Nicky
2016-01-01
This practice-sharing piece outlines two main approaches to flexible summative assessment schemes, including for each approach one example from my practice and another from a published study. The bento approach offers the same assessments to all students but a variety of grade weighting schemes, allowing students to change weighting during the…
ERIC Educational Resources Information Center
Orsini, Muhsin Michael; Wyrick, David L.; Milroy, Jeffrey J.
2012-01-01
Blending high-quality and rigorous research with pure evaluation practice can often be best accomplished through thoughtful collaboration. The evaluation of a high school drug prevention program (All Stars Senior) is an example of how perceived competing purposes and methodologies can coexist to investigate formative and summative outcome…
Developing and Modeling Fiber Amplifier Arrays
2006-09-01
High Energy Lasers (HEL) are required for… This report is published in the interest of scientific and technical information exchange. Approved for public release; distribution is unlimited. References cited include: A. E. Siegman, Lasers, University Science Books, ISBN 0-935702-11-5, p. 734; and Jianye Lu, et al., "A new method of coherent summation of laser …".
ERIC Educational Resources Information Center
DeKorver, Brittland K.; Choi, Mark; Towns, Marcy
2017-01-01
Chemical demonstration shows are a popular form of informal science education (ISE), employed by schools, museums, and other institutions in order to improve the public's understanding of science. Just as teachers employ formative and summative assessments in the science classroom to evaluate the impacts of their efforts, it is important to assess…
Rapid Methods for the Laboratory Identification of Pathogenic Microorganisms.
1981-09-01
Preliminary results provide strong evidence to show that the fungi Candida and Cryptococcus can be rapidly differentiated by a lectin test. SUMMATION: LECTIN-YEAST INTERACTIONS. Objective: To find a lectin that selectively agglutinates Cryptococcus neoformans (the etiologic agent of…peanut), Canavalia ensiformis (Con A) and mango extract may potentially be utilized to differentiate Cryptococcus from the other yeasts most commonly…
Reliability of temporal summation and diffuse noxious inhibitory control
Cathcart, Stuart; Winefield, Anthony H; Rolan, Paul; Lushington, Kurt
2009-01-01
BACKGROUND: The test-retest reliability of temporal summation (TS) and diffuse noxious inhibitory control (DNIC) has not been reported to date. Establishing such reliability would support the possibility of future experimental studies examining factors affecting TS and DNIC. Similarly, the use of manual algometry to induce TS, or an occlusion cuff to induce DNIC of TS to mechanical stimuli, has not been reported to date. Such devices may offer a simpler method than current techniques for inducing TS and DNIC, affording assessment at more anatomical locations and in more varied research settings. METHOD: The present study assessed the test-retest reliability of TS and DNIC using the above techniques. Sex differences on these measures were also investigated. RESULTS: Repeated measures ANOVA indicated successful induction of TS and DNIC, with no significant differences across test-retest occasions. Sex effects were not significant for any measure or interaction. Intraclass correlations indicated high test-retest reliability for all measures; however, there was large interindividual variation between test and retest measurements. CONCLUSION: The present results indicate acceptable within-session test-retest reliability of TS and DNIC. The results support the possibility of future experimental studies examining factors affecting TS and DNIC. PMID:20011713
Specific activity and isotope abundances of strontium in purified strontium-82
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzsimmons, J. M.; Medvedev, D. G.; Mausner, L. F.
2015-11-12
A linear accelerator was used to irradiate a rubidium chloride target with protons to produce strontium-82 (Sr-82), and the Sr-82 was purified by ion exchange chromatography. The amount of strontium associated with the purified Sr-82 was determined either by ICP-OES or by method B, which consisted of a summation of strontium quantified by gamma spectroscopy and ICP-MS. The summation method agreed within 10% with ICP-OES for the total mass of strontium, and the resulting specific activities were determined to be 0.25–0.52 TBq/mg. Method B was used to determine the isotope abundances by weight% of the purified Sr-82, and the abundances were: Sr-82 (10–20.7%), Sr-83 (0–0.05%), Sr-84 (35–48.5%), Sr-85 (16–25%), Sr-86 (12.5–23%), Sr-87 (0%), and Sr-88 (0–10%). The purified strontium contained Sr-82, Sr-84, Sr-85, Sr-86, and Sr-88 in abundances not associated with natural abundance, and 90% of the strontium was produced by the proton irradiation. A comparison of ICP-OES and method B for the analysis of Sr-82 indicated that analysis by ICP-OES would more easily determine the total mass of strontium and comply with regulatory requirements. An ICP-OES analytical method for Sr-82 analysis was established and validated according to regulatory guidelines.
Three-Dimensional Ankle Moments and Nonlinear Summation of Rat Triceps Surae Muscles
Tijs, Chris; van Dieën, Jaap H.; Baan, Guus C.; Maas, Huub
2014-01-01
The Achilles tendon and epimuscular connective tissues mechanically link the triceps surae muscles. These pathways may cause joint moments exerted by each muscle individually not to sum linearly, both in magnitude and direction. The aims were (i) to assess effects of sagittal plane ankle angle (varied between 150° and 70°) on isometric ankle moments, in both magnitude and direction, exerted by active rat triceps surae muscles, (ii) to assess ankle moment summation between those muscles for a range of ankle angles and (iii) to assess effects of sagittal plane ankle angle and muscle activation on Achilles tendon length. At each ankle angle, soleus (SO) and gastrocnemius (GA) muscles were first excited separately to assess ankle-angle moment characteristics and subsequently both muscles were excited simultaneously to investigate moment summation. The magnitude of ankle moment exerted by SO and GA, the SO direction in the transverse and sagittal planes, and the GA direction in the transverse plane were significantly affected by ankle angle. SO moment direction in the frontal and sagittal planes were significantly different from that of GA. Nonlinear magnitude summation varied between 0.6±2.9% and −3.6±2.9%, while the nonlinear direction summation varied between 0.3±0.4° and −0.4±0.7° in the transverse plane, between 0.5±0.4° and 0.1±0.4° in the frontal plane, and between 3.0±7.9° and 0.3±2.3° in the sagittal plane. Changes in tendon length caused by SO contraction were significantly lower than those during contraction of GA and GA+SO simultaneously. Thus, moments exerted by GA and SO sum nonlinearly both in the magnitude and direction. The limited degree of nonlinear summation may be explained by different mechanisms acting in opposite directions. PMID:25360524
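The nonlinear magnitude summation reported above can be expressed as the percentage deviation of the moment measured during simultaneous activation from the linear sum of the individually measured muscle moments. A sketch of that bookkeeping (the function name and sign convention are assumptions for illustration):

```python
def nonlinear_summation_pct(moment_combined, moment_so, moment_ga):
    """Percentage nonlinearity of joint-moment summation: deviation of the
    moment during simultaneous SO+GA activation from the linear sum of the
    moments measured during individual activation, relative to that sum."""
    linear_sum = moment_so + moment_ga
    return 100.0 * (moment_combined - linear_sum) / linear_sum
```

A value of 0 means perfectly linear summation; the study's range of roughly +0.6% to -3.6% thus indicates only a limited degree of nonlinearity.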
Hoekstra, P F; Braune, B M; Wong, C S; Williamson, M; Elkin, B; Muir, D C G
2003-11-01
Wolverines (Gulo gulo) are circumpolar omnivores that live throughout the alpine and arctic tundra ecosystem. Wolverine livers were collected at Kugluktuk (Coppermine), NU (n=12) in the western Canadian Arctic to report, for the first time, the residue patterns of persistent organochlorine contaminants (OCs) in this species. The enantiomer fractions (EFs) of several chiral OCs, including PCB atropisomers, in wolverines were also determined. Results were compared to OC concentrations and EFs of chiral contaminants in arctic fox (Alopex lagopus) from Ulukhaqtuuq (Holman), NT (n=20), a closely related species that scavenges the marine and terrestrial arctic environment. The rank order of hepatic concentrations for sum (Σ) OC groups in wolverines was polychlorinated biphenyls (ΣPCB) > chlordane-related components (ΣCHLOR) > DDT-related compounds (ΣDDT) > hexachlorocyclohexane isomers (ΣHCH). The most abundant OC analytes detected in wolverine liver were PCB-153, PCB-180, and oxychlordane (OXY). Wolverine age and gender did not influence OC concentrations, which were comparable to lipid-normalized values in arctic fox. The EFs of several chiral OCs (alpha-HCH, cis- and trans-chlordane, OXY, heptachlor exo-epoxide) and PCB atropisomers (PCB-136, 149) were nonracemic in arctic fox and wolverine liver and similar to those previously calculated in arctic fox and polar bears from Iceland and the Canadian Arctic. Results suggest that these species have a similar ability to biotransform OCs. As well, contaminant profiles suggest that terrestrial mammals do not represent the major source of OC exposure to wolverines and that wolverines are scavenging more contaminated prey items, such as marine mammals.
While ΣPCB did not exceed the concentrations associated with mammalian reproductive impairment, future research is required to properly evaluate the potential effect of other OCs on the overall health of wolverines.
George, Steven Z; Wittmer, Virgil T; Fillingim, Roger B; Robinson, Michael E
2006-03-01
Quantitative sensory testing has demonstrated a promising link between experimentally determined pain sensitivity and clinical pain. However, previous studies of quantitative sensory testing have not routinely considered the important influence of psychological factors on clinical pain. This study investigated whether measures of thermal pain sensitivity (temporal summation, first pulse response, and tolerance) contributed to clinical pain reports for patients with chronic low back pain, after controlling for depression or fear-avoidance beliefs about work. Consecutive patients (n=27) with chronic low back pain were recruited from an interdisciplinary pain rehabilitation program in Jacksonville, FL. Patients completed validated self-report questionnaires for depression, fear-avoidance beliefs, clinical pain intensity, and clinical pain related disability. Patients also underwent quantitative sensory testing from previously described protocols to determine thermal pain sensitivity (temporal summation, first pulse response, and tolerance). Hierarchical regression models investigated the contribution of depression and thermal pain sensitivity to clinical pain intensity, and fear-avoidance beliefs and thermal pain sensitivity to clinical pain related disability. None of the measures of thermal pain sensitivity contributed to clinical pain intensity after controlling for depression. Temporal summation of evoked thermal pain significantly contributed to clinical pain disability after controlling for fear-avoidance beliefs about work. Measures of thermal pain sensitivity did not contribute to pain intensity, after controlling for depression. Fear-avoidance beliefs about work and temporal summation of evoked thermal pain significantly influenced pain related disability. These factors should be considered as potential outcome predictors for patients with work-related low back pain. 
This study supported the neuromatrix theory of pain for patients with CLBP, as a cognitive-evaluative factor contributed to pain perception, and cognitive-evaluative and sensory-discriminative factors uniquely contributed to an action program in response to chronic pain. Future research will determine whether a predictive model consisting of fear-avoidance beliefs and temporal summation of evoked thermal pain has predictive validity for determining clinical outcome in rehabilitation or vocational settings.
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
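The abstract above describes pooling individual quantization errors both spatially and spectrally into a single perceptual distortion number. A generic sketch of such pooling is a Minkowski summation, first over pixels within each subband and then across subbands; this is an illustrative model, not the paper's exact metric, and the per-band thresholds `t` standing in for visual masking are an assumption.

```python
# Hedged sketch of two-stage (spatial then spectral) Minkowski error pooling.
# The function names, exponents, and thresholds are illustrative assumptions,
# not the metric defined in the JPEG2000 paper.
import numpy as np

def perceptual_distortion(errors, thresholds, beta_s=4.0, beta_f=2.0):
    """errors: list of per-subband quantization-error arrays;
    thresholds: per-subband masking thresholds (scalars)."""
    per_band = [
        (np.abs(e / t) ** beta_s).sum() ** (1.0 / beta_s)   # spatial summation
        for e, t in zip(errors, thresholds)
    ]
    # Spectral summation pools the per-subband distortions across bands.
    return (np.array(per_band) ** beta_f).sum() ** (1.0 / beta_f)
```

With `beta_s = beta_f = 2` this reduces to a threshold-weighted Euclidean norm of all errors; larger exponents let the most visible error dominate, which is the usual motivation for Minkowski pooling in vision models.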
Exploring the Use of Audience Response Systems in Secondary School Science Classrooms
NASA Astrophysics Data System (ADS)
Kay, Robin; Knaack, Liesel
2009-10-01
An audience response system (ARS) allows students to respond to multiple choice questions using remote control devices. Once the feedback is collected and displayed, the teacher and students discuss misconceptions and difficulties experienced. ARSs have been extremely popular and effective in higher education science classrooms, although almost no research has been done at the secondary school level. The purpose of this study was to conduct a detailed formative analysis of the benefits, challenges, and use of ARSs from the perspective of 213 secondary school science students. Perceived benefits were increased student involvement (engagement, participation, and attention) and effective formative assessment of student understanding. Perceived challenges included decreased student involvement and learning when ARSs were used for summative assessment, occasional technological malfunctions, resistance to using a new method of learning, and increased stress due to time constraints when responding to questions. Finally, students rated the use of ARSs significantly higher when it was used for formative as opposed to summative assessment.
Herbert, Matthew S.; Goodin, Burel R.; Pero, Samuel T.; Schmidt, Jessica K.; Sotolongo, Adriana; Bulls, Hailey W.; Glover, Toni L.; King, Christopher D.; Sibille, Kimberly T.; Cruz-Almeida, Yenisel; Staud, Roland; Fessler, Barri J.; Bradley, Laurence A.; Fillingim, Roger B.
2014-01-01
Background Pain hypervigilance is an important aspect of the fear-avoidance model of pain that may help explain individual differences in pain sensitivity among persons with knee osteoarthritis (OA). Purpose The purpose of this study was to examine the contribution of pain hypervigilance to clinical pain severity and experimental pain sensitivity in persons with symptomatic knee OA. Methods We analyzed cross-sectional data from 168 adults with symptomatic knee OA. Quantitative sensory testing was used to measure sensitivity to heat pain, pressure pain, and cold pain, as well as temporal summation of heat pain, a marker of central sensitization. Results Pain hypervigilance was associated with greater clinical pain severity, as well as greater pressure pain. Pain hypervigilance was also a significant predictor of temporal summation of heat pain. Conclusions Pain hypervigilance may be an important contributor to pain reports and experimental pain sensitivity among persons with knee OA. PMID:24352850
Is temporal summation of pain and spinal nociception altered during normal aging?
Marouf, Rafik; Piché, Mathieu; Rainville, Pierre
2015-01-01
Abstract This study examines the effect of normal aging on temporal summation (TS) of pain and the nociceptive flexion reflex (RIII). Two groups of healthy volunteers, young and elderly, received transcutaneous electrical stimulation applied to the right sural nerve to assess pain and the nociceptive flexion reflex (RIII-reflex). Stimulus intensity was adjusted individually to 120% of RIII-reflex threshold, and shocks were delivered as a single stimulus or as a series of 5 stimuli to assess TS at 5 different frequencies (0.17, 0.33, 0.66, 1, and 2 Hz). This study shows that robust TS of pain and RIII-reflex is observable in individuals aged between 18 and 75 years and indicates that these effects are comparable between young and older individuals. These results contrast with some previous findings and imply that at least some pain regulatory processes, including TS, may not be affected by normal aging, although this may vary depending on the method. PMID:26058038
Generalised summation-by-parts operators and variable coefficients
NASA Astrophysics Data System (ADS)
Ranocha, Hendrik
2018-06-01
High-order methods for conservation laws can be highly efficient if their stability is ensured. A suitable means mimicking estimates of the continuous level is provided by summation-by-parts (SBP) operators and the weak enforcement of boundary conditions. Recently, there has been an increasing interest in generalised SBP operators both in the finite difference and the discontinuous Galerkin spectral element framework. However, if generalised SBP operators are used, the treatment of the boundaries becomes more difficult since some properties of the continuous level are no longer mimicked discretely - interpolating the product of two functions will in general result in a value different from the product of the interpolations. Thus, desired properties such as conservation and stability are more difficult to obtain. Here, new formulations are proposed, allowing the creation of discretisations using general SBP operators that are both conservative and stable. Thus, several shortcomings that might be attributed to generalised SBP operators are overcome (cf. Nordström and Ruggiu (2017) [38] and Manzanero et al. (2017) [39]).
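The defining property the abstract refers to can be made concrete with the classical (non-generalised) first-derivative SBP operator: D = P⁻¹Q with a diagonal norm P and Q + Qᵀ = B = diag(−1, 0, …, 0, 1), which mimics integration by parts discretely. The grid size and spacing below are illustrative.

```python
# Minimal sketch of a classical second-order SBP first-derivative operator
# and a check of the discrete integration-by-parts identity
#   u^T P (D v) + (D u)^T P v = u^T B v.
import numpy as np

n, h = 11, 0.1                                      # illustrative grid
P = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])    # diagonal norm (quadrature)
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))        # interior central difference
Q[0, 0], Q[-1, -1] = -0.5, 0.5                      # boundary closures
D = np.linalg.inv(P) @ Q                            # SBP derivative operator

B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)                      # the SBP property

u, v = np.random.rand(n), np.random.rand(n)
lhs = u @ P @ (D @ v) + (D @ u) @ P @ v
assert np.isclose(lhs, u @ B @ v)                   # discrete integration by parts
```

Generalised SBP operators relax the structure of P and Q (e.g. dense norms, nodes excluding the boundaries), which is exactly why the product and boundary treatment discussed in the abstract becomes delicate.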
Literacy Block: Meeting the Needs of All Learners; A Summative Program Evaluation
ERIC Educational Resources Information Center
Pelletier, Nancy L.
2011-01-01
This summative program evaluation study investigated the Response to Intervention (RTI) pilot literacy block program that was implemented in first and second grade classrooms in a small southeastern suburban school. All 111 students in the first and second grade were involved in this RTI model during the 2010-2011 school year including special…
ERIC Educational Resources Information Center
Surgenor, P.W.G.
2013-01-01
Summative student evaluation of teaching (SET) is a contentious process, but given the increasing emphasis on quality and accountability, as well as national and international calls for centralised student feedback systems, is likely to become an inevitable aspect of teaching. This research aimed to clarify academics' attitudes to SET in a large…
ERIC Educational Resources Information Center
Agarwal, Pooja K.; McDaniel, Mark A.; Thomas, Ruthann C.; McDermott, Kathleen B.; Roediger, Henry L., III
2011-01-01
The use of summative testing to evaluate students' acquisition, retention, and transfer of instructed material is a fundamental aspect of educational practice and theory. However, a substantial basic literature has established that testing is not a neutral event--testing can also enhance and modify memory (Carpenter & DeLosh, 2006; Hogan &…
Gender Perspectives on Spatial Tasks in a National Assessment: A Secondary Data Analysis
ERIC Educational Resources Information Center
Logan, Tracy; Lowrie, Tom
2017-01-01
Most large-scale summative assessments present results in terms of cumulative scores. Although such descriptions can provide insights into general trends over time, they do not provide detail of how students solved the tasks. Less restrictive access to raw data from these summative assessments has occurred in recent years, resulting in…
ERIC Educational Resources Information Center
Human Resources and Skills Development Canada, 2004
2004-01-01
This report provides a summary of six summative evaluation studies that were implemented and completed between 1999 and 2002. The evaluations were conducted on three different streams of Canada's Youth Employment Strategy (YES). The Youth Employment Strategy was introduced by the federal government in 1997 to address employment related challenges…
ERIC Educational Resources Information Center
Zhao, Yue; Huen, Jenny M. Y.; Chan, Y. W.
2017-01-01
This study pioneers a Rasch scoring approach and compares it to a conventional summative approach for measuring longitudinal gains in student learning. In this methodological note, our proposed methodology is demonstrated using an example of rating scales in a student survey as part of a higher education outcome assessment. Such assessments have…
Affordances and Constraints of Using the Socio-Political Debate for Authentic Summative Assessment
ERIC Educational Resources Information Center
Anker-Hansen, Jens; Andrée, Maria
2015-01-01
This article reports from an empirical study on the affordances and constraints for using staged socio-political debates for authentic summative assessment of scientific literacy. The article focuses on conditions for student participation and what purposes emerge in student interaction in a socio-political debate. As part of the research project,…
A Collaborative Data Chat: Teaching Summative Assessment Data Use in Pre-Service Teacher Education
ERIC Educational Resources Information Center
Piro, Jody S.; Dunlap, Karen; Shutt, Tammy
2014-01-01
As the quality of educational outputs has been problematized, accountability systems have driven reform based upon summative assessment data. These policies impact the ways that educators use data within schools and subsequently, how teacher education programs may adjust their curricula to teach data-driven decision-making to inform instruction.…
Using IRT Trait Estimates versus Summated Scores in Predicting Outcomes
ERIC Educational Resources Information Center
Xu, Ting; Stone, Clement A.
2012-01-01
It has been argued that item response theory trait estimates should be used in analyses rather than number right (NR) or summated scale (SS) scores. Thissen and Orlando postulated that IRT scaling tends to produce trait estimates that are linearly related to the underlying trait being measured. Therefore, IRT trait estimates can be more useful…
Formative plus Summative Assessment in Large Undergraduate Courses: Why Both?
ERIC Educational Resources Information Center
Glazer, Nirit
2014-01-01
One of the main challenges in large undergraduate courses in higher education, especially those with multiple-sections, is to monitor what is going on at the section level and to track the consistency across sections in both instruction and grading. In this paper, it can be argued that a combination of both formative and summative assessment is…
ERIC Educational Resources Information Center
Castro-Peet, Alma Sandra
2017-01-01
Purpose: This study explored a technological contribution to education made by the Defense Language Institute Foreign Language Center (DLIFLC) in the formative assessment field. The purpose of this quantitative correlational study was to identify the relationship between online formative (Online Diagnostic Assessment; ODA) and summative (Defense…
ERIC Educational Resources Information Center
Marchand, Gwen C.; Furrer, Carrie J.
2014-01-01
This study explored the relationships among formative curriculum-based measures of reading (CBM-R), student engagement as an extra-academic indicator of student motivation, and summative performance on a high-stakes reading assessment. A diverse sample of third-, fourth-, and fifth-grade students and their teachers responded to questionnaires…
Toward a Summative System for the Assessment of Teaching Quality in Higher Education
ERIC Educational Resources Information Center
Murphy, Timothy; MacLaren, Iain; Flynn, Sharon
2009-01-01
This study examines various aspects of an effective teaching evaluation system. In particular, reference is made to the potential of Fink's (2008) four main dimensions of teaching as a summative evaluation model for effective teaching and learning. It is argued that these dimensions can be readily accommodated in a Teaching Portfolio process. The…
Including Students with Disabilities in Common Non-Summative Assessments. NCEO Brief. Number 6
ERIC Educational Resources Information Center
National Center on Educational Outcomes, 2012
2012-01-01
Inclusive large-scale assessments have become the norm in states across the U.S. Participation rates of students with disabilities in these assessments have increased dramatically since the mid-1990s. As consortia of states move toward the development and implementation of assessment systems that include both non-summative assessments and…
Disentangling Instructional Roles: The Case of Teaching and Summative Assessment
ERIC Educational Resources Information Center
Sasanguie, Delphine; Elen, Jan; Clarebout, Geraldine; Van den Noortgate, Wim; Vandenabeele, Joke; De Fraine, Bieke
2011-01-01
While in some higher education contexts a separation of teaching and summative assessment is assumed to be self-evident, in other contexts the opposite is regarded to be obvious. In this article the different arguments supporting either position are analyzed. Based on a systematic literature review, arguments for and against are classified at the…
Multi-input and binary reproducible, high bandwidth floating point adder in a collective network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Eisley, Noel A; Heidelberger, Philip
To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers, adds the integer numbers to generate a summation, and converts the summation back to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, and the converting of the summation in one pass; one pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from it.
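The reason the integer detour gives binary-reproducible results is that integer addition is exact and associative, so the summation order no longer matters. A simplified software model of that idea (fixed-point scaling is an assumption here; the hardware uses its own conversion) can be sketched as:

```python
# Hedged sketch: order-independent float summation via fixed-point integers.
# The scaling scheme is an illustrative stand-in for the hardware conversion.
import random

def reproducible_sum(values, frac_bits=40):
    """Sum floats in an order-independent way."""
    scale = 1 << frac_bits
    # Convert each float to an integer (Python ints are unbounded, so exact).
    ints = [round(v * scale) for v in values]
    # Integer addition is associative: any summation order gives one result.
    total = sum(ints)
    # Convert the integer summation back to a floating point number.
    return total / scale

a = [0.1, 0.2, 0.3, 1e-9]
b = a[:]
random.shuffle(b)
assert reproducible_sum(a) == reproducible_sum(b)   # bitwise identical
```

Plain float addition would generally fail this check, since `(0.1 + 0.2) + 0.3` and `0.1 + (0.2 + 0.3)` round differently.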
NASA Astrophysics Data System (ADS)
Levin, E.; Prygarin, A.
2008-02-01
In this paper we address two problems in pomeron calculus in zero transverse dimensions: the summation of the pomeron loops and the calculation of the processes of multiparticle generation. We introduce a new generating functional for these processes and obtain the evolution equation for it. We argue that in the kinematic range given by 1 ≪ ln(1/α_S)
Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies
NASA Astrophysics Data System (ADS)
Chen, Kewei; Reiman, E. M.; Lawson, M.; Yun, Lang-sheng; Bandy, D.; Palant, A.
1996-12-01
While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control (baseline) scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods to correct it, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted the authors to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.
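The early-frame correction described above can be sketched schematically: sum the early frames to get a vascular scan, threshold it to define the VROI, and overwrite the VROI with one common value in both scans being compared. The array shapes, frame timing, and percentile threshold below are illustrative assumptions, not the paper's parameters.

```python
# Schematic numpy sketch of the early-frame vascular artifact correction.
# Shapes, frame durations, and the 95th-percentile VROI threshold are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((16, 32, 32))        # dynamic PET frames (say, 5 s each)

vascular = frames[:12].sum(axis=0)       # ~0-60 s: dominated by arterial activity
control = frames[4:].sum(axis=0)         # ~20-80 s: minimal vascular activity
activation = control + rng.normal(0.0, 0.01, control.shape)  # stand-in scan

vroi = vascular > np.percentile(vascular, 95)   # threshold defines the VROI
common = control[vroi].mean()                   # one shared replacement value
control[vroi] = common
activation[vroi] = common                # VROI no longer drives scan differences

assert np.allclose(control[vroi], activation[vroi])
```

After the overwrite, any t-score map built from the control/activation pair is insensitive to activity inside the VROI, which is the intended effect of the correction.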
Turner, Joshua J.; Dakovski, Georgi L.; Hoffmann, Matthias C.; ...
2015-04-11
This paper describes the development of new instrumentation at the Linac Coherent Light Source for conducting THz excitation experiments in an ultra high vacuum environment probed by soft X-ray diffraction. This consists of a cantilevered, fully motorized mirror system which can provide 600 kV cm⁻¹ electric field strengths across the sample and an X-ray detector that can span the full Ewald sphere with in-vacuum motion. The scientific applications motivated by this development, the details of the instrument, and spectra demonstrating the field strengths achieved using this newly developed system are discussed.
Great Red Spot Rotation (animation)
2017-12-11
Winds around Jupiter's Great Red Spot are simulated in this JunoCam view, which has been animated using a model of the winds there. The wind model, called a velocity field, was derived from data collected by NASA's Voyager spacecraft and Earth-based telescopes. NASA's Juno spacecraft acquired the original, static view during passage over the spot on July 10, 2017. Citizen scientists Gerald Eichstädt and Justin Cowart turned the JunoCam data into a color image mosaic. Juno scientists Shawn Ewald and Andrew Ingersoll applied the velocity data to the image to produce a looping animation. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA22178
Laser scattering induced holograms in lithium niobate. [observation of diffraction cones
NASA Technical Reports Server (NTRS)
Magnusson, R.; Gaylord, T. K.
1974-01-01
A 3.0-mm thick poled single crystal of lithium niobate doped with 0.1 mole% iron was exposed to a single beam and then to two intersecting beams of an argon ion laser operating at 515-nm wavelength. Laser scattering induced holograms were thus written and analyzed. The presence of diffraction cones was observed and is shown to result from the internally recorded interference pattern resulting from the interference of the original incident laser beam with light scattered from material inhomogeneities. This phenomenon is analyzed using Ewald sphere construction techniques which reveal the geometrical relationships existing for the diffraction cones.
Insights into horizontal canal benign paroxysmal positional vertigo from a human case report.
Aron, Margaret; Bance, Manohar
2013-12-01
For horizontal canal benign paroxysmal positional vertigo, determination of the pathologic side is difficult and based on many physiological assumptions. This article reports findings on a patient who had one dysfunctional inner ear and who presented with horizontal canal benign paroxysmal positional vertigo, giving us a relatively pure model for observing nystagmus arising in a subject in whom the affected side is known a priori. It is an interesting human model corroborating theories of nystagmus generation in this pathology and also serves to validate Ewald's second law in a living human subject. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
Portfolio: a comprehensive method of assessment for postgraduates in oral and maxillofacial surgery.
Kadagad, Poornima; Kotrashetti, S M
2013-03-01
Postgraduate learning and assessment is an important responsibility of an academic oral and maxillofacial surgeon. The current methods of assessment for postgraduate training include formative evaluation in the form of seminars, case presentations, log books, and infrequently conducted end-of-year theory exams. The end-of-course theory and practical examination is a summative evaluation that awards the degree to the student based on the grades obtained. Oral and maxillofacial surgery is mainly a skill-based specialty, and deliberate practice enhances skill. But the traditional system of assessment of postgraduates emphasizes their performance on the summative exam, which fails to evaluate the integral picture of the student throughout the course. Emphasis on competency and the holistic growth of the postgraduate student during training in recent years has led to research on and evaluation of assessment methods to quantify students' progress during training. The portfolio method of assessment has been proposed as a potentially functional method for postgraduate evaluation. It is defined as a collection of papers and other forms of evidence that learning has taken place. It allows the collation and integration of evidence on competence and performance from different sources to gain a comprehensive picture of everyday practice. The benefits of portfolio assessment in health professions education are twofold: its potential to assess performance and its potential to assess outcomes, such as attitudes and professionalism, that are difficult to assess using traditional instruments. This paper is an endeavor toward the development of a portfolio method of assessment for postgraduate students in oral and maxillofacial surgery.
ERIC Educational Resources Information Center
Luckin, Rosemary; Clark, Wilma; Avramides, Katerina; Hunter, Jade; Oliver, Martin
2017-01-01
In this paper we review the literature on teacher inquiry (TI) to explore the possibility that this process can equip teachers to investigate students' learning as a step towards the process of formative assessment. We draw a distinction between formative assessment and summative forms of assessment [CRELL. (2009). The transition to computer-based…
ERIC Educational Resources Information Center
Hixson, Nate K.; Ravitz, Jason; Whisman, Andy
2012-01-01
From 2008 to 2010, project-based learning (PBL) was a major focus of the Teacher Leadership Institute (TLI), undertaken by the West Virginia Department of Education (WVDE), as a method for teaching 21st century skills. Beginning in January 2011, a summative evaluation was conducted to investigate the effect of PBL implementation on teachers'…
Certain topological properties and duals of the domain of a triangle matrix in a sequence space
NASA Astrophysics Data System (ADS)
Altay, Bilâl; Basar, Feyzi
2007-12-01
The matrix domains of the particular limitation methods of Cesàro, Riesz, difference, summation, and Euler have been studied by several authors. In the present paper, certain topological properties and the β- and γ-duals of the domain of a triangle matrix in a sequence space are examined as an application of the characterization of the related matrix classes.
Adding Resistances and Capacitances in Introductory Electricity
NASA Astrophysics Data System (ADS)
Efthimiou, C. J.; Llewellyn, R. A.
2005-09-01
All introductory physics textbooks, with or without calculus, cover the addition of both resistances and capacitances in series and in parallel as discrete summations. However, none includes problems that involve continuous versions of resistors in parallel or capacitors in series. This paper introduces a method for solving the continuous problems that is logical, straightforward, and within the mathematical preparation of students at the introductory level.
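One concrete instance of the continuous problem the paper targets: a parallel-plate capacitor whose permittivity varies across the gap is a continuum of thin capacitors in series, so the discrete summation 1/C = Σ 1/Cᵢ becomes 1/C = (1/A) ∫₀ᵈ dz/ε(z). A numeric sketch (geometry and function names are illustrative, not from the paper):

```python
# Hedged sketch: continuous "capacitors in series" as a midpoint-rule
# integral of 1/C = (1/area) * ∫_0^gap dz / eps(z).
def series_capacitance(eps, gap, area, n=100_000):
    """eps: permittivity as a function of position z across the gap."""
    dz = gap / n
    inv_C = sum(dz / eps((i + 0.5) * dz) for i in range(n)) / area
    return 1.0 / inv_C

eps0 = 8.854e-12
# Sanity check: a uniform permittivity must recover the textbook C = eps*A/d.
C_uniform = series_capacitance(lambda z: 2.0 * eps0, gap=1e-3, area=1e-2)
C_textbook = 2.0 * eps0 * 1e-2 / 1e-3
```

The continuous parallel-resistor case is the same construction with conductances: G = ∫ dG over the parallel slices, summed instead of reciprocally summed.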
Simulating chemical reactions in ionic liquids using QM/MM methodology.
Acevedo, Orlando
2014-12-18
The use of ionic liquids as a reaction medium for chemical reactions has dramatically increased in recent years due in large part to the numerous reported advances in catalysis and organic synthesis. In some extreme cases, ionic liquids have been shown to induce mechanistic changes relative to conventional solvents. Despite the large interest in these solvents, the molecular factors behind their chemical impact remain largely unknown. This feature article reviews our efforts developing and applying mixed quantum and molecular mechanical (QM/MM) methodology to elucidate the microscopic details of how these solvents operate to enhance rates and alter mechanisms for industrially and academically important reactions, e.g., Diels-Alder, Kemp eliminations, nucleophilic aromatic substitutions, and β-eliminations. Explicit solvent representation provided the medium dependence of the activation barriers and atomic-level characterization of the solute-solvent interactions responsible for the experimentally observed "ionic liquid effects". Technical advances are also discussed, including a linear-scaling pairwise electrostatic interaction alternative to Ewald sums, an efficient polynomial fitting method for modeling proton transfers, and the development of a custom ionic liquid OPLS-AA force field.
Visual motion modulates pattern sensitivity ahead, behind, and beside motion
Arnold, Derek H.; Marinovic, Welber; Whitney, David
2014-01-01
Retinal motion can modulate visual sensitivity. For instance, low contrast drifting waveforms (targets) can be easier to detect when abutting the leading edges of movement in adjacent high contrast waveforms (inducers), rather than the trailing edges. This target-inducer interaction is contingent on the adjacent waveforms being consistent with one another – in-phase as opposed to out-of-phase. It has been suggested that this happens because there is a perceptually explicit predictive signal at leading edges of motion that summates with low contrast physical input – a ‘predictive summation’. Another possible explanation is a phase sensitive ‘spatial summation’, a summation of physical inputs spread across the retina (not predictive signals). This should be non-selective in terms of position – it should be evident at leading, adjacent, and at trailing edges of motion. To tease these possibilities apart, we examined target sensitivity at leading, adjacent, and trailing edges of motion. We also examined target sensitivity adjacent to flicker, and for a stimulus that is less susceptible to spatial summation, as it sums to grey across a small retinal expanse. We found evidence for spatial summation in all but the last condition. Finally, we examined sensitivity to an absence of signal at leading and trailing edges of motion, finding greater sensitivity at leading edges. These results are inconsistent with the existence of a perceptually explicit predictive signal in advance of drifting waveforms. Instead, we suggest that phase-contingent target-inducer modulations of sensitivity are explicable in terms of a directionally modulated spatial summation. PMID:24699250
Baker, Christa A.
2014-01-01
A variety of synaptic mechanisms can contribute to single-neuron selectivity for temporal intervals in sensory stimuli. However, it remains unknown how these mechanisms interact to establish single-neuron sensitivity to temporal patterns of sensory stimulation in vivo. Here we address this question in a circuit that allows us to control the precise temporal patterns of synaptic input to interval-tuned neurons in behaviorally relevant ways. We obtained in vivo intracellular recordings under multiple levels of current clamp from midbrain neurons in the mormyrid weakly electric fish Brienomyrus brachyistius during stimulation with electrosensory pulse trains. To reveal the excitatory and inhibitory inputs onto interval-tuned neurons, we then estimated the synaptic conductances underlying responses. We found short-term depression in excitatory and inhibitory pathways onto all interval-tuned neurons. Short-interval selectivity was associated with excitation that depressed less than inhibition at short intervals, as well as temporally summating excitation. Long-interval selectivity was associated with long-lasting onset inhibition. We investigated tuning after separately nullifying the contributions of temporal summation and depression, and found the greatest diversity of interval selectivity among neurons when both mechanisms were at play. Furthermore, eliminating the effects of depression decreased sensitivity to directional changes in interval. These findings demonstrate that variation in depression and summation of excitation and inhibition helps to establish tuning to behaviorally relevant intervals in communication signals, and that depression contributes to neural coding of interval sequences. This work reveals for the first time how the interplay between short-term plasticity and temporal summation mediates the decoding of temporal sequences in awake, behaving animals. PMID:25339741
Weissman-Fogel, Irit; Granovsky, Yelena; Crispel, Yonathan; Ben-Nun, Alon; Best, Lael Anson; Yarnitsky, David; Granot, Michal
2009-06-01
Recent evidence points to an association between experimental pain measures obtained preoperatively and acute postoperative pain (POP). We hypothesized that pain temporal summation (TS) might be an additional predictor for POP insofar as it represents the neuroplastic changes that occur in the central nervous system following surgery. Therefore, a wide range of psychophysical tests (TS to heat and mechanical repetitive stimuli, pain threshold, and suprathreshold pain estimation) and personality tests (pain catastrophizing and anxiety levels) were administered prior to thoracotomy in 84 patients. POP ratings were evaluated on the 2nd and 5th days after surgery at rest (spontaneous pain) and in response to activity (provoked pain). Linear regression models revealed that among all assessed variables, enhanced TS and higher pain scores for mechanical stimulation were significantly associated with greater provoked POP intensity (overall r2 = 0.225, P = .008). Patients who did not demonstrate TS to either modality reported lower scores of provoked POP as compared with patients who demonstrated TS in response to at least 1 modality (F = 4.59, P = .013). Despite the moderate association between pain catastrophizing and rest POP, none of the variables predicted the spontaneous POP intensity. These findings suggest that individual susceptibility toward a greater summation response may characterize patients who are potentially vulnerable to augmented POP. This study proposed the role of pain temporal summation assessed preoperatively as a significant psychophysical predictor of acute postoperative pain intensity. The individual profile of enhanced pain summation is associated with a greater likelihood of higher postoperative pain scores.
Tiwari, Vikram; Kumar, Avinash B
2018-01-01
The current system of summative multi-rater evaluations and standardized tests to determine readiness to graduate from critical care fellowships has limitations. We sought to pilot the use of data envelopment analysis (DEA) to assess what aspects of the fellowship program contribute the most to an individual fellow's success. DEA is a nonparametric, operations research technique that uses linear programming to determine the technical efficiency of an entity based on its relative usage of resources in producing the outcome. Retrospective cohort study. Critical care fellows (n = 15) in an Accreditation Council for Graduate Medical Education (ACGME) accredited fellowship at a major academic medical center in the United States. After obtaining institutional review board approval for this retrospective study, we analyzed the data of 15 anesthesiology critical care fellows from academic years 2013-2015. The input-oriented DEA model develops a composite score for each fellow based on multiple inputs and outputs. The inputs included the didactic sessions attended and the ratio of clinical duty work hours to the procedures performed (work intensity index), and the outputs were the Multidisciplinary Critical Care Knowledge Assessment Program (MCCKAP) score and summative evaluations of fellows. A DEA efficiency score that ranged from 0 to 1 was generated for each of the fellows. Five fellows were rated as DEA efficient, and 10 fellows were characterized in the DEA inefficient group. The model was able to forecast the level of effort needed for each inefficient fellow to achieve similar outputs as their best performing peers. The model also identified the work intensity index as the key element that characterized the best performers in our fellowship. DEA is a feasible method of objectively evaluating peer performance in a critical care fellowship beyond summative evaluations alone and can potentially be a powerful tool to guide individual performance during the fellowship.
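As a rough sketch of the idea (not the authors' model), input-oriented DEA in the special case of a single input and a single output reduces to normalizing each unit's output/input ratio by the best peer's ratio; the general multi-input, multi-output case requires solving one linear program per unit. All names and numbers below are invented for illustration.

```python
# Hypothetical illustration of input-oriented DEA efficiency in the
# single-input, single-output special case, where the CCR linear program
# reduces to a simple output/input ratio normalized by the best peer.
# The data are invented, not taken from the study.

def dea_efficiency(inputs, outputs):
    """Return efficiency scores in (0, 1] for each decision-making unit."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Invented example: input = a work-intensity-style index, output = exam score.
work_index = [2.0, 4.0, 2.5]
exam_score = [70.0, 80.0, 100.0]
scores = dea_efficiency(work_index, exam_score)
print(scores)  # -> [0.875, 0.5, 1.0]
```

A unit with a score of 1.0 lies on the efficiency frontier; scores below 1.0 indicate the proportional input reduction needed to match the best-performing peer.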
NASA Astrophysics Data System (ADS)
Sokolov, A. K.
2017-09-01
This article presents the technique of assessing the maximum allowable (standard) discharge of waste waters with several harmful substances into a water reservoir. The technique makes it possible to take into account the summation of their effect provided that the limiting harmful indices are the same. The expressions for the determination of the discharge limit of waste waters have been derived from the conditions of admissibility of the effect of several harmful substances on the waters of a reservoir. Mathematical conditions of admissibility of the effect of wastewaters on a reservoir are given for the characteristic combinations of limiting harmful indices and hazard classes of several substances. The conditions of admissibility of effects are presented in the form of logical products of the sums of relative concentrations that should not exceed the value of 1. It is shown that the calculation of the process of wastewater dilution in a flowing water reservoir is possible only on the basis of a numerical method to assess the wastewater discharge limit. An example of the numerical calculation of the standard limit of industrial enterprise wastewater discharges that contain polysulfide oil, flocculant VPK-101, and fungicide captan is given to test this method. In addition to these three harmful substances, the water reservoir also contained a fourth substance, namely, Zellek-Super herbicide, above the waste discharge point. The summation of the harmful effect was taken into account for VPK-101, captan, and Zellek-Super. The reliability of the technique was tested by the calculation of concentrations of the four substances in the control point of the flowing reservoir during the estimated maximum allowable wastewater discharge. 
It is shown that, for the example under consideration, the maximum allowable discharge limit would be almost two times higher if the unidirectional effect of the harmful substances were not taken into account; accounting for their summation therefore provides a higher level of environmental safety.
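The admissibility condition described above, that within each group of substances sharing the same limiting harmful index the sum of relative concentrations must not exceed 1, can be sketched as follows. The substances and numbers are invented, not the study's data.

```python
# Sketch of the admissibility condition: for a group of substances with the
# same limiting harmful index, the sum of relative concentrations
# (concentration / maximum permissible concentration) at the control point
# must not exceed 1. All values below are invented for illustration.

def group_admissible(concentrations, mpcs):
    """True if the sum of relative concentrations C_i / MPC_i is <= 1."""
    return sum(c / m for c, m in zip(concentrations, mpcs)) <= 1.0

# Invented example: three substances with a common limiting harmful index.
c = [0.02, 0.01, 0.005]   # concentrations at the control point, mg/L
mpc = [0.1, 0.05, 0.02]   # maximum permissible concentrations, mg/L
print(group_admissible(c, mpc))  # 0.2 + 0.2 + 0.25 = 0.65 <= 1, so True
```

In the full calculation, one such condition is evaluated for every group of substances with a common limiting harmful index, and the discharge limit is the largest discharge for which all groups remain admissible.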
ERIC Educational Resources Information Center
Walker, Marquita
2013-01-01
This summative evaluation is the result of two years of data reflecting the impact of an ethics class in terms of students' ethical decision-making. The research compares aggregate responses from scenario-based, open-ended pre- and post-survey questions designed to measure changes in ethical decision-making by comparing students' cognitive…
Summative Evaluation of Reading for a Reason: A Reading Series for Grades 7 and 8.
ERIC Educational Resources Information Center
Webb, Norman L.
A summative evaluation of the instructional television series "Reading for a Reason" was conducted during the spring of 1982 as part of the premier showing of the series over the Wisconsin Educational Television Network. The series consisted of eight programs designed to teach skills for content area reading to seventh and eighth grade…
ERIC Educational Resources Information Center
Jones, Joanna; Gaffney-Rhys, Ruth; Jones, Edward
2014-01-01
This article presents a synthesis of previous ideas relating to student evaluation of teaching (SET) results in higher education institutions (HEIs), with particular focus upon possible validity issues and matters that HEI decision-makers should consider prior to interpreting survey results and using them summatively. Furthermore, the research…
Should Global Items on Student Rating Scales Be Used for Summative Decisions?
ERIC Educational Resources Information Center
Berk, Ronald A.
2013-01-01
One of the simplest indicators of teaching or course effectiveness is student ratings on one or more global items from the entire rating scale. That approach seems intuitively sound and easy to use. Global items have even been recommended by a few researchers to get a quick-read, at-a-glance summary for summative decisions about faculty. The…
ERIC Educational Resources Information Center
Carruthers, Clare; McCarron, Brenda; Bolan, Peter; Devine, Adrian; McMahon-Beattie, Una; Burns, Amy
2015-01-01
This study aims to ascertain student and staff attitudes to and perceptions of audio feedback made available via the virtual learning environment (VLE) for summative assessment. Consistent with action research and reflective practice, this study identifies best practice, highlighting issues in relation to implementation with the intention of…
Reduction in Cheating Following a Forensic Investigation on a Statewide Summative Assessment
ERIC Educational Resources Information Center
McClintock, Joseph C.
2016-01-01
This study examined indicators of cheating on a statewide summative assessment for grades 3-8 over a 4-year period. Between year 2 and year 3 of this study, the state launched an aggressive, highly publicized investigation into cheating by educators. The basis of the investigation was an erasure analysis. The current study found that the number…
ERIC Educational Resources Information Center
Brink, Carole Sanger
2011-01-01
In 2007, Georgia developed a comprehensive framework to define what students need to know. One component of this framework emphasizes the use of both formative and summative assessments as an integral and specific component of teachers' performance evaluation. Georgia administers the Criterion-Referenced Competency Test (CRCT) to every…
ERIC Educational Resources Information Center
Looney, Janet W.
2011-01-01
A long-held ambition for many educators and assessment experts has been to integrate summative and formative assessments so that data from external assessments used for system monitoring may also be used to shape teaching and learning in classrooms. In turn, classroom-based assessments may provide valuable data for decision makers at school and…
ERIC Educational Resources Information Center
Klapp, Alli
2018-01-01
The purpose of the study was to investigate whether academic and social self-concept and motivation to improve in academic school subjects mediated the negative effect of summative assessment (grades) on low-ability students' achievement in compulsory school. In two previous studies, summative assessment (grading) was found to have a differentiating…
The Use of Formative Online Quizzes to Enhance Class Preparation and Scores on Summative Exams
ERIC Educational Resources Information Center
Dobson, John L.
2008-01-01
Online quizzes were introduced into an undergraduate Exercise Physiology course to encourage students to read ahead and think critically about the course material before coming to class. The purpose of the study was to determine if the use of the online quizzes was associated with improvements in summative exam scores and if the online quizzes…
Subiaul, Francys; Krajkowski, Edward; Price, Elizabeth E; Etz, Alexander
2015-01-01
Children are exceptional, even 'super,' imitators but comparatively poor independent problem-solvers or innovators. Yet, imitation and innovation are both necessary components of cumulative cultural evolution. Here, we explored the relationship between imitation and innovation by assessing children's ability to generate a solution to a novel problem by imitating two different action sequences demonstrated by two different models, an example of imitation by combination, which we refer to as "summative imitation." Children (N = 181) from 3 to 5 years of age and across three experiments were tested in a baseline condition or in one of six demonstration conditions, varying in the number of models and opening techniques demonstrated. Across experiments, more than 75% of children evidenced summative imitation, opening both compartments of the problem box and retrieving the reward hidden in each. Generally, learning different actions from two different models was as good as (and in some cases better than) learning from one model, but the underlying representations appear to be the same in both demonstration conditions. These results show that summative imitation not only facilitates imitation learning but can also result in new solutions to problems, an essential feature of innovation and cumulative culture.
Input integration around the dendritic branches in hippocampal dentate granule cells.
Kamijo, Tadanobu Chuyo; Hayakawa, Hirofumi; Fukushima, Yasuhiro; Kubota, Yoshiyuki; Isomura, Yoshikazu; Tsukada, Minoru; Aihara, Takeshi
2014-08-01
Recent studies have shown that the dendrites of several neurons are not simple translators but are crucial facilitators of excitatory postsynaptic potential (EPSP) propagation and summation of synaptic inputs to compensate for inherent voltage attenuation. Granule cells (GCs) are located at the gateway for valuable information arriving at the hippocampus from the entorhinal cortex. However, the underlying mechanisms of information integration along the dendrites of GCs in the hippocampus are still unclear. In this study, we investigated the input integration around dendritic branches of GCs in the rat hippocampus. We applied differential spatiotemporal stimulations to the dendrites using a high-speed glutamate-uncaging laser. Our results showed that when two sites close to and equidistant from a branching point were simultaneously stimulated, a nonlinear summation of EPSPs was observed at the soma. In addition, nonlinear summation (facilitation) depended on the stimulus location and was significantly blocked by the application of a voltage-dependent Ca(2+) channel antagonist. These findings suggest that the nonlinear summation of EPSPs around the dendritic branches of hippocampal GCs is a result of voltage-dependent Ca(2+) channel activation and may play a crucial role in the integration of input information.
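A minimal sketch of how supralinear summation of this kind is commonly quantified: the EPSP measured when both sites are stimulated together is compared with the arithmetic sum of the EPSPs evoked at each site alone. The values below are invented, not measurements from the study.

```python
# Sketch of a summation (facilitation) index for two-site stimulation.
# A ratio above 1 indicates supralinear (facilitated) summation; a ratio
# of 1 indicates purely linear summation. Amplitudes are invented.

def summation_ratio(epsp_both_mv, epsp_a_mv, epsp_b_mv):
    """Measured combined EPSP divided by the expected linear sum."""
    return epsp_both_mv / (epsp_a_mv + epsp_b_mv)

print(summation_ratio(3.0, 1.0, 1.0))  # -> 1.5, i.e. supralinear summation
```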
Objective evaluation of binaural summation through acoustic reflex measures.
Rawool, Vishakha W; Parrill, Madaline
2018-02-12
A previous study [Rawool, V. W. (2016). Auditory processing deficits: Assessment and intervention. New York, NY: Thieme Medical Publishers, Inc., pp. 186-187] demonstrated objective assessment of binaural summation through right contralateral acoustic reflex thresholds (ARTs) in women. The current project examined if previous findings could be generalised to men and to the left ear. Cross-sectional. Sixty individuals participated in the study. Left and right contralateral ARTs were obtained in two conditions. In the alternated condition, the probe tone presentation was alternated with the presentation of the reflex activating clicks. In the simultaneous condition, the probe tone and the clicks were presented simultaneously. Binaural summation was calculated by subtracting the ARTs obtained in the simultaneous condition from the ARTs obtained in the alternated condition. MANOVA on ARTs revealed no significant gender or ear effects. The ARTs were significantly lower/better in the simultaneous condition compared to the alternated condition. Binaural summation was 4 dB or higher in 88% of the ears and 6 dB or higher in 76% of ears. Stimulation of six out of the total 120 (0.5%) ears resulted in worse thresholds in the simultaneous condition compared with the alternating condition, suggesting binaural interference.
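The binaural summation measure described above is simple arithmetic: the ART obtained in the simultaneous condition is subtracted from the ART obtained in the alternated condition. A sketch with invented threshold values:

```python
# Sketch of the binaural summation computation described above.
# Threshold values are invented for illustration.

def binaural_summation(art_alternated_db, art_simultaneous_db):
    """Summation in dB; larger values mean a bigger simultaneous advantage."""
    return art_alternated_db - art_simultaneous_db

print(binaural_summation(85, 79))  # -> 6, i.e. 6 dB of binaural summation
```

A negative value, as in the six ears noted above, would indicate a worse threshold in the simultaneous condition, suggesting binaural interference.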
Guillot, Martin; Taylor, Polly M.; Rialland, Pascale; Klinck, Mary P.; Martel-Pelletier, Johanne; Pelletier, Jean-Pierre; Troncy, Eric
2014-01-01
In cats, osteoarthritis causes significant chronic pain. Chronicity of pain is associated with changes in the central nervous system related to central sensitization, which have to be quantified. Our objectives were 1) to develop a quantitative sensory testing device in cats for applying repetitive mechanical stimuli that would evoke temporal summation; 2) to determine the sensitivity of this test to osteoarthritis-associated pain, and 3) to examine the possible correlation between the quantitative sensory testing and assessment using other pain evaluation methods. We hypothesized that mechanical sub-threshold repetitive stimuli would evoke temporal summation, and that cats with osteoarthritis would show a faster response. A blinded longitudinal study was performed in 4 non-osteoarthritis cats and 10 cats with naturally occurring osteoarthritis. Quantification of chronic osteoarthritis pain-related disability was performed over a two-week period using peak vertical force kinetic measurement, motor activity intensity assessment and von Frey anesthesiometer-induced paw withdrawal threshold testing. The cats afflicted with osteoarthritis demonstrated characteristic findings consistent with osteoarthritis-associated chronic pain. After a 14-day acclimation period, repetitive mechanical sub-threshold stimuli were applied using a purpose-developed device. Four stimulation profiles of predetermined intensity, duration and time interval were applied randomly four times during a four-day period. The stimulation profiles were different (P<0.001): the higher the intensity of the stimulus, the sooner it produced a consistent painful response. The cats afflicted with osteoarthritis responded more rapidly than osteoarthritis-free cats (P = 0.019). There was a positive correlation between the von Frey anesthesiometer-induced paw withdrawal threshold and the response to stimulation profiles #2 (2N/0.4 Hz) and #4 (2N/0.4 Hz): rho = 0.64 (P = 0.01) and 0.63 (P = 0.02) respectively. 
This study is the first report of mechanical temporal summation in awake cats. Our results suggest that central sensitization develops in cats with naturally occurring osteoarthritis, providing an opportunity to improve translational research in osteoarthritis-associated chronic pain. PMID:24859251
Tackling student neurophobia in neurosciences block with team-based learning.
Anwar, Khurshid; Shaikh, Abdul A; Sajid, Muhammad R; Cahusac, Peter; Alarifi, Norah A; Al Shedoukhy, Ahlam
2015-01-01
Traditionally, neurosciences is perceived as a difficult course in undergraduate medical education, with literature suggesting use of the term "Neurophobia" (fear of neurology among medical students). Instructional strategies employed for the teaching of neurosciences in undergraduate curricula traditionally include a combination of lectures, demonstrations, practical classes, problem-based learning and clinico-pathological conferences. Recently, team-based learning (TBL), a student-centered instructional strategy, has increasingly been regarded by many undergraduate medical courses as an effective method to assist student learning. In this study, 156 students in the year-three neurosciences block were divided into seven male and seven female groups, comprising 11-12 students in each group. TBL was introduced during the six weeks of this block, with a total of eight TBL sessions conducted over this period. We evaluated the effect of TBL on student learning and correlated it with the students' performance in summative assessment. Moreover, the students' perceptions regarding the process of TBL were assessed by an online survey. We found that students who attended TBL sessions performed better in the summative examinations than those who did not. Furthermore, students performed better in team activities compared to individual testing, with male students performing better and showing a more favorable impact on their grades in the summative examination. There was an increase in the number of students achieving higher grades (grade B and above) in this block when compared to the previous block (51.7% vs. 25%). Moreover, the number of students at risk for lower grades (Grade B- and below) decreased in this block when compared to the previous block (30.6% vs. 55%). 
Students generally responded favorably regarding the TBL process, expressed satisfaction with the content covered, and felt that such activities led to improvement in communication and interpersonal skills. We conclude that implementing the TBL strategy increased students' responsibility for their own learning and helped the students bridge the gap in their cognitive knowledge to tackle 'neurophobia' in a difficult neurosciences block, as evidenced by their improved performance in the summative assessment.
Vertical integration of basic science in final year of medical education
Rajan, Sudha Jasmine; Jacob, Tripti Meriel; Sathyendra, Sowmya
2016-01-01
Background: Development of health professionals with the ability to integrate, synthesize, and apply knowledge gained through medical college is greatly hampered by a system of delivery that is compartmentalized and piecemeal. There is a need to integrate basic sciences with clinical teaching to enable application in clinical care. Aim: To study the benefit and acceptance of vertical integration of basic science in the final year MBBS undergraduate curriculum. Materials and Methods: After Institutional Ethics Clearance, neuroanatomy refresher classes with clinical application to neurological diseases were held as part of the final year posting in two medical units. Feedback was collected. Pre- and post-tests that tested application and synthesis were conducted. Summative assessment was compared with that of a control group of students who had standard teaching in the other two medical units. In-depth interviews were conducted with 2 willing participants and 2 teachers who did neurology bedside teaching. Results: The majority (>80%) found the classes useful and interesting. There was statistically significant improvement in the post-test scores. There was a statistically significant difference between the intervention and control groups' scores during summative assessment (76.2 vs. 61.8, P < 0.01). Students felt that it reinforced and motivated self-directed learning, enabled correlations, improved understanding, put things in perspective, gave confidence, aided application, and enabled them to follow discussions during clinical teaching. Conclusion: Vertical integration of basic science in the final year was beneficial and resulted in knowledge gain and improved summative scores. The classes were found to be useful and interesting and were thought to help in clinical care and application by the majority of students. PMID:27563584
Time to CUSUM: simplified reporting of outcomes in colorectal surgery.
Bowles, Thomas A; Watters, David A
2007-07-01
Surgical audit has added value when outcomes can be compared and individual surgeons receive feedback. It is expected that surgeons compare their results with others in similar local practice, the published work, or peers from a craft group audit. Although feedback and comparison are worthy aims, for many surgeons the standards have not been agreed, nor is there a craft group audit. The aim of this paper was to develop a reporting format for surgeons carrying out colorectal surgery in a regional hospital. The performance of 13 individual surgeons was analysed using a comprehensive colorectal audit with more than 600 cases. Feedback included caseload and type. Risk stratification of outcomes included operation urgency, age and the Physiological and Operative Severity Score for the enUmeration of Mortality and Morbidity. Outcome measures were anastomotic leaks, end stoma rates, unplanned reoperations and mortality. Visual feedback included cumulative summation graphs for elective leaks, mortality and unplanned reoperations. A single A4 page of an individual's performance could be prepared that allowed comparison with the group's data overall. Alerts were set at 2-5% elective leaks, 4-7.5% mortality and 4-11% unplanned return to theatre. Cumulative summation graphs added to this allowed a visual guide to the key performance indicators. Surgeons need to determine how they will review their individual and collective results; these are as important as the published work. Detailed analysis of risk-stratified data should occur. Binary outcomes such as leak, mortality and unplanned reoperations may be followed by cumulative summation graphs. This provides a continually updated method of feedback, enabling an immediate visual assessment of a surgeon's performance.
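One common construction behind such cumulative summation graphs is the observed-minus-expected CUSUM for a binary outcome; a minimal sketch follows, with an invented case series and acceptable rate (the paper's own alert thresholds vary by outcome).

```python
# Minimal observed-minus-expected CUSUM for a binary outcome such as
# anastomotic leak. Each failure adds (1 - p0) and each success subtracts
# p0, so an upward-drifting curve signals performance worse than the
# acceptable rate p0. The case series and rate are invented.

def cusum_observed_minus_expected(outcomes, acceptable_rate):
    """Return the CUSUM curve for a sequence of binary outcomes."""
    s, curve = 0.0, []
    for failed in outcomes:
        s += (1 if failed else 0) - acceptable_rate
        curve.append(s)
    return curve

# Invented series: 1 = leak, 0 = no leak, against a 5% acceptable leak rate.
print(cusum_observed_minus_expected([0, 0, 1, 0, 0], 0.05))
```

In practice, an alert line is drawn at a chosen height; the curve crossing it prompts a review, which is how the elective-leak and mortality alert bands above would be monitored continuously.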
Mini-Sosie high-resolution seismic method aids hazards studies
Stephenson, W.J.; Odum, J.; Shedlock, K.M.; Pratt, T.L.; Williams, R.A.
1992-01-01
The Mini-Sosie high-resolution seismic method has been effective in imaging shallow-structure and stratigraphic features that aid in seismic-hazard and neotectonic studies. The method is not an alternative to Vibroseis acquisition for large-scale studies. However, it has two major advantages over Vibroseis as it is being used by the USGS in its seismic-hazards program. First, the sources are extremely portable and can be used in both rural and urban environments. Second, the shifting-and-summation process during acquisition improves the signal-to-noise ratio and cancels out seismic noise sources such as cars and pedestrians.
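The shifting-and-summation idea can be sketched as follows: records from successive source impacts are time-shifted so the signal aligns and then stacked, so coherent signal adds while uncorrelated noise tends to cancel. The traces, shifts, and noise below are invented for illustration.

```python
# Sketch of shift-and-sum stacking: each record is aligned by its known
# source-time shift and summed. Coherent signal grows with the number of
# records, while uncorrelated noise tends to cancel. The two records below
# are invented, with opposite-sign noise so the cancellation is exact.

def shift_and_sum(records, shifts):
    """Align each record by its known source-time shift and stack."""
    length = len(records[0]) - max(shifts)
    stack = [0.0] * length
    for rec, sh in zip(records, shifts):
        for t in range(length):
            stack[t] += rec[t + sh]
    return stack

signal = [1.0, 2.0, 1.0]
rec1 = [0.0] + [s + 0.25 for s in signal] + [0.0]  # starts at t = 1, +noise
rec2 = [0.0, 0.0] + [s - 0.25 for s in signal]     # starts at t = 2, -noise
print(shift_and_sum([rec1, rec2], [1, 2]))  # -> [2.0, 4.0, 2.0]
```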
NASA Astrophysics Data System (ADS)
Sagui, Celeste
2006-03-01
An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate the artifacts associated with the point charges (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules) used in the force fields in a physically meaningful way? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way up to hexadecapoles without prohibitive extra cost. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
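Of the three methods mentioned, the linear weighted summation method is the simplest to illustrate: multiple objectives are scalarized into a single weighted objective, after which a standard single-objective solver can be applied. The objectives, weights and candidate plans below are invented.

```python
# Sketch of linear weighted summation: several objective values are
# combined into one scalar with nonnegative weights, turning a
# multi-objective choice into a single-objective one. The candidate
# transport plans, their (cost, time) values and the weights are invented.

def weighted_sum(values, weights):
    """Scalarize a vector of objective values with nonnegative weights."""
    return sum(w * v for w, v in zip(weights, values))

plans = {"rail": (100.0, 48.0), "road": (120.0, 30.0)}
w = (0.5, 0.5)  # equal weight on (already comparably scaled) cost and time
best = min(plans, key=lambda p: weighted_sum(plans[p], w))
print(best)  # "rail": 0.5*100 + 0.5*48 = 74 beats 0.5*120 + 0.5*30 = 75
```

In practice the objectives should be normalized to comparable scales before weighting, and varying the weights traces out different trade-off solutions.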
ERIC Educational Resources Information Center
Huang, Vicki
2017-01-01
To the author's knowledge, this is the first Australian study to empirically compare the use of a multiple-choice questionnaire (MCQ) with the use of a written assignment for interim, summative law school assessment. This study also surveyed the same student sample as to what types of assessments are preferred and why. In total, 182 undergraduate…
ERIC Educational Resources Information Center
Siweya, Hlengani J.; Letsoalo, Peter
2014-01-01
This study investigated whether formative assessment is a predictor of summative assessment in a university first-year chemistry class. The sample comprised a total of 1687 first-year chemistry students chosen from the 2011 and 2012 cohorts. Both simple and multiple linear regression (SLR and MLR) techniques were applied to perform the primary aim…
A Semi-Vectorization Algorithm to Synthesis of Gravitational Anomaly Quantities on the Earth
NASA Astrophysics Data System (ADS)
Abdollahzadeh, M.; Eshagh, M.; Najafi Alamdari, M.
2009-04-01
The Earth's gravitational potential can be expressed by the well-known spherical harmonic expansion. The computational time of summing up this expansion is an important practical issue, which can be reduced by an efficient numerical algorithm. This paper proposes such a method for block-wise synthesis of the anomaly quantities on the Earth's surface using vectorization. Full vectorization means transformation of the summations into simple matrix and vector products, which is not practical for matrices with large dimensions. Here a semi-vectorization algorithm is proposed to avoid working with large vectors and matrices. It speeds up the computations by using one loop for the summation either on degrees or on orders. The former is a good option for synthesizing the anomaly quantities on the Earth's surface considering a digital elevation model (DEM). This approach is more efficient than the two-step method, which computes the quantities on the reference ellipsoid and continues them upward to the Earth's surface. The algorithm has been coded in MATLAB, which synthesizes a global grid of 5′ × 5′ (corresponding to about 9 million points) of gravity anomaly or geoid height using a geopotential model to degree 360 in 10000 seconds on an ordinary computer with 2 GB RAM.
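A toy illustration of the semi-vectorization idea (in Python rather than the authors' MATLAB, with invented coefficients and a made-up degree-dependent factor standing in for the Legendre/radial terms): the double summation keeps one explicit loop over degree n, while the inner summation over order m is collapsed into a single inner-product-style expression.

```python
# Toy semi-vectorization of a harmonic-style double sum. The coefficient
# triangle and the (n + 1) degree factor are invented stand-ins for real
# geopotential coefficients and Legendre/radial terms.
import math

def synth_double_loop(coeff, theta):
    """Naive double loop over degree n and order m."""
    total = 0.0
    for n, row in enumerate(coeff):
        for m, c in enumerate(row):
            total += c * math.cos(m * theta) * (n + 1)
    return total

def synth_semi_vectorized(coeff, theta):
    """One explicit loop over degree; the order sum is one inner product."""
    total = 0.0
    for n, row in enumerate(coeff):
        inner = sum(c * math.cos(m * theta) for m, c in enumerate(row))
        total += (n + 1) * inner
    return total

coeff = [[1.0], [0.5, 0.25], [0.2, 0.1, 0.05]]  # triangular, like C_nm
theta = 0.7
print(abs(synth_double_loop(coeff, theta) - synth_semi_vectorized(coeff, theta)) < 1e-9)  # True
```

In the real algorithm the inner sum becomes a matrix-vector product over all orders for a whole block of grid points, which is where the speed-up comes from.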
Biau, D J; Meziane, M; Bhumbra, R S; Dumaine, V; Babinet, A; Anract, P
2011-09-01
The purpose of this study was to define immediate post-operative 'quality' in total hip replacements and to study prospectively the occurrence of failure based on these definitions of quality. The evaluation and assessment of failure were based on ten radiological and clinical criteria. The cumulative summation (CUSUM) test was used to study 200 procedures over a one-year period. Technical criteria defined failure in 17 cases (8.5%), those related to the femoral component in nine (4.5%), the acetabular component in 32 (16%) and those relating to discharge from hospital in five (2.5%). Overall, the procedure was considered to have failed in 57 of the 200 total hip replacements (28.5%). The use of a new design of acetabular component was associated with more failures. For the CUSUM test, the level of adequate performance was set at a rate of failure of 20% and the level of inadequate performance set at a failure rate of 40%; no alarm was raised by the test, indicating that there was no evidence of inadequate performance. The use of a continuous monitoring statistical method is useful to ensure that the quality of total hip replacement is maintained, especially as newer implants are introduced.
Changing the culture of assessment: the dominance of the summative assessment paradigm.
Harrison, Christopher J; Könings, Karen D; Schuwirth, Lambert W T; Wass, Valerie; van der Vleuten, Cees P M
2017-04-28
Despite growing evidence of the benefits of including assessment for learning strategies within programmes of assessment, practical implementation of these approaches is often problematical. Organisational culture change is often hindered by personal and collective beliefs which encourage adherence to the existing organisational paradigm. We aimed to explore how these beliefs influenced proposals to redesign a summative assessment culture in order to improve students' use of assessment-related feedback. Using the principles of participatory design, a mixed group comprising medical students, clinical teachers and senior faculty members was challenged to develop radical solutions to improve the use of post-assessment feedback. Follow-up interviews were conducted with individual members of the group to explore their personal beliefs about the proposed redesign. Data were analysed using a socio-cultural lens. Proposed changes were dominated by a shared belief in the primacy of the summative assessment paradigm, which prevented radical redesign solutions from being accepted by group members. Participants' prior assessment experiences strongly influenced proposals for change. As participants had largely only experienced a summative assessment culture, they found it difficult to conceptualise radical change in the assessment culture. Although all group members participated, students were less successful at persuading the group to adopt their ideas. Faculty members and clinical teachers often used indirect techniques to close down discussions. The strength of individual beliefs became more apparent in the follow-up interviews. Naïve epistemologies and prior personal experiences were influential in the assessment redesign but were usually not expressed explicitly in a group setting, perhaps because of cultural conventions of politeness. 
In order to successfully implement a change in assessment culture, firmly-held intuitive beliefs about summative assessment will need to be clearly understood as a first step.
Student-written single-best answer questions predict performance in finals.
Walsh, Jason; Harris, Benjamin; Tayyaba, Saadia; Harris, David; Smith, Phil
2016-10-01
Single-best answer (SBA) questions are widely used for assessment in medical schools; however, often clinical staff have neither the time nor the incentive to develop high-quality material for revision purposes. A student-led approach to producing formative SBA questions offers a potential solution. Cardiff University School of Medicine students created a bank of SBA questions through a previously described staged approach, involving student question-writing, peer-review and targeted senior clinician input. We arranged questions into discrete tests and posted these online. Student volunteer performance on these tests from the 2012/13 cohort of final-year medical students was recorded and compared with the performance of these students in medical school finals (knowledge and objective structured clinical examinations, OSCEs). In addition, we compared the performance of students that participated in question-writing groups with the performance of the rest of the cohort on the summative SBA assessment. Performance in the end-of-year summative clinical knowledge SBA paper correlated strongly with performance in the formative student-written SBA test (r ≈ 0.60, p < 0.01). There was no significant correlation between summative OSCE scores and formative student-written SBA test scores. Students who wrote and reviewed questions scored higher than average in the end-of-year summative clinical knowledge SBA paper. Student-written SBAs predict performance in end-of-year SBA examinations, and therefore can provide a potentially valuable revision resource. There is potential for student-written questions to be incorporated into summative examinations. © 2015 John Wiley & Sons Ltd.
The effect of Bangerter filters on binocular function in observers with amblyopia.
Chen, Zidong; Li, Jinrong; Thompson, Benjamin; Deng, Daming; Yuan, Junpeng; Chan, Lily; Hess, Robert F; Yu, Minbin
2014-10-28
We assessed whether partial occlusion of the nonamblyopic eye with Bangerter filters can immediately reduce suppression and promote binocular summation of contrast in observers with amblyopia. In Experiment 1, suppression was measured for 22 observers (mean age, 20 years; range, 14-32 years; 10 females) with strabismic or anisometropic amblyopia and 10 controls using our previously established "balance point" protocol. Measurements were made at baseline and with 0.6-, 0.4-, and 0.2-strength Bangerter filters placed over the nonamblyopic/dominant eye. In Experiment 2, psychophysical measurements of contrast sensitivity were made under binocular and monocular viewing conditions for 25 observers with anisometropic amblyopia (mean age, 17 years; range, 11-28 years; 14 females) and 22 controls (mean age, 24 years; range, 22-27; 12 female). Measurements were made at baseline, and with 0.4- and 0.2-strength Bangerter filters placed over the nonamblyopic/dominant eye. Binocular summation ratios (BSRs) were calculated at baseline and with Bangerter filters in place. Experiment 1: Bangerter filters reduced suppression in observers with amblyopia and induced suppression in controls (P = 0.025). The 0.2-strength filter eliminated suppression in observers with amblyopia and this was not a visual acuity effect. Experiment 2: Bangerter filters were able to induce normal levels of binocular contrast summation in the group of observers with anisometropic amblyopia for a stimulus with a spatial frequency of 3 cycles per degree (cpd, P = 0.006). The filters reduced binocular summation in controls. Bangerter filters can immediately reduce suppression and promote binocular summation for mid/low spatial frequencies in observers with amblyopia. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
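The binocular summation ratio (BSR) used in Experiment 2 is conventionally computed as binocular contrast sensitivity divided by the better of the two monocular sensitivities, with values above 1 indicating summation. A minimal sketch (the numeric values below are hypothetical, not the study's data):

```python
def binocular_summation_ratio(binocular_cs, left_cs, right_cs):
    """Binocular summation ratio: binocular contrast sensitivity
    divided by the better monocular sensitivity.  BSR > 1 indicates
    binocular summation; BSR ~= 1 indicates its absence."""
    return binocular_cs / max(left_cs, right_cs)

# Hypothetical contrast sensitivities for illustration only
print(binocular_summation_ratio(70.0, 50.0, 40.0))  # 1.4
```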
Theta frequency background tunes transmission but not summation of spiking responses.
Parameshwaran, Dhanya; Bhalla, Upinder S
2013-01-01
Hippocampal neurons are known to fire as a function of frequency and phase of spontaneous network rhythms, associated with the animal's behaviour. This dependence is believed to give rise to precise rate and temporal codes. However, it is not well understood how these periodic membrane potential fluctuations affect the integration of synaptic inputs. Here we used sinusoidal current injection to the soma of CA1 pyramidal neurons in the rat brain slice to simulate background oscillations in the physiologically relevant theta and gamma frequency range. We used a detailed compartmental model to show that somatic current injection gave comparable results to more physiological synaptically driven theta rhythms incorporating excitatory input in the dendrites, and inhibitory input near the soma. We systematically varied the phase of synaptic inputs with respect to this background, and recorded changes in response and summation properties of CA1 neurons using whole-cell patch recordings. The response of the cell was dependent on both the phase of synaptic inputs and frequency of the background input. The probability of the cell spiking for a given synaptic input was up to 40% greater during the depolarized phases between 30-135 degrees of theta frequency current injection. Summation gain on the other hand, was not affected either by the background frequency or the phasic afferent inputs. This flat summation gain, coupled with the enhanced spiking probability during depolarized phases of the theta cycle, resulted in enhanced transmission of summed inputs during the same phase window of 30-135 degrees. Overall, our study suggests that although oscillations provide windows of opportunity to selectively boost transmission and EPSP size, summation of synaptic inputs remains unaffected during membrane oscillations.
Jørgensen, Tanja Schjødt; Henriksen, Marius; Rosager, Sara; Klokker, Louise; Ellegaard, Karen; Danneskiold-Samsøe, Bente; Bliddal, Henning; Graven-Nielsen, Thomas
2017-12-29
Background and aims Despite the high prevalence of knee osteoarthritis (OA), it remains one of the most frequent knee disorders without a cure. Pain and disability are prominent clinical features of knee OA. Knee OA pain is typically localized but can also be referred to the thigh or lower leg. Widespread hyperalgesia has been found in knee OA patients. In addition, patients with hyperalgesia in the OA knee joint show increased pain summation scores upon repetitive stimulation of the OA knee suggesting the involvement of facilitated central mechanisms in knee OA. The dynamics of the pain system (i.e., the adaptive responses to pain) has been widely studied, but mainly from experiments on healthy subjects, whereas less is known about the dynamics of the pain system in chronic pain patients, where the pain system has been activated for a long time. The aim of this study was to assess the dynamics of the nociceptive system quantitatively in knee osteoarthritis (OA) patients before and after induction of experimental knee pain. Methods Ten knee osteoarthritis (OA) patients participated in this randomized crossover trial. Each subject was tested on two days separated by 1 week. The most affected knee was exposed to experimental pain or control, in a randomized sequence, by injection of hypertonic saline into the infrapatellar fat pad and a control injection of isotonic saline. Pain areas were assessed by drawings on anatomical maps. Pressure pain thresholds (PPT) at the knee, thigh, lower leg, and arm were assessed before, during, and after the experimental pain and control conditions. Likewise, temporal summation of pressure pain on the knee, thigh and lower leg muscles was assessed. Results Experimental knee pain decreased the PPTs at the knee (P < 0.01) and facilitated the temporal summation on the knee and adjacent muscles (P < 0.05). No significant difference was found at the control site (the contralateral arm) (P = 0.77).
Further, the experimental knee pain revealed overall higher VAS scores (facilitated temporal summation of pain) at the knee (P < 0.003) and adjacent muscles (P < 0.0001) compared with the control condition. The experimental knee pain areas were larger compared with the OA knee pain areas before the injection. Conclusions Acute experimental knee pain induced in patients with knee OA caused hyperalgesia and facilitated temporal summation of pain at the knee and surrounding muscles, illustrating that the pain system in individuals with knee OA can be affected even after many years of nociceptive input. This study indicates that the adaptability in the pain system is intact in patients with knee OA, which opens for opportunities to prevent development of centralized pain syndromes.
Subiaul, Francys; Krajkowski, Edward; Price, Elizabeth E.; Etz, Alexander
2015-01-01
Children are exceptional, even ‘super,’ imitators but comparatively poor independent problem-solvers or innovators. Yet, imitation and innovation are both necessary components of cumulative cultural evolution. Here, we explored the relationship between imitation and innovation by assessing children’s ability to generate a solution to a novel problem by imitating two different action sequences demonstrated by two different models, an example of imitation by combination, which we refer to as “summative imitation.” Children (N = 181) from 3 to 5 years of age and across three experiments were tested in a baseline condition or in one of six demonstration conditions, varying in the number of models and opening techniques demonstrated. Across experiments, more than 75% of children evidenced summative imitation, opening both compartments of the problem box and retrieving the reward hidden in each. Generally, learning different actions from two different models was as good (and in some cases, better) than learning from 1 model, but the underlying representations appear to be the same in both demonstration conditions. These results show that summative imitation not only facilitates imitation learning but can also result in new solutions to problems, an essential feature of innovation and cumulative culture. PMID:26441782
Kealy-Bateman, Warren; Kotze, Beth; Lampe, Lisa
2016-12-01
To provide information relevant to decision-making around the timing of attempting the centrally administered summative assessments in the Royal Australian and New Zealand College of Psychiatrists (RANZCP) 2012 Fellowship Program. We consider the new Competency-Based Fellowship Program of the RANZCP and its underlying philosophy, the trainee trajectory within the program and the role of the supervisor. The relationship between workplace-based and external assessments is discussed. The timing of attempting centrally administered summative assessments is considered within the pedagogical framework of medical competencies development. Although successful completion of all the centrally administered summative assessments requires demonstration of a junior consultant standard of competency, the timing at which this standard will most commonly be achieved is likely to vary from assessment to assessment. There are disadvantages attendant upon prematurely attempting assessments, and trainees are advised to carefully consider the requirements of each assessment and match this against their current level of knowledge and skills. Trainees and supervisors need to be clear about the competencies required for each of the external assessments and match this against the trainee's current competencies to assist in decision-making about the timing of assessments and planning for future learning. © The Royal Australian and New Zealand College of Psychiatrists 2016.
Experimental and modeling results of creep fatigue life of Inconel 617 and Haynes 230 at 850 C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiang; Sokolov, Mikhail A; Sham, Sam
Creep fatigue testing of the Ni-based superalloys Inconel 617 and Haynes 230 was conducted in air at 850 °C. Tests were performed with fully reversed axial strain control at a total strain range of 0.5%, 1.0% or 1.5% and hold time at maximum tensile strain for 3, 10 or 30 min. In addition, two creep fatigue life prediction methods, i.e. linear damage summation and frequency-modified tensile hysteresis energy modeling, were evaluated and compared with experimental results. Under all creep fatigue tests, Haynes 230 performed better than Inconel 617. Compared to the low cycle fatigue life, the cycles to failure for both materials decreased under creep fatigue test conditions. Longer hold times at maximum tensile strain caused a further reduction in the creep fatigue life of both materials. The linear damage summation could predict the creep fatigue life of Inconel 617 for limited test conditions, but considerably underestimated the creep fatigue life of Haynes 230. In contrast, frequency-modified tensile hysteresis energy modeling showed promising creep fatigue life prediction results for both materials.
NASA Astrophysics Data System (ADS)
Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.
2015-06-01
Non-linear entropy stability and a summation-by-parts (SBP) framework are used to derive entropy stable interior interface coupling for the semi-discretized three-dimensional (3D) compressible Navier-Stokes equations. A complete semi-discrete entropy estimate for the interior domain is achieved combining a discontinuous entropy conservative operator of any order [1,2] with an entropy stable coupling condition for the inviscid terms, and a local discontinuous Galerkin (LDG) approach with an interior penalty (IP) procedure for the viscous terms. The viscous penalty contributions scale with the inverse of the Reynolds number (Re) so that for Re → ∞ their contributions vanish and only the entropy stable inviscid interface penalty term is recovered. This paper extends the interface couplings presented [1,2] and provides a simple and automatic way to compute the magnitude of the viscous IP term. The approach presented herein is compatible with any diagonal norm summation-by-parts (SBP) spatial operator, including finite element, finite volume, finite difference schemes and the class of high-order accurate methods which include the large family of discontinuous Galerkin discretizations and flux reconstruction schemes.
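As a concrete illustration of the diagonal-norm SBP property this abstract relies on, the sketch below builds the classical second-order first-derivative SBP operator (a generic textbook operator, not the high-order schemes of the paper) and checks Q + Qᵀ = B, the discrete analogue of integration by parts that underpins entropy estimates.

```python
# Classical second-order summation-by-parts (SBP) first-derivative
# operator D = H^{-1} Q on n nodes with spacing h.  H is a diagonal
# positive "norm" (quadrature) matrix and Q satisfies Q + Q^T = B,
# where B = diag(-1, 0, ..., 0, 1) carries the boundary terms.
n, h = 6, 1.0
H = [0.5 * h] + [h] * (n - 2) + [0.5 * h]      # diagonal of H
Q = [[0.0] * n for _ in range(n)]
for i in range(n - 1):
    Q[i][i + 1], Q[i + 1][i] = 0.5, -0.5
Q[0][0], Q[n - 1][n - 1] = -0.5, 0.5

def apply_D(v):
    """Apply the difference operator D = H^{-1} Q to grid values v."""
    return [sum(Q[i][j] * v[j] for j in range(n)) / H[i] for i in range(n)]

# SBP property: Q + Q^T equals the boundary matrix B.
for i in range(n):
    for j in range(n):
        b = -1.0 if i == j == 0 else (1.0 if i == j == n - 1 else 0.0)
        assert abs(Q[i][j] + Q[j][i] - b) < 1e-12

# D differentiates linear data exactly: the derivative of x is 1 everywhere.
assert all(abs(d - 1.0) < 1e-12 for d in apply_D([i * h for i in range(n)]))
```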
Ground state energies from converging and diverging power series expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisowski, C.; Norris, S.; Pelphrey, R.
2016-10-15
It is often assumed that bound states of quantum mechanical systems are intrinsically non-perturbative in nature and therefore any power series expansion methods should be inapplicable to predict the energies for attractive potentials. However, if the spatial domain of the Schrödinger Hamiltonian for attractive one-dimensional potentials is confined to a finite length L, the usual Rayleigh–Schrödinger perturbation theory can converge rapidly and is perfectly accurate in the weak-binding region where the ground state’s spatial extension is comparable to L. Once the binding strength is so strong that the ground state’s extension is less than L, the power expansion becomes divergent, consistent with the expectation that bound states are non-perturbative. However, we propose a new truncated Borel-like summation technique that can recover the bound state energy from the diverging sum. We also show that perturbation theory becomes divergent in the vicinity of an avoided-level crossing. Here the same numerical summation technique can be applied to reproduce the energies from the diverging perturbative sums.
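The truncated Borel-like technique of the paper is specific to its confined Hamiltonian, but the underlying idea can be illustrated with the textbook Euler series: the sum of (-1)^n n! x^n diverges for every x > 0, yet its Borel sum, the integral of e^(-t)/(1 + x t) over t from 0 to infinity, is finite and recovers the physical value. A minimal numerical sketch:

```python
import math

def partial_sum(x, n_terms):
    """Partial sums of the divergent Euler series sum_n (-1)^n n! x^n.
    The terms eventually grow factorially, so the sums oscillate wildly."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(n_terms))

def borel_sum(x, steps=200000, t_max=40.0):
    """Borel resummation of the Euler series: the Borel transform of
    sum (-1)^n n! x^n is 1/(1 + x t), so the resummed value is
    integral_0^inf e^{-t} / (1 + x t) dt (trapezoidal rule here)."""
    dt = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-t) / (1.0 + x * t)
    return total * dt

# For x = 0.1 the first few partial sums hover near the Borel value
# (~0.916) before the series visibly diverges at higher orders.
print(partial_sum(0.1, 5), borel_sum(0.1))
```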
Experimental and modeling results of creep-fatigue life of Inconel 617 and Haynes 230 at 850 °C
NASA Astrophysics Data System (ADS)
Chen, Xiang; Sokolov, Mikhail A.; Sham, Sam; Erdman, Donald L., III; Busby, Jeremy T.; Mo, Kun; Stubbins, James F.
2013-01-01
Creep-fatigue testing of the Ni-based superalloys Inconel 617 and Haynes 230 was conducted in air at 850 °C. Tests were performed with fully reversed axial strain control at a total strain range of 0.5%, 1.0% or 1.5% and hold time at maximum tensile strain for 3, 10 or 30 min. In addition, two creep-fatigue life prediction methods, i.e. linear damage summation and frequency-modified tensile hysteresis energy modeling, were evaluated and compared with experimental results. Under all creep-fatigue tests, Haynes 230 performed better than Inconel 617. Compared to the low cycle fatigue life, the cycles to failure for both materials decreased under creep-fatigue test conditions. Longer hold times at maximum tensile strain caused a further reduction in the creep-fatigue life of both materials. The linear damage summation could predict the creep-fatigue life of Inconel 617 for limited test conditions, but considerably underestimated the creep-fatigue life of Haynes 230. In contrast, frequency-modified tensile hysteresis energy modeling showed promising creep-fatigue life prediction results for both materials.
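The linear damage summation evaluated in both of these studies is the standard time-fraction rule: fatigue damage (cycle fractions) plus creep damage (hold-time fractions), with failure predicted when the total reaches unity. A minimal sketch with hypothetical inputs (not the test data of the paper):

```python
def linear_damage_summation(fatigue_cycles, fatigue_life,
                            hold_times, rupture_times):
    """Time-fraction linear damage rule: total damage is the sum of
    fatigue cycle fractions n_i / N_i and creep time fractions
    t_j / t_rj.  Failure is predicted when the sum reaches ~1."""
    d_fatigue = sum(n / N for n, N in zip(fatigue_cycles, fatigue_life))
    d_creep = sum(t / tr for t, tr in zip(hold_times, rupture_times))
    return d_fatigue + d_creep

# Hypothetical example: 100 cycles of a block with a 1000-cycle
# fatigue life, plus 30 h of hold at a stress with 300 h rupture life.
print(linear_damage_summation([100], [1000], [30.0], [300.0]))  # 0.2
```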
ERIC Educational Resources Information Center
Wilson, Jennifer L.
2010-01-01
The study compared 2005 posttest data with 2008 posttest data to determine student end-of-school-year academic achievement outcomes across three academic levels (above average, average, and below average chemistry potential) and two teacher homework evaluation methods (assigned but not graded, and assigned and graded) on teacher prepared…
Harvest prediction in 'Algerie' loquat.
Hueso, Juan J; Pérez, Mercedes; Alonso, Francisca; Cuevas, Julián
2007-05-01
Plant phenology is in great measure driven by air temperature. To forecast harvest time for 'Algerie' loquat accurately, the growing degree days (GDD) needed from bloom to ripening were determined using data from nine seasons. The methods proposed by Zalom et al. (Zalom FG, Goodell PB, Wilson LT, Barnett WW, Bentley W, Degree-days: the calculation and use of heat units in pest management, leaflet no 21373, Division Agriculture and Natural Resources, University of California 10 pp, 1983) were compared as regards their ability to estimate heat summation based on hourly records. All the methods gave remarkably similar results for our cultivation area, although the double-sine method showed higher performance when temperatures were low. A base temperature of 3 degrees C is proposed for 'Algerie' loquat because it provides a coefficient of variation in GDD among seasons of below 5%, and because of its compatibility with loquat growth. Based on these determinations, 'Algerie' loquat requires 1,715 GDD from bloom to harvest; under our conditions this heat is accumulated over an average of 159 days. Our procedure permits the 'Algerie' harvest date to be estimated with a mean error of 4.4 days (<3% for the bloom-harvest period). GDD summation did not prove superior to the use of the number of calendar days for predicting 'Algerie' harvest under non-limiting growing conditions. However, GDD reflects the developmental rate in water-stressed trees better than calendar days. Trees under deficit irrigation during flower development required more time and more heat to ripen their fruits.
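The study's heat summation can be sketched directly: accumulate growing degree days above the 3 °C base proposed for 'Algerie' loquat until the 1,715 GDD bloom-to-harvest target is reached. The paper compares hourly sine-based methods (Zalom et al.); the sketch below uses the simple daily-average method for illustration, with hypothetical temperature data:

```python
def daily_gdd(t_max, t_min, t_base=3.0):
    """Simple-average daily growing degree days above the base
    temperature (3 C is the base proposed for 'Algerie' loquat)."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

def days_to_harvest(daily_temps, target_gdd=1715.0):
    """Count days from bloom until accumulated GDD reaches the
    target (1,715 GDD from bloom to harvest in the study)."""
    total = 0.0
    for day, (t_max, t_min) in enumerate(daily_temps, start=1):
        total += daily_gdd(t_max, t_min)
        if total >= target_gdd:
            return day
    return None  # target not reached within the record

# Hypothetical constant weather: 25/15 C gives 17 GDD per day.
print(days_to_harvest([(25.0, 15.0)] * 200))  # 101
```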
Reliability analysis of the objective structured clinical examination using generalizability theory.
Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián
2016-01-01
Background The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. Methods An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. Results The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Conclusions Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
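The G coefficient reported above (0.93) comes from the estimated variance components; in a simple person-crossed-with-station decision study, the relative G coefficient is person variance over person variance plus residual error averaged across stations. A sketch with illustrative variance components (not the study's estimates):

```python
def g_coefficient(var_person, var_residual, n_stations):
    """Relative G coefficient for a person x station D-study:
    true-score (person) variance divided by itself plus the
    residual error variance averaged over the number of stations.
    More stations shrink the error term and raise the coefficient."""
    return var_person / (var_person + var_residual / n_stations)

# Illustrative variance components only: with an 18-station OSCE,
# residual error is divided by 18, yielding a high coefficient.
print(round(g_coefficient(1.0, 2.0, 18), 2))
```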
NASA Astrophysics Data System (ADS)
Rezaei, Fatemeh; Tavassoli, Seyed Hassan
2016-11-01
In this paper, a study is performed on the spectral lines of plasma radiation created by focusing a Nd:YAG laser on Al standard alloys at atmospheric air pressure. A new theoretical method is presented to investigate the evolution of the optical depth of the plasma based on the radiative transfer equation, under the LTE condition. This work relies on the Boltzmann distribution and line-broadening equations, as well as the self-absorption relation. An experimental set-up is then devised to extract plasma parameters: the temperature from modified line-ratio analysis, the electron density from the Stark broadening mechanism, the line intensities of two spectral lines of the same species and ionization stage, and the plasma length from shadowgraphy. In this method, the summation and the ratio of two spectral lines are considered for evaluating the temporal variations of the plasma parameters in a homogeneous LIBS plasma. The main advantage of this method is that it covers both optically thin and optically thick laser-induced plasmas without direct calculation of the self-absorption coefficient. Moreover, the presented model can also be used to follow the transition of the plasma from the optically thin to the optically thick regime. The results show that by measuring the line intensities of two spectral lines at different evolution times, the plasma cooling and the growth of the optical depth can be followed.
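The line-ratio temperature diagnostic mentioned above rests on the Boltzmann relation: for two lines of the same species and ionization stage in an optically thin LTE plasma, the intensity ratio fixes the excitation temperature. A minimal sketch of the generic two-line formula (not the authors' full summation-and-ratio model; all inputs below are placeholders):

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_two_line_temperature(i1, i2, g1, g2, a1, a2,
                                   lam1, lam2, e1, e2):
    """Excitation temperature from two emission lines of the same
    species and charge state (optically thin, LTE):
        I1/I2 = (g1 A1 lam2) / (g2 A2 lam1) * exp(-(E1 - E2) / (kB T))
    where g are statistical weights, A transition probabilities,
    lam wavelengths, and E upper-level energies in eV."""
    ratio = (i1 / i2) * (g2 * a2 * lam1) / (g1 * a1 * lam2)
    return -(e1 - e2) / (K_B_EV * math.log(ratio))
```

With atomic data set to unity, a synthetic intensity ratio generated at 10,000 K is recovered exactly, which is a quick self-consistency check for an implementation.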
Macaluso, Stephanie; Marcus, Andrea Fleisch; Rigassio-Radler, Diane; Byham-Gray, Laura D; Touger-Decker, Riva
2015-11-01
To determine the relationship between physical activity (PA) and health-related quality of life among university employees who enrolled in a worksite wellness program (WWP). The study was an interim analysis of data collected in a WWP. The sample consisted of 64 participants who completed 12- and 26-week follow-up appointments. Self-reported anxiety days significantly decreased from baseline to week 12. There were positive trends in self-rated health, vitality days, and summative unhealthy days from baseline to week 26. Among those with a self-reported history of hypertension (HTN), there was an inverse correlation between PA and summative physically and mentally unhealthy days at week 12. Among participants in this WWP with HTN, as PA increased there was a significant decrease in summative physically and mentally unhealthy days at week 12.
Arthur, Winfred; Cho, Inchul; Muñoz, Gonzalo J
2016-10-01
We examined the so-called "red effect" in the context of higher education summative exams under the premise that unlike the conditions or situations where this effect typically has been obtained, the totality of factors, such as higher motivation, familiarity with exam material, and more reliance on domain knowledge that characterize high-stakes testing such as those in operational educational settings, are likely to mitigate any color effects. Using three naturally occurring archival data sets in which students took exams on either red or green exam booklets, the results indicated that booklet color (red vs. green) did not affect exam performance. From a scientific perspective, the results suggest that color effects may be attenuated by factors that characterize high-stakes assessments, and from an applied perspective, they suggest that the choice of red vs. green exam booklets in higher education summative evaluations is likely not a concern.
Valero, Germán; Cárdenas, Paula
The Faculty of Veterinary Medicine and Animal Science of the National Autonomous University of Mexico (UNAM) uses the Moodle learning management system for formative and summative computer assessment. The authors of this article (the teacher primarily responsible for Moodle implementation and a researcher who is a recent Moodle adopter) describe and discuss the students' and teachers' attitudes to summative and formative computer assessment in Moodle. Item analysis of quiz results helped us to identify and fix poorly performing questions, which greatly reduced student complaints and improved objective assessment. The use of certainty-based marking (CBM) in formative assessment in veterinary pathology was well received by the students and should be extended to more courses. The importance of having proficient computer support personnel should not be underestimated. A properly translated language pack is essential for the use of Moodle in a language other than English.
Unseen stimuli modulate conscious visual experience: evidence from inter-hemispheric summation.
de Gelder, B; Pourtois, G; van Raamsdonk, M; Vroomen, J; Weiskrantz, L
2001-02-12
Emotional facial expression can be discriminated despite extensive lesions of striate cortex. Here we report differential performance with recognition of facial stimuli in the intact visual field depending on simultaneous presentation of congruent or incongruent stimuli in the blind field. Three experiments were based on inter-hemispheric summation. Redundant stimulation in the blind field led to shorter latencies for stimulus detection in the intact field. Recognition of the expression of a half-face expression in the intact field was faster when the other half of the face presented to the blind field had a congruent expression. Finally, responses to the expression of whole faces to the intact field were delayed for incongruent facial expressions presented in the blind field. These results indicate that the neuro-anatomical pathways (extra-striate cortical and sub-cortical) sustaining inter-hemispheric summation can operate in the absence of striate cortex.
Shortening the Xerostomia Inventory
Thomson, William Murray; van der Putten, Gert-Jan; de Baat, Cees; Ikebe, Kazunori; Matsuda, Ken-ichi; Enoki, Kaori; Hopcraft, Matthew; Ling, Guo Y
2011-01-01
Objectives To determine the validity and properties of the Summated Xerostomia Inventory-Dutch Version in samples from Australia, The Netherlands, Japan and New Zealand. Study design Six cross-sectional samples of older people from The Netherlands (N = 50), Australia (N = 637 and N = 245), Japan (N = 401) and New Zealand (N = 167 and N = 86). Data were analysed using the Summated Xerostomia Inventory-Dutch Version. Results Almost all data-sets revealed a single extracted factor which explained about half of the variance, with Cronbach’s alpha values of at least 0.70. When mean scale scores were plotted against a “gold standard” xerostomia question, statistically significant gradients were observed, with the highest score seen in those who always had dry mouth, and the lowest in those who never had it. Conclusion The Summated Xerostomia Inventory-Dutch Version is valid for measuring xerostomia symptoms in clinical and epidemiological research. PMID:21684773
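The internal consistency reported for the Summated Xerostomia Inventory (Cronbach's alpha of at least 0.70) can be computed directly from per-item scores. A minimal sketch using population variances (the data below are illustrative, not the study's):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a summated rating scale.  `items` is a
    list of per-item score lists, aligned across respondents:
        alpha = k/(k-1) * (1 - sum(item variances) / var(total score))
    where k is the number of items."""
    k = len(items)
    n_resp = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n_resp)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly correlated items give alpha = 1.0 (illustrative data).
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))
```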
Co-occurring risk factors for current cigarette smoking in a U.S. nationally representative sample
Higgins, Stephen T.; Kurti, Allison N.; Redner, Ryan; White, Thomas J.; Keith, Diana R.; Gaalema, Diann E.; Sprague, Brian L.; Stanton, Cassandra A.; Roberts, Megan E.; Doogan, Nathan J.; Priest, Jeff S.
2016-01-01
Introduction Relatively little has been reported characterizing cumulative risk associated with co-occurring risk factors for cigarette smoking. The purpose of the present study was to address that knowledge gap in a U.S. nationally representative sample. Methods Data were obtained from 114,426 adults (≥ 18 years) in the U.S. National Survey on Drug Use and Health (years 2011–13). Multiple logistic regression and classification and regression tree (CART) modeling were used to examine risk of current smoking associated with eight co-occurring risk factors (age, gender, race/ethnicity, educational attainment, poverty, drug abuse/dependence, alcohol abuse/dependence, mental illness). Results Each of these eight risk factors was independently associated with significant increases in the odds of smoking when concurrently present in a multiple logistic regression model. Effects of risk-factor combinations were typically summative. Exceptions to that pattern were in the direction of less-than-summative effects when one of the combined risk factors was associated with generally high or low rates of smoking (e.g., drug abuse/dependence, age ≥65). CART modeling identified subpopulation risk profiles wherein smoking prevalence varied from a low of 11% to a high of 74% depending on particular risk factor combinations. Being a college graduate was the strongest independent predictor of smoking status, classifying 30% of the adult population. Conclusions These results offer strong evidence that the effects associated with common risk factors for cigarette smoking are independent, cumulative, and generally summative. The results also offer potentially useful insights into national population risk profiles around which U.S. tobacco policies can be developed or refined. PMID:26902875
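The "summative" pattern described above is what a main-effects logistic model predicts: each risk factor adds its coefficient on the log-odds scale, so combined effects multiply the odds. A sketch with hypothetical coefficients (not the study's fitted values):

```python
import math

def smoking_probability(intercept, coefficients):
    """Main-effects logistic model: co-occurring risk factors are
    summative on the log-odds scale, so each present factor adds
    its coefficient to the linear predictor before the sigmoid."""
    z = intercept + sum(coefficients)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for illustration only
base = -2.0  # baseline log-odds with no risk factors present
print(smoking_probability(base, []))           # baseline prevalence
print(smoking_probability(base, [0.8, 0.6]))   # two co-occurring factors
```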
Modified Monovision With Spherical Aberration to Improve Presbyopic Through-Focus Visual Performance
Zheleznyak, Len; Sabesan, Ramkumar; Oh, Je-Sun; MacRae, Scott; Yoon, Geunyoung
2013-01-01
Purpose. To investigate the impact on visual performance of modifying monovision with monocularly induced spherical aberration (SA) to increase depth of focus (DoF), thereby enhancing binocular through-focus visual performance. Methods. A binocular adaptive optics (AO) vision simulator was used to correct both eyes' native aberrations and induce traditional (TMV) and modified (MMV) monovision corrections. TMV was simulated with 1.5 diopters (D) of anisometropia (dominant eye at distance, nondominant eye at near). Zernike primary SA was induced in the nondominant eye in MMV. A total of four MMV conditions were tested with various amounts of SA (±0.2 and ±0.4 μm) and fixed anisometropia (1.5 D). Monocular and binocular visual acuity (VA) and contrast sensitivity (CS) at 10 cyc/deg and binocular summation were measured through-focus in three cycloplegic subjects with 4-mm pupils. Results. MMV with positive SA had a larger benefit for intermediate distances (1.5 lines at 1.0 D) than with negative SA, compared with TMV. Negative SA had a stronger benefit in VA at near. DoF of all MMV conditions was 3.5 ± 0.5 D (mean) as compared with TMV (2.7 ± 0.3 D). Through-focus CS at 10 cyc/deg was significantly reduced with MMV as compared to TMV only at intermediate object distances, but was unaffected at distance. Binocular summation was absent at all object distances except 0.5 D, where it improved in MMV by 19% over TMV. Conclusions. Modified monovision with SA improves through-focus VA and DoF as compared with traditional monovision. Binocular summation also increased as interocular similarity of image quality increased due to extended monocular DoF. PMID:23557742
Super-Resolution Community Detection for Layer-Aggregated Multilayer Networks
Taylor, Dane; Caceres, Rajmonda S.; Mucha, Peter J.
2017-01-01
Applied network science often involves preprocessing network data before applying a network-analysis method, and there is typically a theoretical disconnect between these steps. For example, it is common to aggregate time-varying network data into windows prior to analysis, and the trade-offs of this preprocessing are not well understood. Focusing on the problem of detecting small communities in multilayer networks, we study the effects of layer aggregation by developing random-matrix theory for modularity matrices associated with layer-aggregated networks with N nodes and L layers, which are drawn from an ensemble of Erdős–Rényi networks with communities planted in subsets of layers. We study phase transitions in which eigenvectors localize onto communities (allowing their detection) and which occur for a given community provided its size surpasses a detectability limit K*. When layers are aggregated via a summation, we obtain K* ∝ O(√(NL)/T), where T is the number of layers across which the community persists. Interestingly, if T is allowed to vary with L, then summation-based layer aggregation enhances small-community detection even if the community persists across a vanishing fraction of layers, provided that T/L decays more slowly than O(L^(-1/2)). Moreover, we find that thresholding the summation can, in some cases, cause K* to decay exponentially, decreasing by orders of magnitude in a phenomenon we call super-resolution community detection. In other words, layer aggregation with thresholding is a nonlinear data filter enabling detection of communities that are otherwise too small to detect. Importantly, different thresholds generally enhance the detectability of communities having different properties, illustrating that community detection can be obscured if one analyzes network data using a single threshold. PMID:29445565
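As a rough illustration of the summation-based aggregation described in this record, the sketch below (with synthetic parameters of our own choosing, not the paper's) plants a dense community in a few layers of an Erdős–Rényi multilayer ensemble, aggregates the layers by summation, and checks that the leading eigenvector of the aggregate's modularity matrix localizes onto the planted community:

```python
import numpy as np

# Illustrative parameters (ours, not the paper's).
rng = np.random.default_rng(0)
N, L, p = 200, 10, 0.05            # nodes, layers, background edge probability

def er_layer():
    """One symmetric Erdős–Rényi adjacency matrix."""
    A = (rng.random((N, N)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T

layers = [er_layer() for _ in range(L)]

community = np.arange(20)           # plant a 20-node clique in T = 4 layers
for A in layers[:4]:
    A[np.ix_(community, community)] = 1.0
    np.fill_diagonal(A, 0.0)

A_sum = sum(layers)                 # summation-based layer aggregation
k = A_sum.sum(axis=1)
B = A_sum - np.outer(k, k) / k.sum()   # modularity matrix of the aggregate

# Detection: the leading eigenvector localizes onto the planted community.
v = np.linalg.eigh(B)[1][:, -1]
weight = float(np.sum(v[community] ** 2))
print(weight)                       # close to 1 when the community is detectable
```

Shrinking the community or the number of layers T in which it appears pushes it below the detectability limit, at which point the leading eigenvector delocalizes and `weight` drops toward the uniform value 20/N.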
Evaluation of molecular dynamics simulation methods for ionic liquid electric double layers.
Haskins, Justin B; Lawson, John W
2016-05-14
We investigate how systematically increasing the accuracy of various molecular dynamics modeling techniques influences the structure and capacitance of ionic liquid electric double layers (EDLs). The techniques probed concern long-range electrostatic interactions, electrode charging (constant charge versus constant potential conditions), and electrolyte polarizability. Our simulations are performed on a quasi-two-dimensional, or slab-like, model capacitor, which is composed of a polarizable ionic liquid electrolyte, [EMIM][BF4], interfaced between two graphite electrodes. To ensure an accurate representation of EDL differential capacitance, we derive new fluctuation formulas that resolve the differential capacitance as a function of electrode charge or electrode potential. The magnitude of differential capacitance shows sensitivity to different long-range electrostatic summation techniques, while the shape of differential capacitance is affected by charging technique and the polarizability of the electrolyte. For long-range summation techniques, errors in magnitude can be mitigated by employing two-dimensional or corrected three-dimensional electrostatic summations, which led to electric fields that conform to those of a classical electrostatic parallel plate capacitor. With respect to charging, the changes in shape are a result of ions in the Stern layer (i.e., ions at the electrode surface) having a higher electrostatic affinity to constant potential electrodes than to constant charge electrodes. For electrolyte polarizability, shape changes originate from induced dipoles that soften the interaction of Stern layer ions with the electrode. The softening is traced to ion correlations vertical to the electrode surface that induce dipoles that oppose double layer formation. In general, our analysis indicates an accuracy-dependent differential capacitance profile that transitions from the characteristic camel shape with coarser representations to a more diffuse profile with finer representations.
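For reference, the "corrected three-dimensional summation" alluded to in this record is commonly implemented as the standard slab correction of Yeh and Berkowitz: a vacuum gap is inserted between periodic replicas of the slab along the surface normal z, and a term depending on the cell's net dipole moment in that direction is added to the usual 3D Ewald energy (written here in Gaussian units, with V the volume of the elongated cell):

```latex
E_{\text{corr}} \;=\; \frac{2\pi}{V}\, M_z^{2},
\qquad
M_z \;=\; \sum_{i} q_i z_i .
```

This removes the spurious field between periodic slab images, so the electric field in the gap matches that of an isolated parallel-plate capacitor.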
NASA Astrophysics Data System (ADS)
Duchko, Andrey; Bykov, Alexandr
2015-06-01
Spectra processing remains a highly relevant task in molecular spectroscopy. Nevertheless, existing techniques for computing vibrational energy levels and wave functions often reach an impasse, and standard quantum-mechanical approaches frequently face intractable difficulties. The variational method demands enormous computational resources, while perturbative approaches run into divergent series, so the problem urgently calls for specific resummation techniques. In this research, Rayleigh-Schrödinger perturbation theory is applied to the calculation of excited vibrational energy levels of H_2CO. It is known that perturbation series diverge in the case of anharmonic resonance coupling between vibrational states [1]. Nevertheless, the application of advanced divergent-series summation techniques makes it possible to calculate the energy to high precision (more than 10 correct digits) even for highly excited states of the molecule [2]. For this purpose we have applied several summation techniques based on high-order Padé-Hermite approximants. Our research shows that the behaviour of the series depends entirely on the singularities of the complex energy function inside the unit circle; choosing an approximating function that models these singularities therefore makes it possible to calculate the sum of the divergent series. Our calculations for the formaldehyde molecule show that the efficiency of each summation technique depends on the resonance type. REFERENCES 1. J. Cizek, V. Spirko, and O. Bludsky, On the use of divergent series in vibrational spectroscopy. Two- and three-dimensional oscillators, J. Chem. Phys. 99, 7331 (1993). 2. A. V. Sergeev and D. Z. Goodson, Singularity analysis of fourth-order Møller-Plesset perturbation theory, J. Chem. Phys. 124, 4111 (2006).
Arba Mosquera, Samuel; Verma, Shwetabh
2016-01-01
We analyze the role of bilateral symmetry in enhancing binocular visual ability in human eyes, and further explore how efficiently bilateral symmetry is preserved in different ocular surgical procedures. The inclusion criterion for this review was strict relevance to the clinical questions under research. Enantiomorphism has been reported in lower-order aberrations, higher-order aberrations, and cone directionality. When contrast differs in the two eyes, binocular acuity is better than the monocular acuity of the eye that receives higher contrast. Anisometropia occurs uncommonly in large populations. Anisometropia seen in infancy and childhood is transitory and of little consequence for visual acuity. Binocular summation of contrast signals declines with age, independent of inter-ocular differences. The symmetric associations between the right and left eye could be explained by the symmetry in pupil offset and visual axis, which is always nasal in both eyes. Binocular summation mitigates poor visual performance under low luminance conditions, and strong inter-ocular disparity detrimentally affects binocular summation. Considerable symmetry of response exists in fellow eyes of patients undergoing myopic PRK and LASIK; however, the methods used to determine whether or not symmetry is maintained consist of comparing individual terms in a variety of ad hoc ways both before and after refractive surgery, ignoring the fact that retinal image quality for any individual is based on the sum of all terms. The analysis of bilateral symmetry should be related to the patients' binocular vision status. The role of aberrations in monocular and binocular vision needs further investigation. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
Computational scheme for pH-dependent binding free energy calculation with explicit solvent.
Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R
2016-01-01
We present a computational scheme to compute the pH-dependence of binding free energy with explicit solvent. Despite the importance of pH, the effect of pH has been generally neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state or releasing a single protonation state to multiple states, the pH dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME), and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while cutoff method results are off by 2 kcal mol⁻¹. We also discuss the characteristics of three long-range interaction calculation methods for constant-pH simulations. © 2015 The Protein Society.
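For context, the BAR method mentioned in this record estimates the free-energy difference ΔF between two states 0 and 1 from samples of both. In its simplest form (equal numbers n of forward and reverse samples; f is the Fermi function f(x) = 1/(1 + e^x)), ΔF is the root of the implicit relation

```latex
\sum_{i=1}^{n} f\!\big(\beta(\Delta U_{0\to1}^{(i)} - \Delta F)\big)
\;=\;
\sum_{j=1}^{n} f\!\big(\beta(\Delta U_{1\to0}^{(j)} + \Delta F)\big),
```

where ΔU_{0→1} = U_1 − U_0 is evaluated on configurations sampled from state 0, and ΔU_{1→0} = U_0 − U_1 on configurations sampled from state 1. Unequal sample sizes add a constant offset to ΔF, which the general form of Bennett's estimator accounts for.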
NASA Astrophysics Data System (ADS)
Cherry, Simon; Ruffle, Jon
2014-06-01
Physics in Medicine and Biology (PMB) awards its 'Citations Prize' to the authors of the original research paper that has received the most citations in the preceding five years (according to the Institute for Scientific Information (ISI)). The lead author of the winning paper is presented with the Rotblat Medal (named in honour of Professor Sir Joseph Rotblat, a Nobel Prize winner who was also the second, and longest serving, Editor of PMB, from 1961-1972). The winner of the 2013 Citations Prize, for the paper which received the most citations in the previous five years (2008-2012), is as follows. Figure: Four of the prize-winning authors. From left to right: Thomas Istel (Philips), Jens-Peter Schlomka (with medal, MorphoDetection), Ewald Roessl (Philips), and Gerhard Martens (Philips). Title: Experimental feasibility of multi-energy photon-counting K-edge imaging in pre-clinical computed tomography Authors: Jens Peter Schlomka1, Ewald Roessl1, Ralf Dorscheid2, Stefan Dill2, Gerhard Martens1, Thomas Istel1, Christian Bäumer3, Christoph Herrmann3, Roger Steadman3, Günter Zeitler3, Amir Livne4 and Roland Proksa1 Institutions: 1 Philips Research Europe, Sector Medical Imaging Systems, Hamburg, Germany 2 Philips Research Europe, Engineering & Technology, Aachen, Germany 3 Philips Research Europe, Sector Medical Imaging Systems, Aachen, Germany 4 Philips Healthcare, Global Research and Advanced Development, Haifa, Israel Reference: Schlomka et al 2008 Phys. Med. Biol. 53 4031-47 This paper becomes the first to win both this Citations Prize and the PMB best paper prize (The Roberts Prize), which it won for the year 2008. Discussion of the significance of the winning paper can be found in this medicalphysicsweb article from the time of the Roberts Prize win (http://medicalphysicsweb.org/cws/article/research/39907). The authors' enthusiasm for their prototype spectral CT system has certainly been reflected in the large number of citations the paper has subsequently received. Our warm congratulations go to the winning authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olteanu, Luiza A.M., E-mail: AnaMariaLuiza.Olteanu@uzgent.be; Madani, Indira; De Neve, Wilfried
Purpose: To assess the accuracy of contour deformation and feasibility of dose summation applying deformable image coregistration in adaptive dose painting by numbers (DPBN) for head and neck cancer. Methods and Materials: Data of 12 head-and-neck-cancer patients treated within a Phase I trial on adaptive ¹⁸F-FDG positron emission tomography (PET)-guided DPBN were used. Each patient had two DPBN treatment plans: the initial plan was based on a pretreatment PET/CT scan; the second adapted plan was based on a PET/CT scan acquired after 8 fractions. The median prescription dose to the dose-painted volume was 30 Gy for both DPBN plans. To obtain deformed contours and dose distributions, pretreatment CT was deformed to per-treatment CT using deformable image coregistration. Deformed contours of regions of interest (ROI_def) were visually inspected and, if necessary, adjusted (ROI_def_ad), and both were compared with manually redrawn ROIs (ROI_m) using Jaccard (JI) and overlap indices (OI). Dose summation was done on the ROI_m, ROI_def_ad, or their unions with the ROI_def. Results: Almost all deformed ROIs were adjusted. The largest adjustment was made in patients with substantially regressing tumors: ROI_def = 11.8 ± 10.9 cm³ vs. ROI_def_ad = 5.9 ± 7.8 cm³ vs. ROI_m = 7.7 ± 7.2 cm³ (p = 0.57). The swallowing structures were the most frequently adjusted ROIs, with the lowest indices for the upper esophageal sphincter: JI = 0.3 (ROI_def) and 0.4 (ROI_def_ad); OI = 0.5 (both ROIs). The mandible needed the least adjustment, with the highest indices: JI = 0.8 (both ROIs), OI = 0.9 (ROI_def), and 1.0 (ROI_def_ad). Summed doses differed non-significantly. There was a trend of higher doses in the targets and lower doses in the spinal cord when doses were summed on unions. Conclusion: Visual inspection and adjustment were necessary for most ROIs. Fast automatic ROI propagation followed by user-driven adjustment appears to be more efficient than labor-intensive de novo drawing. Dose summation using deformable image coregistration was feasible. Biological uncertainties of dose summation strategies warrant further investigation.
2017-10-01
Acute head pain, higher electrical temporal summation, and low socioeconomic status predict chronic post-traumatic pain occurrence in head and neck pain patients; pressure-pain threshold, conditioned pain modulation, and the psychological state of the patients were also assessed. Award Number: W81XWH-15-1-0603; PI: David Yarnitsky; Org: Technion – Israel Institute of Technology; Award Amount: $1,499,904
ERIC Educational Resources Information Center
DeLoach, Regina M.
2011-01-01
The purpose of this "post hoc," summative evaluation was to evaluate the effectiveness of classroom-embedded, individualistic, computer-based learning for middle school students placed at academic risk in schools with a high proportion of Title I eligible students. Data were mined from existing school district databases. For data (n = 393)…
Massey, Scott; Stallman, John; Lee, Louise; Klingaman, Kathy; Holmerud, David
2011-01-01
This paper describes how a systematic analysis of students at risk for failing the Physician Assistant National Certifying Examination (PANCE) may be used to identify which students may benefit from intervention prior to taking the PANCE and thus increase the likelihood of successful completion of the PANCE. The intervention developed and implemented uses various formative and summative examinations to predict students' PANCE scores with a high degree of accuracy. Eight end-of-rotation exams (EOREs) based upon discipline-specific diseases and averaging 100 questions each, a 360-question PANCE simulation (SUMM I), the PACKRAT, and a 700-question summative cognitive examination based upon the NCCPA blueprint (SUMM II) were administered to all students enrolled in the program during the clinical year starting in January 2010 and concluding in December 2010. When the PACKRAT, SUMM I, SUMM II, and the surgery, women's health, and pediatrics EOREs were combined in a regression model, an R value of 0.87 and an R² of 0.75 were obtained. A predicted score was generated for the class of 2009. The predicted PANCE score based upon this model had a final correlation of 0.790 with the actual PANCE score. This pilot study demonstrated that valid predicted scores could be generated from formative and summative examinations to provide valuable feedback and to identify students at risk of failing the PANCE.
A brief simulation intervention increasing basic science and clinical knowledge.
Sheakley, Maria L; Gilbert, Gregory E; Leighton, Kim; Hall, Maureen; Callender, Diana; Pederson, David
2016-01-01
The United States Medical Licensing Examination (USMLE) is increasing clinical content on the Step 1 exam; thus, inclusion of clinical applications within the basic science curriculum is crucial. Including simulation activities during the basic science years bridges the knowledge gap between basic science content and clinical application. To evaluate the effects of a one-off, 1-hour cardiovascular simulation intervention on a summative assessment after adjusting for relevant demographic and academic predictors. This study was a non-randomized study using historical controls to evaluate curricular change. The control group received lecture (n_L = 515) and the intervention group received lecture plus a simulation exercise (n_L+S = 1,066). Assessment included summative exam questions (n = 4) that were scored as pass/fail (≥75%). USMLE-style assessment questions were identical for both cohorts. Descriptive statistics for variables are presented and odds of passage calculated using logistic regression. Undergraduate grade point ratio, MCAT-BS, MCAT-PS, age, attendance at an academic review program, and gender were significant predictors of summative exam passage. Students receiving the intervention were significantly more likely to pass the summative exam than students receiving lecture only (P=0.0003). Simulation plus lecture increases short-term understanding as tested by a written exam. A longitudinal study is needed to assess the effect of a brief simulation intervention on long-term retention of clinical concepts in a basic science curriculum.
Object detection in natural backgrounds predicted by discrimination performance and models
NASA Technical Reports Server (NTRS)
Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.
1997-01-01
Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d′ values with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
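The Minkowski summation described in this record can be sketched in a few lines; the images and values below are illustrative stand-ins for the study's stimuli, and β = ∞ reduces to the peak-difference rule:

```python
import numpy as np

def minkowski_distance(img_a, img_b, beta):
    """Summate absolute pixel differences with a Minkowski
    (generalized vector-magnitude) metric; beta = np.inf
    reduces to the maximum-difference rule."""
    diff = np.abs(np.asarray(img_a, float) - np.asarray(img_b, float))
    if np.isinf(beta):
        return float(diff.max())
    return float((diff ** beta).sum() ** (1.0 / beta))

# Hypothetical stimuli: a flat background, and the same background
# with a two-pixel "object" added.
background = np.zeros((4, 4))
target = background.copy()
target[1, 1] = 3.0
target[2, 2] = 4.0

for beta in (2, 4, np.inf):
    print(beta, minkowski_distance(background, target, beta))
```

As β grows, the metric is increasingly dominated by the largest local difference, which is why the choice of exponent controls how much pooling across image locations the model assumes.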
The Dalgarno-Lewis summation technique: Some comments and examples
NASA Astrophysics Data System (ADS)
Mavromatis, Harry A.
1991-08-01
The Dalgarno-Lewis technique [A. Dalgarno and J. T. Lewis, ``The exact calculation of long-range forces between atoms by perturbation theory,'' Proc. R. Soc. London Ser. A 233, 70-74 (1955)] provides an elegant method to obtain exact results for various orders in perturbation theory, while avoiding the infinite sums which arise in each order. In the present paper this technique, which perhaps has not been exploited as much as it could be, is first reviewed with attention to some of its not-so-straightforward details, and then six examples of the method are given using three different one-dimensional bases.
Plenoptic image watermarking to preserve copyright
NASA Astrophysics Data System (ADS)
Ansari, A.; Dorado, A.; Saavedra, G.; Martinez Corral, M.
2017-05-01
A conventional camera loses a huge amount of the information obtainable from a scene: it does not record the value of the individual rays passing through a point, but merely keeps the summation of the intensities of all the rays passing through that point. Plenoptic images can be exploited to provide a 3D representation of the scene, and watermarking such images can help protect their ownership. In this paper we propose a method for watermarking plenoptic images to achieve this aim. The performance of the proposed method is validated by experimental results, and a compromise is struck between imperceptibility and robustness.
Evaluation of lattice sums by the Poisson sum formula
NASA Technical Reports Server (NTRS)
Ray, R. D.
1975-01-01
The Poisson sum formula was applied to the problem of summing pairwise interactions between an observer molecule and a semi-infinite regular array of solid state molecules. The transformed sum is often much more rapidly convergent than the original sum, and forms a Fourier series in the solid surface coordinates. The method is applicable to a variety of solid state structures and functional forms of the pairwise potential. As an illustration of the method, the electric field above the (100) face of the CsCl structure is calculated and compared to earlier results obtained by direct summation.
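The transform in question is the classical Poisson summation formula, which exchanges a slowly convergent direct-space sum for a sum over the Fourier transform of the summand (stated here in its one-dimensional form; the lattice-sum application above uses its multidimensional analogue over the surface coordinates):

```latex
\sum_{n=-\infty}^{\infty} f(n)
\;=\;
\sum_{k=-\infty}^{\infty} \hat{f}(k),
\qquad
\hat{f}(k) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i k x}\, \mathrm{d}x .
```

The transformed series converges rapidly precisely when f is smooth and slowly decaying, since then its Fourier transform is sharply concentrated near k = 0.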
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Zhen, X; Zhou, L
2014-06-15
Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Methods: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of bladders segmented from fractional CT images is extracted and discretized with a triangular surface mesh. Deformation between the two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector field (DVF) characteristic of this deformation is estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM-calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4 ± 3.0 mm and 4.2 ± 1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy. This work is supported in part by the National Natural Science Foundation of China (no. 30970866 and no. 81301940).
Long-range interactions and parallel scalability in molecular simulations
NASA Astrophysics Data System (ADS)
Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko
2007-01-01
Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested the scalability on four different networks: Infiniband, Gigabit Ethernet, Fast Ethernet, and a nearly uniform memory architecture in which communication between CPUs proceeds by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
Yoneda, Shigetaka; Sugawara, Yoko; Urabe, Hisako
2005-01-27
The dynamics of crystal water molecules of guanosine dihydrate are investigated in detail by molecular dynamics (MD) simulation. A 2 ns simulation is performed using a periodic boundary box composed of 4 x 5 x 8 crystallographic unit cells and using the particle-mesh Ewald method for calculation of electrostatic energy. The simulated average atomic positions and atomic displacement parameters are remarkably coincident with the experimental values determined by X-ray analysis, confirming the high accuracy of this simulation. The dynamics of crystal water are analyzed in terms of atomic displacement parameters, orientation vectors, order parameters, self-correlation functions of the orientation vectors, time profiles of hydrogen-bonding probability, and translocations. The simulation clarifies that the average structure is composed of various stable and transient structures of the molecules. The simulated guanosine crystal forms a layered structure, with four water sites per asymmetric unit, classified as either interlayer water or intralayer water. From a detailed analysis of the translocations of water molecules in the simulation, columns of intralayer water molecules along the c axis appear to represent a pathway for hydration and dehydration by a kind of molecular valve mechanism.
CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
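The rescaling trick described above can be sketched as follows (a hypothetical Python re-implementation for illustration, not the original C program): each term is formed in log space and the largest log-term is factored out before summation, so neither the individual terms nor the partial sum can overflow or underflow:

```python
import math

def cumpois(lam, n):
    """P(X <= n) for a Poisson variate with mean lam, computed
    stably: build log-terms and factor out the maximum, analogous
    to CUMPOIS multiplying in an extra exponential factor."""
    # log of the i-th Poisson term: -lam + i*log(lam) - log(i!)
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(n + 1)]
    m = max(log_terms)                  # the rescaling factor, in log space
    partial = sum(math.exp(t - m) for t in log_terms)
    return math.exp(m) * partial        # multiply the factor back in

print(round(cumpois(2.0, 3), 4))        # modest case, ≈ 0.8571
print(cumpois(1000.0, 2000) > 0.999)    # large case that naive summation mishandles
```

Naive evaluation of `exp(-1000) * 1000**i / i!` underflows and overflows long before i reaches 2000; the log-space form stays well inside double-precision range.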
Instructor perspectives of multiple-choice questions in summative assessment for novice programmers
NASA Astrophysics Data System (ADS)
Shuhidan, Shuhaida; Hamilton, Margaret; D'Souza, Daryl
2010-09-01
Learning to program is known to be difficult for novices. High attrition and high failure rates in foundation-level programming courses undertaken at tertiary level in Computer Science programs are commonly reported. A common approach to evaluating novice programming ability is through a combination of formative and summative assessments, with the latter typically represented by a final examination. Preparation of such assessment is driven by instructor perceptions of student learning of programming concepts. This in turn may yield instructor perspectives of summative assessment that do not necessarily correlate with student expectations or abilities. In this article, we present results of our study around instructor perspectives of summative assessment for novice programmers. Both quantitative and qualitative data have been obtained via survey responses from programming instructors with varying teaching experience, and from novice student responses to targeted examination questions. Our findings highlight that most of the instructors believed that summative assessment is, and is meant to be, a valid measure of a student's ability to program. Most instructors further believed that Multiple-choice Questions (MCQs) provide a means of testing a low level of understanding; a few added qualitative comments suggesting that MCQs are easy questions, and others refused to use them at all. There was no agreement around the proposition that if a question was designed to test a low level of skill, or a low level in a hierarchy of a body of knowledge, such a question should or would be found easy by the student. To aid our analysis of assessment questions, we introduced four measures: Syntax Knowledge; Semantic Knowledge; Problem Solving Skill and the Level of Difficulty of the Problem.
We applied these measures to selected examination questions, and have identified gaps between the instructor perspectives of what is considered to be an easy question and also in what is required to be assessed to determine whether students have achieved the goals of their course.
Summation of the product of certain functions and generalized Fibonacci numbers
NASA Astrophysics Data System (ADS)
Chong, Chin-Yoon; Ang, Siew-Ling; Ho, C. K.
2014-12-01
In this paper, we derived the summation
NASA Astrophysics Data System (ADS)
Obeidat, Abdalla; Jaradat, Adnan; Hamdan, Bushra; Abu-Ghazleh, Hind
2018-04-01
The best spherical cutoff radius, long-range interaction scheme, and temperature controller were determined using the surface tension, density, and diffusion coefficients of van Leeuwen and Smit methanol. A range of cutoff radii from 0.75 to 1.45 nm was studied, with both a plain Coulomb cut-off and particle mesh Ewald (PME) long-range interactions, to determine the best cutoff radius and the best long-range interaction scheme at four temperatures: 200, 230, 270 and 300 K. To determine the best temperature controller, the cutoff radius was fixed at 1.25 nm with PME long-range interactions while calculating the above properties over the low-temperature range 200-300 K.
XRayView: a teaching aid for X-ray crystallography.
Phillips, G N
1995-10-01
A software package, XRayView, has been developed that uses interactive computer graphics to introduce basic concepts of x-ray diffraction by crystals, including the reciprocal lattice, the Ewald sphere construction, Laue cones, the wavelength dependence of the reciprocal lattice, primitive and centered lattices and systematic extinctions, rotation photography, Laue photography, space group determination and Laue group symmetry, and the alignment of crystals by examination of reciprocal space. XRayView is designed with "user-friendliness" in mind, using pull-down menus to control the program. Many of the experiences of using real x-ray diffraction equipment to examine crystalline diffraction can be simulated. Exercises are available on-line to guide users through many typical x-ray diffraction experiments.
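The Ewald sphere construction that XRayView illustrates can be checked numerically: a reciprocal-lattice point G diffracts when the tip of k_in + G lies on the sphere of radius |k_in| = 1/λ. A minimal sketch in the crystallographer's convention k = 1/λ (the vectors and tolerance below are illustrative and not part of XRayView):

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def on_ewald_sphere(g, k_in, tol=1e-6):
    """True if reciprocal-lattice point g satisfies the diffraction condition
    |k_in + g| == |k_in|, i.e. the scattered wavevector tip lies on the
    Ewald sphere of radius 1/lambda."""
    k_out = [ki + gi for ki, gi in zip(k_in, g)]
    return abs(norm(k_out) - norm(k_in)) < tol

# Example: with |k_in| = 1, g = (-1, 1, 0) scatters k_in from (1, 0, 0)
# to (0, 1, 0), which stays on the sphere; g = (1, 0, 0) does not.
```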
Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P
2014-10-30
Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner, including use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of integral calculation using X10's work-stealing runtime, and report performance results for long-range HF energy calculations of large molecules with high-quality basis sets running on up to 1024 cores of a high performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.
Yang, Yong-Qiang; Li, Xue-Bo; Shao, Ru-Yue; Lyu, Zhou; Li, Hong-Wei; Li, Gen-Ping; Xu, Lyu-Zi; Wan, Li-Hua
2016-09-01
The characteristic life stages of infesting blowflies (Calliphoridae) such as Chrysomya megacephala (Fabricius) are powerful evidence for estimating the time of death of a corpse, but an established reference of developmental times for local blowfly species is required. We determined the developmental rates of C. megacephala from southwest China at seven constant temperatures (16-34°C). Isomegalen and isomorphen diagrams were constructed based on the larval length and the time of each developmental event (first ecdysis, second ecdysis, wandering, pupariation, and eclosion) at each temperature. A thermal summation model was constructed by estimating the developmental threshold temperature D0 and the thermal summation constant K. The thermal summation model indicated that, for complete development from egg hatching to eclosion, D0 = 9.07 ± 0.54°C and K = 3991.07 ± 187.26 h °C. This reference can increase the accuracy of estimations of postmortem intervals in China by predicting the growth of C. megacephala. © 2016 American Academy of Forensic Sciences.
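The thermal summation model used here is the standard linear degree-hour relation: development requires accumulating K degree-hours above the threshold D0, so at a constant temperature T the predicted development time is K/(T − D0). A minimal sketch using the constants reported in the abstract (the example temperature is illustrative):

```python
D0 = 9.07      # developmental threshold temperature, deg C (from the abstract)
K = 3991.07    # thermal summation constant, h * deg C (from the abstract)

def development_hours(temp_c):
    """Predicted hours from egg hatching to eclosion at constant temp_c."""
    if temp_c <= D0:
        raise ValueError("no development at or below the threshold D0")
    return K / (temp_c - D0)

# At a constant 25 deg C this predicts roughly 250 h (about 10.4 days);
# warmer rearing temperatures shorten the predicted development time.
```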
Derivatives of Horn hypergeometric functions with respect to their parameters
NASA Astrophysics Data System (ADS)
Ancarani, L. U.; Del Punta, J. A.; Gasaneo, G.
2017-07-01
The derivatives of eight Horn hypergeometric functions [four Appell F1, F2, F3, and F4, and four (degenerate) confluent Φ1, Φ2, Ψ1, and Ξ1] with respect to their parameters are studied. The first derivatives are expressed, systematically, as triple infinite summations or, alternatively, as single summations of two-variable Kampé de Fériet functions. Taking advantage of previously established expressions for the derivative of the confluent or Gaussian hypergeometric functions, the generalization to the nth derivative of Horn's functions with respect to their parameters is rather straightforward in most cases; the results are expressed in terms of n + 2 infinite summations. Following a similar procedure, mixed derivatives are also treated. An illustration of the usefulness of the derivatives of F1, with respect to the first and third parameters, is given with the study of autoionization of atoms occurring as part of a post-collisional process. Their evaluation setting the Coulomb charge to zero provides the coefficients of a Born-like expansion of the interaction.
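The parameter derivatives studied here have no elementary closed form in general, but they are easy to probe numerically in the simplest case, the Gauss function 2F1, whose known parameter derivatives the paper builds on. A sketch using a truncated series and a central finite difference for ∂2F1/∂a (the series length and step size are illustrative choices, not the paper's Kampé de Fériet representation):

```python
def hyp2f1(a, b, c, z, terms=200):
    """Truncated Gauss hypergeometric series, valid for |z| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

def d_hyp2f1_da(a, b, c, z, h=1e-6):
    """Central finite difference for the derivative with respect to a."""
    return (hyp2f1(a + h, b, c, z) - hyp2f1(a - h, b, c, z)) / (2 * h)

# Sanity check: 2F1(1, 1; 1; z) = 1/(1 - z), so hyp2f1(1, 1, 1, 0.5) ~ 2;
# at z = 0 the function equals 1 for any a, so the a-derivative vanishes.
```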
Hattori, Toshiaki; Nakata, Yasuko; Kato, Ryo
2003-11-01
The biguanide concentration of polyhexamethylene biguanide hydrochloride (PHMB-HCl) was measured by non-aqueous titration with HClO4, argentometric titration, the Kjeldahl method, and colloidal titration. The summed value of non-aqueous titration and argentometric titration corresponded to two titratable nitrogens of the five nitrogens per unit of PHMB-HCl, and was consistent with the result of the Kjeldahl method for the five nitrogens. The colloidal titration of PHMB-HCl at pH 2.05 was equal to that for the two nitrogens. The relative standard deviations of non-aqueous titration, argentometric titration, the Kjeldahl method, and colloidal titration were 0.50% for 8 runs, 0.13% for 7 runs, 3.61% for 6 runs, and 0.69% for 6 runs, respectively.
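The relative standard deviations quoted for each titration method are simply the sample standard deviation expressed as a percentage of the mean. A minimal sketch (the replicate titre values below are made up for illustration, not the paper's data):

```python
import statistics

def rsd_percent(replicates):
    """Relative standard deviation (coefficient of variation) in percent."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# e.g. eight hypothetical replicate titre volumes in mL:
titres = [10.02, 10.05, 9.98, 10.01, 10.03, 9.99, 10.04, 10.00]
```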
A Qualitative Evaluation of an Online Expert-Facilitated Course on Tobacco Dependence Treatment.
Ebn Ahmady, Arezoo; Barker, Megan; Dragonetti, Rosa; Fahim, Myra; Selby, Peter
2017-01-01
Qualitative evaluations of courses prove difficult due to low response rates. Online courses may permit the analysis of qualitative feedback provided by health care providers (HCPs) during and after the course is completed. This study describes the use of qualitative methods for an online continuing medical education (CME) course through the analysis of HCP feedback for the purpose of quality improvement. We used formative and summative feedback from HCPs about their self-reported experiences of completing an online expert-facilitated course on tobacco dependence treatment (the Training Enhancement in Applied Cessation Counselling and Health [TEACH] Project). Phenomenological, inductive, and deductive approaches were applied to develop themes. QSR NVivo 11 was used to analyze the themes derived from free-text comments and responses to open-ended questions. A total of 277 out of 287 participants (96.5%) completed the course evaluations and provided 690 comments focused on how to improve the program. Five themes emerged from the formative evaluations: overall quality, content, delivery method, support, and time. The majority of comments (22.6%) in the formative evaluation expressed satisfaction with overall course quality. Suggestions for improvement were mostly for course content and delivery method (20.4% and 17.8%, respectively). Five themes emerged from the summative evaluation: feedback related to learning objectives, interprofessional collaboration, future topics of relevance, overall modifications, and overall satisfaction. Comments on course content, website function, timing, and support were the identified areas for improvement. This study provides a model to evaluate the effectiveness of online educational interventions. Significantly, this constructive approach to evaluation allows CME providers to take rapid corrective action.
Harvest prediction in `Algerie' loquat
NASA Astrophysics Data System (ADS)
Hueso, Juan J.; Pérez, Mercedes; Alonso, Francisca; Cuevas, Julián
2007-05-01
Plant phenology is largely driven by air temperature. To forecast harvest time for ‘Algerie’ loquat accurately, the growing degree days (GDD) needed from bloom to ripening were determined using data from nine seasons. The methods proposed by Zalom et al. (Zalom FG, Goodell PB, Wilson LT, Barnett WW, Bentley W, Degree-days: the calculation and use of heat units in pest management, leaflet no 21373, Division Agriculture and Natural Resources, University of California, 10 pp, 1983) were compared as regards their ability to estimate heat summation based on hourly records. All the methods gave remarkably similar results for our cultivation area, although the double-sine method performed better when temperatures were low. A base temperature of 3°C is proposed for ‘Algerie’ loquat because it yields a coefficient of variation in GDD among seasons of below 5%, and because of its compatibility with loquat growth. Based on these determinations, ‘Algerie’ loquat requires 1,715 GDD from bloom to harvest; under our conditions this heat is accumulated over an average of 159 days. Our procedure permits the ‘Algerie’ harvest date to be estimated with a mean error of 4.4 days (<3% of the bloom-harvest period). GDD summation did not prove superior to the number of calendar days for predicting the ‘Algerie’ harvest under non-limiting growing conditions. However, GDD reflects the developmental rate in water-stressed trees better than calendar days do. Trees under deficit irrigation during flower development required more time and more heat to ripen their fruits.
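Growing degree days accumulate the daily mean temperature above a base temperature. A minimal sketch of the simple-average variant with the 3 °C base proposed for ‘Algerie’ loquat (the daily temperatures are illustrative; the paper's preferred double-sine method interpolates within the day instead of using the daily mean):

```python
BASE_TEMP = 3.0  # deg C, base temperature proposed for 'Algerie' loquat

def daily_gdd(t_max, t_min, base=BASE_TEMP):
    """Degree days contributed by one day, simple-average method."""
    return max(0.0, (t_max + t_min) / 2.0 - base)

def season_gdd(days, base=BASE_TEMP):
    """Accumulate GDD over (t_max, t_min) pairs; harvest would be forecast
    when the running total reaches the bloom-to-harvest requirement
    (1715 GDD in the abstract)."""
    return sum(daily_gdd(t_max, t_min, base) for t_max, t_min in days)
```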
Audio-visual speech cue combination.
Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick
2010-04-16
Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
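The benchmark the authors compare against, the Bayesian maximum likelihood estimate, has a simple closed form: two independent estimates with variances σa² and σv² are combined with inverse-variance weights, giving a combined variance σa²σv²/(σa² + σv²). A minimal sketch (the example values are illustrative, not the study's data):

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Inverse-variance weighted (maximum likelihood) cue combination.
    Returns the combined estimate and its variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    combined = w_a * est_a + (1.0 - w_a) * est_v
    combined_var = (var_a * var_v) / (var_a + var_v)
    return combined, combined_var

# Equal reliabilities halve the variance; unequal reliabilities weight the
# more reliable cue, and the combined variance never exceeds the smaller one.
# The abstract's point is that observed audio-visual sensitivity beats even
# this bound, implying a common physiological code rather than late fusion.
```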
Dipole and quadrupole synthesis of electric potential fields. M.S. Thesis
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1979-01-01
A general technique for expanding an unknown potential field in terms of a linear summation of weighted dipole or quadrupole fields is described. Computational methods were developed for the iterative addition of dipole fields. Various solution potentials were compared inside the boundary with a more precise calculation of the potential to derive optimal schemes for locating the singularities of the dipole fields. Then, the problem of determining solutions to Laplace's equation on an unbounded domain as constrained by pertinent electron trajectory data was considered.
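The expansion described, a linear summation of weighted dipole fields, can be sketched directly from the point-dipole potential V(r) = p·r̂ / (4πε₀ r²). A minimal sketch (the weights, moments, and singularity locations are illustrative; the thesis determines them iteratively against boundary data):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def dipole_potential(p_vec, src, field_pt):
    """Potential of a point dipole p_vec (C*m) located at src, at field_pt."""
    r = [f - s for f, s in zip(field_pt, src)]
    rmag = math.sqrt(sum(c * c for c in r))
    pdotr = sum(pc * rc for pc, rc in zip(p_vec, r))
    return pdotr / (4 * math.pi * EPS0 * rmag ** 3)

def synthesized_potential(weights, moments, positions, field_pt):
    """Linear summation of weighted dipole fields approximating an
    unknown potential, as in the expansion described in the abstract."""
    return sum(w * dipole_potential(p, s, field_pt)
               for w, p, s in zip(weights, moments, positions))
```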
Falling coupled oscillators and trigonometric sums
NASA Astrophysics Data System (ADS)
Holcombe, S. R.
2018-02-01
A method for evaluating finite trigonometric summations is applied to a system of N coupled oscillators under acceleration. Initial motion of the nth particle is shown to be of the order T^{2n+2} for small time T, and the end particle in the continuum limit is shown to initially remain stationary for the time it takes a wavefront to reach it. The average velocities of particles at the ends of the system are shown to take discrete values in a step-like manner.
Computation of the radiation amplitude of oscillons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fodor, Gyula; Forgacs, Peter; LMPT, CNRS-UMR 6083, Universite de Tours, Parc de Grandmont, 37200 Tours
2009-03-15
The radiation loss of small-amplitude oscillons (very long-living, spatially localized, time-dependent solutions) in one-dimensional scalar field theories is computed in the small-amplitude expansion analytically using matched asymptotic series expansions and Borel summation. The amplitude of the radiation is beyond all orders in perturbation theory and the method used has been developed by Segur and Kruskal in Phys. Rev. Lett. 58, 747 (1987). Our results are in good agreement with those of long-time numerical simulations of oscillons.
Migration of dispersive GPR data
Powers, M.H.; Oden, C.P.; ,
2004-01-01
Electrical conductivity and dielectric and magnetic relaxation phenomena cause electromagnetic propagation to be dispersive in earth materials. Both velocity and attenuation may vary with frequency, depending on the frequency content of the propagating energy and the nature of the relaxation phenomena. A minor amount of velocity dispersion is associated with high attenuation. For this reason, measuring effects of velocity dispersion in ground penetrating radar (GPR) data is difficult. With a dispersive forward model, GPR responses to propagation through materials with known frequency-dependent properties have been created. These responses are used as test data for migration algorithms that have been modified to handle specific aspects of dispersive media. When either Stolt or Gazdag migration methods are modified to correct for just velocity dispersion, the results are little changed from standard migration. For nondispersive propagating wavefield data, like deep seismic, ensuring correct phase summation in a migration algorithm is more important than correctly handling amplitude. However, the results of migrating model responses to dispersive media with modified algorithms indicate that, in this case, correcting for frequency-dependent amplitude loss has a much greater effect on the result than correcting for proper phase summation. A modified migration is only effective when it includes attenuation recovery, performing deconvolution and migration simultaneously.
NASA Astrophysics Data System (ADS)
Neis, P. D.; Ferreira, N. F.; Poletto, J. C.; Matozo, L. T.; Masotti, D.
2016-05-01
This paper describes the development of a methodology for assessing and correlating stick-slip and brake creep groan. To do so, results of tribotests are compared with data obtained in vehicle tests. A low velocity and a linear reduction in normal force were set for the tribotests. The vehicle tests consisted of subjecting a sport utility vehicle to three different ramp slopes. Creep groan events were measured by accelerometers placed on the brake calipers. The root mean square of the acceleration signal (the QRMS parameter) was shown to measure the creep groan severity resulting from the vehicle tests. Differences in QRMS were observed between front-rear and left-right wheels for all tested materials. Frequency spectrum analysis of the acceleration revealed that the wheel side and material type do not cause any significant shift in the creep groan frequency. QRMS measured in the vehicle tests presented good correlation with the slip power (SP) summation. For this reason, SP summation may represent the "creep groan propensity" of brake materials. Thus, the proposed tribotest method can be utilized to predict the creep groan severity of brake materials in service.
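The QRMS severity metric is the root mean square of the sampled acceleration signal. A minimal sketch (the sample values are illustrative):

```python
import math

def rms(signal):
    """Root mean square of a sampled signal: sqrt(mean of squares).
    Applied to caliper acceleration, this is the paper's QRMS parameter."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))
```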
StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.
2018-05-01
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
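Direct summation evaluates each particle's gravitational acceleration against every other particle, O(N²) pairwise work, which is what StarSmasher offloads to NVIDIA GPUs in exchange for accuracy over tree-code speed. A pure-Python sketch with Plummer softening (the gravitational constant, softening length, and particle data are illustrative; this is not StarSmasher's GPU kernel):

```python
def direct_gravity(masses, positions, eps=1.0e-2, G=1.0):
    """O(N^2) direct-summation gravitational accelerations with softening
    eps, which prevents the force from diverging at small separations."""
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps ** 2
            inv_r3 = G * masses[j] / r2 ** 1.5
            for k in range(3):
                acc[i][k] += inv_r3 * dx[k]
    return acc
```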
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is required to understand the function of, and the learning process in, a biological neural network, which operates on exact spike timings. Fundamentally, the prediction of firing statistics is a delicate many-body problem, because the firing probability of a neuron at a given time is determined by the summation of all effects from past firing states. A neural network model based on the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
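The "summation over all effects from past firing states" can be illustrated with a generic spike-response sketch: past spikes contribute through a decaying kernel, and a nonlinearity maps the summed drive to a firing probability. This is a textbook-style toy, not the paper's path-integral model; the kernel shape, weight, and bias are all illustrative assumptions:

```python
import math

def firing_prob(past_spike_times, t, tau=20.0, w=1.5, bias=-1.0):
    """Instantaneous firing probability from the summed effect of past
    spikes (exponential kernel, sigmoid nonlinearity). All parameters
    are illustrative, not taken from the paper."""
    drive = sum(w * math.exp(-(t - s) / tau)
                for s in past_spike_times if s < t)
    return 1.0 / (1.0 + math.exp(-(drive + bias)))
```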
Comparing strategies to assess multiple behavior change in behavioral intervention studies.
Drake, Bettina F; Quintiliani, Lisa M; Sapp, Amy L; Li, Yi; Harley, Amy E; Emmons, Karen M; Sorensen, Glorian
2013-03-01
Alternatives to individual behavior change methods have been proposed; however, little has been done to investigate how these methods compare. Our aim was to explore four methods that quantify change in multiple risk behaviors, targeting four common behaviors. We utilized data from two cluster-randomized, multiple behavior change trials conducted in two settings: small businesses and health centers. The methods used were: (1) summative; (2) z-score; (3) optimal linear combination; and (4) impact score. In the Small Business study, methods 2 and 3 revealed similar outcomes; however, physical activity did not contribute to method 3. In the Health Centers study, similar results were found with each of the methods. Multivitamin intake contributed significantly more to each of the summary measures than the other behaviors. Selection of methods to assess multiple behavior change in intervention trials must consider the study design and the targeted population when determining the appropriate method(s) to use.
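Of the four methods compared, the z-score approach is the easiest to make concrete: each behavior is standardized across the sample and the standardized values are summed per participant. A minimal sketch (the data layout and values are illustrative, not the trials' data):

```python
import statistics

def zscore_composite(behavior_columns):
    """Sum of per-behavior z-scores for each participant.
    behavior_columns: list of behaviors, each a list of raw scores with one
    entry per participant (all lists the same length)."""
    n = len(behavior_columns[0])
    composites = [0.0] * n
    for column in behavior_columns:
        mu = statistics.mean(column)
        sd = statistics.stdev(column)
        for i, x in enumerate(column):
            composites[i] += (x - mu) / sd
    return composites
```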
A Phase Field Model of Deformation Twinning: Nonlinear Theory and Numerical Simulations
2011-03-01
the system. Concepts for modeling multiphase systems were advanced by Steinbach et al. [3] and Steinbach and Apel [4]. Fried and Gurtin [5] and ... a_3 b_3 in a three-dimensional vector space. The outer product is (a ⊗ b)_{AB} = a_A b_B. Juxtaposition implies summation over one set of adjacent indices; e.g., (AB)_{AB} = A_{AC} B_{CB}. The colon denotes summation over two sets of indices; e.g., A : B = A_{AB} B_{AB} and (C : E)_{AB} = C_{ABCD} E_{CD}. The transpose of a matrix is
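The index conventions in this excerpt, outer product, single-index (juxtaposition) summation, and double-index (colon) summation, can be verified with small nested-loop implementations (pure Python with illustrative 2×2 data):

```python
def outer(a, b):
    """(a ⊗ b)_AB = a_A b_B"""
    return [[ai * bj for bj in b] for ai in a]

def juxtapose(A, B):
    """(AB)_AB = A_AC B_CB: summation over one set of adjacent indices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def colon(A, B):
    """A : B = A_AB B_AB: summation over two sets of indices."""
    return sum(A[i][j] * B[i][j]
               for i in range(len(A)) for j in range(len(A[0])))
```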
Visual response time to colored stimuli in peripheral retina - Evidence for binocular summation
NASA Technical Reports Server (NTRS)
Haines, R. F.
1977-01-01
Simple onset response time (RT) experiments, previously shown to exhibit binocular summation effects for white stimuli along the horizontal meridian, were performed for red and green stimuli along 5 oblique meridians. Binocular RT was significantly shorter than monocular RT for a 45-min-diameter spot of red, green, or white light within eccentricities of about 50 deg from the fovea. Relatively large meridian differences were noted that appear to be due to the degree to which the images fall on corresponding retinal areas.
Las Palmeras Molecular Dynamics: A flexible and modular molecular dynamics code
NASA Astrophysics Data System (ADS)
Davis, Sergio; Loyola, Claudia; González, Felipe; Peralta, Joaquín
2010-12-01
Las Palmeras Molecular Dynamics (LPMD) is a highly modular and extensible molecular dynamics (MD) code using interatomic potential functions. LPMD is able to perform equilibrium MD simulations of bulk crystalline solids, amorphous solids and liquids, as well as non-equilibrium MD (NEMD) simulations such as shock wave propagation, projectile impacts, cluster collisions, shearing, deformation under load, heat conduction, heterogeneous melting, among others, which involve unusual MD features like non-moving atoms and walls, unstoppable atoms with constant-velocity, and external forces like electric fields. LPMD is written in C++ as a compromise between efficiency and clarity of design, and its architecture is based on separate components or plug-ins, implemented as modules which are loaded on demand at runtime. The advantage of this architecture is the ability to completely link together the desired components involved in the simulation in different ways at runtime, using a user-friendly control file language which describes the simulation work-flow. As an added bonus, the plug-in API (Application Programming Interface) makes it possible to use the LPMD components to analyze data coming from other simulation packages, convert between input file formats, apply different transformations to saved MD atomic trajectories, and visualize dynamical processes either in real-time or as a post-processing step. Individual components, such as a new potential function, a new integrator, a new file format, new properties to calculate, new real-time visualizers, and even a new algorithm for handling neighbor lists can be easily coded, compiled and tested within LPMD by virtue of its object-oriented API, without the need to modify the rest of the code. LPMD includes already several pair potential functions such as Lennard-Jones, Morse, Buckingham, MCY and the harmonic potential, as well as embedded-atom model (EAM) functions such as the Sutton-Chen and Gupta potentials. 
Integrators to choose from include Euler (if only for demonstration purposes), Verlet and Velocity Verlet, Leapfrog and Beeman, among others. Electrostatic forces are treated as another potential function, by default using the plug-in implementing the Ewald summation method. Program summary: Program title: LPMD. Catalogue identifier: AEHG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 509 490. No. of bytes in distributed program, including test data, etc.: 6 814 754. Distribution format: tar.gz. Programming language: C++. Computer: 32-bit and 64-bit workstations. Operating system: UNIX. RAM: minimum 1024 bytes. Classification: 7.7. External routines: zlib, OpenGL. Nature of problem: study of the statistical mechanics and thermodynamics of condensed matter systems, as well as the kinetics of non-equilibrium processes in the same systems. Solution method: equilibrium and non-equilibrium molecular dynamics methods, Monte Carlo methods. Restrictions: rigid molecules are not supported, nor are polarizable atoms or chemical bonds (proteins). Unusual features: the program is able to change the temperature of the simulation cell, the pressure, cut regions of the cell, and color the atoms by properties, even during the simulation. It is also possible to fix the positions and/or velocities of groups of atoms, and to visualize atoms and some physical properties during the simulation. Additional comments: the program does not only perform molecular dynamics and Monte Carlo simulations; it is also able to filter and manipulate atomic configurations, read and write different file formats, convert between them, and evaluate different structural and dynamical properties. Running time: 50 seconds for a 1000-step simulation of 4000 argon atoms, running on a single 2.67 GHz Intel processor.
Reliability analysis of the objective structured clinical examination using generalizability theory.
Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián
2016-01-01
The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. An observational cross-sectional study was conducted at the National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the test had minimal variance. Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
Drake-Lee, A B; Skinner, D; Hawthorne, M; Clarke, R
2009-10-01
'High stakes' postgraduate medical examinations should conform to current educational standards. In the UK and Ireland, national assessments in surgery are devised and managed through the examination structure of the Royal Colleges of Surgeons. Their efforts are not reported in the medical education literature. In the current paper, we aim to clarify this process. The aim was to replace the clinical section of the Diploma of Otorhinolaryngology with an Objective Structured Clinical Examination, and to set the level of the assessment at one year of postgraduate training in the specialty. After 'blueprinting' against the whole curriculum, an Objective Structured Clinical Examination comprising 25 stations was divided into six clinical stations and 19 other stations exploring written case histories, instruments, test results, written communication skills and interpretation skills. The pass mark was set using a modified borderline method and other methods, and statistical analysis of the results was performed. The results of nine examinations between May 2004 and May 2008 are presented. The pass mark varied between 68 and 82 per cent. Internal consistency was good, with a Cronbach's alpha value of 0.99 for all examinations and split-half statistics varying from 0.96 to 0.99. Different standard-setting methods gave similar pass marks. We have developed a summative Objective Structured Clinical Examination for doctors training in otorhinolaryngology, reported herein. The objectives and standards of setting a high quality assessment were met.
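The internal-consistency statistic reported here, Cronbach's alpha, is computed from the item (station) variances and the variance of candidates' total scores: alpha = k/(k−1) · (1 − Σσᵢ²/σ_total²). A minimal sketch (the station scores are illustrative, not the examination's data):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.
    item_scores: list of items (e.g. OSCE stations), each a list of
    candidate scores in the same candidate order."""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var = sum(statistics.variance(scores) for scores in item_scores)
    return (k / (k - 1)) * (1.0 - item_var / statistics.variance(totals))
```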
A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.
Jayachandran, V; Bonilha, M W
2003-03-01
This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.
Burden of disease attributed to ambient air pollution in Thailand: A GIS-based approach
Pinichka, Chayut; Makka, Nuttapat; Sukkumnoed, Decharut; Chariyalertsak, Suwat; Inchai, Puchong
2017-01-01
Background Growing urbanisation and a population requiring increased electricity generation, together with rising fossil fuel consumption in Thailand, pose important challenges to air quality management, which in turn affects population health. Mortality attributed to ambient air pollution is one of the sustainable development goal (SDG) indicators. We estimated the spatial pattern of the mortality burden attributable to selected ambient air pollutants in 2009, based on empirical evidence in Thailand. Methods We estimated the burden of disease attributable to ambient air pollution based on the comparative risk assessment (CRA) framework developed by the World Health Organization (WHO) and the Global Burden of Disease study (GBD). We integrated geographical information systems (GIS)-based exposure assessments into spatial interpolation models to estimate ambient air pollutant concentrations, the population distribution of exposure and the concentration-response (CR) relationship, in order to quantify ambient air pollution exposure and associated mortality. We obtained air quality data from the surface air pollution monitoring network of the Pollution Control Department (PCD) of Thailand and estimated the CR relationship between relative risk (RR) and the concentration of air pollutants from the epidemiological literature. Results We estimated 650–38,410 ambient air pollution-related fatalities and 160–5,982 fatalities that could have been avoided with a 20% reduction in ambient air pollutant concentrations. The summations of the population-attributable fraction (PAF) of the disease burden for all-cause mortality in adults due to NO2 and PM2.5 were the highest among all air pollutants, at 10% and 7.5%, respectively.
The PAF summations of PM2.5 for lung cancer and cardiovascular disease were 16.8% and 14.6%, respectively, and the PAF summations of mortality attributable to PM10 were 3.4% for all-cause mortality, 1.7% for respiratory mortality and 3.8% for cardiovascular mortality, while the PAF summation of mortality attributable to NO2 was 7.8% for respiratory mortality in Thailand. Conclusion Mortality due to ambient air pollution in Thailand varies across the country. Geographical distribution estimates can identify high-exposure areas for planners and policy-makers. Our results suggest that a 20% reduction in ambient air pollution concentrations could prevent up to 25% of avoidable fatalities each year in the all-cause, respiratory and cardiovascular categories. Furthermore, our findings can provide guidance for future epidemiological investigations and policy decisions to achieve the SDGs. PMID:29267319
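The attributable-mortality estimates above follow the standard comparative risk assessment arithmetic: a log-linear concentration-response function gives a relative risk, the relative risk yields a population attributable fraction, and the PAF scales observed deaths. A hedged sketch of that chain (the RR-per-10-µg/m³ value, counterfactual threshold, and death count below are illustrative numbers, not the study's figures):

```python
def relative_risk(conc, rr_per_10, threshold=0.0):
    """RR at concentration conc (ug/m3) from a log-linear CR function,
    with rr_per_10 the relative risk per 10 ug/m3 above the threshold."""
    excess = max(conc - threshold, 0.0)
    return rr_per_10 ** (excess / 10.0)

def attributable_fraction(rr):
    """PAF when the whole population is exposed: (RR - 1) / RR."""
    return (rr - 1.0) / rr

def attributable_deaths(conc, rr_per_10, deaths, threshold=0.0):
    """Deaths attributable to exposure at concentration conc."""
    rr = relative_risk(conc, rr_per_10, threshold)
    return attributable_fraction(rr) * deaths
```

For example, with an assumed RR of 1.06 per 10 µg/m³, a 5 µg/m³ counterfactual, and 1,000 observed deaths, an annual mean of 35 µg/m³ would attribute roughly 160 deaths to the exposure; summing such PAF-weighted deaths over grid cells gives the spatial burden maps described above.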
Assessment of polychlorinated biphenyls and polybrominated diphenyl ethers in Tibetan butter.
Wang, Yawei; Yang, Ruiqiang; Wang, Thanh; Zhang, Qinghua; Li, Yingming; Jiang, Guibin
2010-02-01
The Tibetan plateau is considered a potential cold trap for persistent organic pollutants (POPs) and plays an important role in the global long-range transport of these compounds. The present work surveyed the concentrations of polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs) in Tibetan butter samples collected from different prefectures in the Tibet autonomous region (TAR). Σ(25)PCB concentrations ranged from 137 to 2518 pg g(-1) with a mean value of 519 pg g(-1), far lower than those reported for butter from other regions of the world. The highest level was found in butter from Sichuan province, located to the east of the Tibetan plateau, and the lowest value was in samples from southeast TAR. The average Σ(12)PBDE concentration was 125 pg g(-1). The samples with the highest and lowest Σ(12)PBDE concentrations (955 and 18.0 pg g(-1)) were from the southern and southeastern parts of the plateau, respectively. Back-trajectory modelling implied that these two groups of POPs reached the southern plateau by atmospheric deposition, whereas the western plateau was mainly influenced by the tropical monsoon from South Asia. Air currents from Sichuan and Gansu provinces are further responsible for the atmospheric transport of PCBs and PBDEs to the eastern and northern sides of the plateau. Local air concentrations of Σ(5)PCBs, predicted using an air-milk transfer factor, were at the lower end of published global levels. Copyright 2009 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagerwaard, Frank J.; Hoorn, Elles A.P. van der; Verbakel, Wilko
2009-09-01
Purpose: Volumetric modulated arc therapy (RapidArc [RA]; Varian Medical Systems, Palo Alto, CA) allows for the generation of intensity-modulated dose distributions by use of a single gantry rotation. We used RA to plan and deliver whole-brain radiotherapy (WBRT) with a simultaneous integrated boost in patients with multiple brain metastases. Methods and Materials: Composite RA plans were generated for 8 patients, consisting of WBRT (20 Gy in 5 fractions) with an integrated boost, also 20 Gy in 5 fractions, to brain metastases, and clinically delivered in 3 patients. Summated gross tumor volumes were 1.0 to 37.5 cm³. RA plans were measured in a solid water phantom by use of Gafchromic films (International Specialty Products, Wayne, NJ). Results: Composite RA plans could be generated within 1 hour. Two arcs were needed to deliver the mean of 1,600 monitor units, with a mean 'beam-on' time of 180 seconds. RA plans showed excellent coverage of the planning target volume for WBRT and the planning target volume for the boost, with mean volumes receiving at least 95% of the prescribed dose of 100% and 99.8%, respectively. The mean conformity index was 1.36. Composite plans showed much steeper dose gradients outside brain metastases than plans with a conventional summation of WBRT and radiosurgery. Comparison of calculated and measured doses showed a mean gamma for double-arc plans of 0.30, and the area with a gamma larger than 1 was 2%. In-room times for clinical RA sessions were approximately 20 minutes for each patient. Conclusions: RA treatment planning and delivery of integrated plans of WBRT and boosts to multiple brain metastases is a rapid and accurate technique that has a higher conformity index than conventional summation of WBRT and radiosurgery boost.
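The calculated-versus-measured comparison above uses the gamma index, which merges a dose-difference tolerance and a distance-to-agreement (DTA) tolerance into a single pass/fail metric; gamma ≤ 1 means the point agrees within the combined criteria. A minimal 1D sketch of a global gamma evaluation (3%/3 mm criteria assumed for illustration; clinical implementations work on 2D/3D dose grids with interpolation):

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta=3.0, dd=0.03):
    """Global 1D gamma index. For each reference point, minimise the
    combined (dose difference / distance) metric over all evaluated points.

    dta : distance-to-agreement criterion in the same units as positions (mm)
    dd  : dose-difference criterion as a fraction of the global maximum dose
    """
    d_max = ref_dose.max()                      # global normalisation
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        dist2 = ((eval_pos - rp) / dta) ** 2
        dose2 = ((eval_dose - rd) / (dd * d_max)) ** 2
        gammas.append(np.sqrt((dist2 + dose2).min()))
    return np.array(gammas)
```

A mean gamma of 0.30 with only 2% of the area exceeding gamma = 1, as reported, indicates that nearly all measured points fall comfortably inside the combined tolerance.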
Dual stage beamforming in the absence of front-end receive focusing
NASA Astrophysics Data System (ADS)
Bera, Deep; Bosch, Johan G.; Verweij, Martin D.; de Jong, Nico; Vos, Hendrik J.
2017-08-01
Ultrasound front-end receive designs for miniature, wireless, and/or matrix transducers can be simplified considerably by direct-element summation in receive. In this paper we develop a dual-stage beamforming technique that is able to produce a high-quality image from scanlines that are produced with focused transmit and simple summation in receive (no delays). We call this non-delayed sequential beamforming (NDSB). In the first stage, low-resolution RF scanlines are formed by simple summation of element signals from a running sub-aperture. In the second stage, delay-and-sum beamforming is performed in which the delays are calculated considering the transmit focal points as virtual sources emitting spherical waves, and the sub-apertures as large unfocused receive elements. The NDSB method is validated with simulations in Field II. For experimental validation, RF channel data were acquired with a commercial research scanner using a 5 MHz linear array, and were subsequently processed offline. For NDSB, good average lateral resolution (0.99 mm) and low grating lobe levels (<-40 dB) were achieved by choosing a transmit F-number of 0.75 and the transmit focus at 15 mm. NDSB was compared with conventional dynamic receive focusing (DRF) and synthetic aperture sequential beamforming (SASB), each with its own optimal settings. The full width at half maximum of the NDSB point spread function was on average 20% smaller than that of DRF, except at depths <30 mm, and 10% larger than that of SASB considering all depths. NDSB showed only a minor degradation in contrast-to-noise ratio and contrast ratio compared to DRF and SASB when measured on an anechoic cyst embedded in a tissue-mimicking phantom. In conclusion, using simple receive front-end electronics, NDSB can attain an image quality better than DRF and slightly inferior to SASB.
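The second-stage delays described above treat each transmit focus as a virtual point source: the two-way time to an image point is the time to the focus plus (below the focus) or minus (above it) the spherical propagation time from the virtual source. A simplified sketch of such a second-stage delay-and-sum over first-stage scanlines (the geometry, sampling rate, and function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dual_stage_beamform(scanlines, line_x, img_x, img_z, focus_z, fs, c=1540.0):
    """Second-stage DAS value at image point (img_x, img_z).

    scanlines : list of 1D RF arrays, one per first-stage scanline
    line_x    : lateral position of each scanline's virtual source (m)
    focus_z   : transmit focal depth = virtual source depth (m)
    fs        : RF sampling rate (Hz)
    """
    value = 0.0
    for sl, xl in zip(scanlines, line_x):
        # distance from the virtual source to the image point
        r = np.sqrt((img_x - xl) ** 2 + (img_z - focus_z) ** 2)
        # two-way time: out to the focus, then +/- the spherical leg
        t = 2.0 * (focus_z + np.sign(img_z - focus_z) * r) / c
        idx = int(round(t * fs))
        if 0 <= idx < len(sl):
            value += sl[idx]        # apodization omitted for brevity
    return value
```

In practice an apodization window restricts each image point to the scanlines whose virtual-source cones contain it; the uniform sum here is the bare minimum needed to show the delay geometry.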
Cue combination in a combined feature contrast detection and figure identification task.
Meinhardt, Günter; Persike, Malte; Mesenholl, Björn; Hagemann, Cordula
2006-11-01
Target figures defined by feature contrast in spatial frequency, orientation or both cues had to be detected in Gabor random fields, and their shape had to be identified, in a dual task paradigm. Performance improved with increasing feature contrast and was strongly correlated between the two tasks. Subjects performed significantly better with combined cues than with single cues. The improvement due to cue summation was stronger than predicted by the assumption of independent feature-specific mechanisms, and increased with the performance level achieved with single cues until it was limited by ceiling effects. Further, cue summation was also strongly correlated across tasks: when there was benefit due to the additional cue in feature contrast detection, there was also benefit in figure identification. For the same performance level achieved with single cues, cue summation was generally larger in figure identification than in feature contrast detection, indicating more benefit when processes of shape and surface formation are involved. Our results suggest that cue combination improves spatial form completion and figure-ground segregation in noisy environments, and therefore leads to more stable object vision.
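The "independent feature-specific mechanisms" benchmark against which the observed summation is compared is commonly a probability-summation prediction: two independent channels, each detecting with its own probability, combined and corrected for guessing. A minimal sketch of that baseline (the guessing-rate correction shown is one common convention, not necessarily the one used in the study):

```python
def probability_summation(p1, p2, guess=0.5):
    """Predicted proportion correct for two independent cues.

    p1, p2 : single-cue proportions correct (each >= guess)
    guess  : chance performance level (0.5 for a two-alternative task)
    """
    # convert observed proportions to true detection rates
    q1 = (p1 - guess) / (1.0 - guess)
    q2 = (p2 - guess) / (1.0 - guess)
    # independent channels: detect if either channel detects
    q = 1.0 - (1.0 - q1) * (1.0 - q2)
    # convert back to proportion correct
    return guess + (1.0 - guess) * q
```

Observed combined-cue performance exceeding this prediction, as reported above, is the signature of summation stronger than independent mechanisms allow.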