Hamiltonian lattice field theory: Computer calculations using variational methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zako, Robert L.
1991-12-03
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
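For orientation, Temple's formula (the bound that Kato generalized) pairs with the Rayleigh-Ritz principle to bracket the ground-state energy; this is the standard statement of the bound, not a formula quoted from the thesis:

E_0 \;\ge\; \langle H \rangle \;-\; \frac{\langle H^2 \rangle - \langle H \rangle^2}{E_1 - \langle H \rangle}, \qquad \text{valid for a normalized trial state } \psi \text{ with } \langle H \rangle < E_1,

where E_1 is the first excited energy (or any rigorous lower bound on it) and \langle H^n \rangle = \langle \psi | H^n | \psi \rangle. Together with the Rayleigh-Ritz upper bound E_0 \le \langle H \rangle, this brackets the eigenvalue and hence bounds the error of the variational estimate.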
First order error corrections in common introductory physics experiments
NASA Astrophysics Data System (ADS)
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is given to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.
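As one concrete example of such a refined model (my illustration; the abstract does not specify which experiments were studied), the period of a simple pendulum carries a first-order amplitude correction that the usual small-angle formula omits:

T \;\approx\; 2\pi\sqrt{\frac{L}{g}}\left(1 + \frac{\theta_0^2}{16}\right),

where \theta_0 is the release amplitude in radians. At \theta_0 = 20° the correction is about 0.8%, a systematic effect that students typically fold into "experimental error" rather than into the model.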
Modification of Poisson Distribution in Radioactive Particle Counting.
ERIC Educational Resources Information Center
Drotter, Michael T.
This paper focuses on radioactive particle counting statistics in laboratory and field applications, intended to aid the Health Physics technician's understanding of the effect of indeterminate errors on radioactive particle counting. It indicates that although the statistical analysis of radioactive disintegration is best described by a Poisson…
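A minimal sketch of the unmodified Poisson statistics the paper starts from (standard results only; the paper's proposed modification is not reproduced here): for N recorded counts, the estimated uncertainty is sqrt(N), so the relative error falls as 1/sqrt(N).

import numpy as np

rng = np.random.default_rng(0)
true_rate = 50.0                          # mean counts per counting interval
counts = rng.poisson(true_rate, size=10_000)

# For a Poisson process the variance equals the mean, so sigma ~ sqrt(N).
print(counts.mean(), counts.var())        # both close to 50
n = counts[0]
print(f"single measurement: {n} +/- {np.sqrt(n):.1f} counts "
      f"({100 / np.sqrt(n):.1f}% relative error)")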
Possibilities: A framework for modeling students' deductive reasoning in physics
NASA Astrophysics Data System (ADS)
Gaffney, Jonathan David Housley
Students often make errors when trying to solve qualitative or conceptual physics problems, and while many successful instructional interventions have been generated to prevent such errors, the process of deduction that students use when solving physics problems has not been thoroughly studied. In an effort to better understand that reasoning process, I have developed a new framework, which is based on the mental models framework in psychology championed by P. N. Johnson-Laird. My new framework models how students search possibility space when thinking about conceptual physics problems and suggests that errors arise from failing to flesh out all possibilities. It further suggests that instructional interventions should focus on making apparent those possibilities, as well as all physical consequences those possibilities would incur. The possibilities framework emerged from the analysis of data from a unique research project specifically invented for the purpose of understanding how students use deductive reasoning. In the selection task, participants were given a physics problem along with three written possible solutions with the goal of identifying which one of the three possible solutions was correct. Each participant was also asked to identify the errors in the incorrect solutions. For the study presented in this dissertation, participants not only performed the selection task individually on four problems, but they were also placed into groups of two or three and asked to discuss with each other the reasoning they used in making their choices and attempt to reach a consensus about which solution was correct. Finally, those groups were asked to work together to perform the selection task on three new problems. The possibilities framework appropriately models the reasoning that students use, and it makes useful predictions about potentially helpful instructional interventions. The study reported in this dissertation emphasizes the useful insight the possibilities framework provides. For example, this framework allows us to detect subtle differences in students' reasoning errors, even when those errors result in the same final answer. It also illuminates how simply mentioning overlooked quantities can instigate new lines of student reasoning. It allows us to better understand how well-known psychological biases, such as the belief bias, affect the reasoning process by preventing reasoners from fleshing out all of the possibilities. The possibilities framework also allows us to track student discussions about physics, revealing the need for all parties in communication to use the same set of possibilities in the conversations to facilitate successful understanding. The framework also suggests some of the influences that affect how reasoners choose between possible solutions to a given problem. This new framework for understanding how students reason when solving conceptual physics problems opens the door to a significant field of research. The framework itself needs to be further tested and developed, but it provides substantial suggestions for instructional interventions. If we hope to improve student reasoning in physics, the possibilities framework suggests that we are perhaps best served by teaching students how to fully flesh out the possibilities in every situation. This implies that we need to ensure students have a deep understanding of all of the implied possibilities afforded by the fundamental principles that are the cornerstones of the models we teach in physics classes.
NASA Astrophysics Data System (ADS)
Kustusch, Mary Bridget
2011-12-01
Students in introductory physics struggle with vector algebra and with cross product direction in particular. Some have suggested that this may be due to misapplied right-hand rules, but there are few studies that have had the resolution to explore this. Additionally, previous research on student understanding has noted several kinds of representation-dependence of student performance with vector algebra in both physics and non-physics (or math) contexts (e.g. Hawkins et al., 2009; Van Deventer, 2008). Yet with few exceptions (e.g. Scaife and Heckler, 2010), these findings have not been applied to cross product direction questions or the use of right-hand rules. Also, the extensive work in spatial cognition is particularly applicable to cross product direction due to the spatial and kinesthetic nature of the right-hand rule. A synthesis of the literature from these various fields reveals four categories of problem features likely to impact the understanding of cross product direction: (1) the type of reasoning required, (2) the orientation of the vectors, (3) the need for parallel transport, and (4) the physics context and features (or lack thereof). These features formed the basis of the present effort to systematically explore the context-dependence and representation-dependence of student performance on cross product direction questions. This study used a mix of qualitative and quantitative techniques to analyze twenty-seven individual think-aloud interviews. During these interviews, second-semester introductory physics students answered 80-100 cross product direction questions in different contexts and with varying problem features. These features were then used as the predictors in regression analyses for correctness and response time. In addition, each problem was coded for the methods used and the errors made to gain a deeper understanding of student behavior and the impact of these features. The results revealed a wide variety of methods (including six different right-hand rules), many different types of errors, and significant context-dependence and representation-dependence for the features mentioned above. Problems that required reasoning backward to find A (for C = A × B) presented the biggest challenge for students. Participants who recognized the non-commutativity of the cross product would often reverse the order (B × A) on these problems. Also, this error occurred less frequently when a Guess and Check method was used in addition to the right-hand rule. Three different aspects of orientation had a significant impact on performance: (1) the physical discomfort of using a right-hand rule, (2) the plane of the given vectors, and to a lesser extent, (3) the angle between the vectors. One participant was more likely to switch the order of the vectors for the physically awkward orientations than for the physically easy orientations; and there was evidence that some of the difficulty with vector orientations that were not in the xy-plane was due to misinterpretations of the into-the-page and out-of-the-page symbols. The impact of both physical discomfort and the plane of the vectors was reduced when participants rotated the paper. Unlike other problem features, the issue of parallel transport did not appear to be nearly as prevalent for cross product direction as it is for vector addition and subtraction.
In addition to these findings, this study confirmed earlier findings regarding physics difficulties with magnetic field and magnetic force, such as differences in performance based on the representation of magnetic field (Scaife and Heckler, 2010) and confusion between electric and magnetic fields (Maloney et al., 2001). It also provided evidence of physics difficulties with magnetic field and magnetic force that have been suspected but never explored, specifically the impact of the sign of the charge and the observation location. This study demonstrated that student difficulty with cross product direction is not as simple as misapplied right-hand rules, although this is an issue. Student behavior on cross product direction questions is significantly dependent on both the context of the question and the representation of various problem features. Although more research is necessary, particularly in regard to individual differences, this study represents a significant step forward in our understanding of student difficulties with cross product direction.
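The order-reversal error described above is easy to state concretely (a generic numpy illustration, not the study's interview materials):

import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])

print(np.cross(A, B))   # [0. 0. 1.]
print(np.cross(B, A))   # [0. 0. -1.]  -- B x A = -(A x B), not the same answer

# "Reasoning backward" is genuinely harder: given C and B, the vector A is
# not unique, since adding any multiple of B to A leaves A x B unchanged.
print(np.cross(A + 2.5 * B, B))   # still [0. 0. 1.]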
Possibilities: A Framework for Modeling Students' Deductive Reasoning in Physics
ERIC Educational Resources Information Center
Gaffney, Jonathan David Housley
2010-01-01
Students often make errors when trying to solve qualitative or conceptual physics problems, and while many successful instructional interventions have been generated to prevent such errors, the process of deduction that students use when solving physics problems has not been thoroughly studied. In an effort to better understand that reasoning…
NASA Technical Reports Server (NTRS)
Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert
2004-01-01
The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices reside with the siderostat. The non-common vertex error (NCVE) is treated as a second example. Finally, combinations of models and various other errors are discussed.
NASA Astrophysics Data System (ADS)
Schaan, Emmanuel
2017-01-01
I will present two promising ways in which the cosmic microwave background (CMB) sheds light on critical uncertain physics and systematics of the large-scale structure. Shear calibration with CMB lensing: Realizing the full potential of upcoming weak lensing surveys requires an exquisite understanding of the errors in galaxy shape estimation. In particular, such errors lead to a multiplicative bias in the shear, degenerate with the matter density parameter and the amplitude of fluctuations. Its redshift-evolution can hide the true evolution of the growth of structure, which probes dark energy and possible modifications to general relativity. I will show that CMB lensing from a stage 4 experiment (CMB S4) can self-calibrate the shear for an LSST-like optical lensing survey. This holds in the presence of photo-z errors and intrinsic alignment. Evidence for the kinematic Sunyaev-Zel'dovich (kSZ) effect; cluster energetics: Through the kSZ effect, the baryon momentum field is imprinted on the CMB. I will report significant evidence for the kSZ effect from ACTPol and peculiar velocities reconstructed from BOSS. I will present the prospects for constraining cluster gas profiles and energetics from the kSZ effect with SPT-3G, AdvACT and CMB S4. This will provide constraints on galaxy formation and feedback models.
Student difficulties regarding symbolic and graphical representations of vector fields
NASA Astrophysics Data System (ADS)
Bollen, Laurens; van Kampen, Paul; Baily, Charles; Kelly, Mossy; De Cock, Mieke
2017-12-01
The ability to switch between various representations is an invaluable problem-solving skill in physics. In addition, research has shown that using multiple representations can greatly enhance a person's understanding of mathematical and physical concepts. This paper describes a study of student difficulties regarding interpreting, constructing, and switching between representations of vector fields, using both qualitative and quantitative methods. We first identified to what extent students are fluent with the use of field vector plots, field line diagrams, and symbolic expressions of vector fields by conducting individual student interviews and analyzing in-class student activities. Based on those findings, we designed the Vector Field Representations test, a free response assessment tool that has been given to 196 second- and third-year physics, mathematics, and engineering students from four different universities. From the obtained results we gained a comprehensive overview of typical errors that students make when switching between vector field representations. In addition, the study allowed us to determine the relative prevalence of the observed difficulties. Although the results varied greatly between institutions, a general trend revealed that many students struggle with vector addition, fail to recognize the field line density as an indication of the magnitude of the field, confuse characteristics of field lines and equipotential lines, and do not choose the appropriate coordinate system when writing out mathematical expressions of vector fields.
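As a small illustration of two of the representations probed by the test (a generic matplotlib sketch; the example field is mine), consider the symbolic expression F(x, y) = -y x_hat + x y_hat and its field vector plot:

import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
Fx, Fy = -y, x                    # symbolic form: F = -y x_hat + x y_hat

plt.quiver(x, y, Fx, Fy)          # field vector plot of the same field
plt.gca().set_aspect("equal")
plt.title("F(x, y) = -y x_hat + x y_hat")
plt.show()

# The field lines of this field are concentric circles; in a field line
# diagram it is their density, not an arrow length, that encodes magnitude --
# exactly the feature the study found many students fail to recognize.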
Quantum Approach to Informatics
NASA Astrophysics Data System (ADS)
Stenholm, Stig; Suominen, Kalle-Antti
2005-08-01
An essential overview of quantum information. Information, whether inscribed as a mark on a stone tablet or encoded as a magnetic domain on a hard drive, must be stored in a physical object and thus made subject to the laws of physics. Traditionally, information processing such as computation occurred in a framework governed by laws of classical physics. However, information can also be stored and processed using the states of matter described by non-classical quantum theory. Understanding this quantum information, a fundamentally different type of information, has been a major project of physicists and information theorists in recent years, and recent experimental research has started to yield promising results. Quantum Approach to Informatics fills the need for a concise introduction to this burgeoning new field, offering an intuitive approach for readers in both the physics and information science communities, as well as in related fields. Only a basic background in quantum theory is required, and the text keeps the focus on bringing this theory to bear on contemporary informatics. Instead of proofs and other highly formal structures, detailed examples present the material, making this a uniquely accessible introduction to quantum informatics. Topics covered include: an introduction to quantum information and the qubit; concepts and methods of quantum theory important for informatics; the application of information concepts to quantum physics; quantum information processing and computing; quantum gates; error correction using quantum-based methods; and physical realizations of quantum computing circuits. A helpful and economical resource for understanding this exciting new application of quantum theory to informatics, Quantum Approach to Informatics provides students and researchers in physics and information science, as well as other interested readers with some scientific background, with an essential overview of the field.
Recent Advances in Our Understanding of Nuclear Forces
NASA Astrophysics Data System (ADS)
Machleidt, Ruprecht
2007-05-01
The attempts to find the right (underlying) theory for the nuclear force have a long and stimulating history. Already in 1953, Hans Bethe stated that "more man-hours have been given to this problem than to any other scientific question in the history of mankind." In search for the nature of the nuclear force, the idea of sub-nuclear particles was created which, eventually, generated the field of particle physics. I will review this productive history of hope, error, and desperation. Finally, I will discuss recent ideas which apply the concept of an effective field theory to low-energy QCD. There are indications that this concept may provide the right framework to properly understand the nuclear force. To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2007.NWS07.B2.1
Power counting and Wilsonian renormalization in nuclear effective field theory
NASA Astrophysics Data System (ADS)
Valderrama, Manuel Pavón
2016-05-01
Effective field theories are the most general tool for the description of low energy phenomena. They are universal and systematic: they can be formulated for any low energy systems we can think of and offer a clear guide on how to calculate predictions with reliable error estimates, a feature that is called power counting. These properties can be easily understood in Wilsonian renormalization, in which effective field theories are the low energy renormalization group evolution of a more fundamental — perhaps unknown or unsolvable — high energy theory. In nuclear physics they provide the possibility of a theoretically sound derivation of nuclear forces without having to solve quantum chromodynamics explicitly. However there is the problem of how to organize calculations within nuclear effective field theory: the traditional knowledge about power counting is perturbative but nuclear physics is not. Yet power counting can be derived in Wilsonian renormalization and there is already a fairly good understanding of how to apply these ideas to non-perturbative phenomena and in particular to nuclear physics. Here we review a few of these ideas, explain power counting in two-nucleon scattering and reactions with external probes and hint at how to extend the present analysis beyond the two-body problem.
NASA Astrophysics Data System (ADS)
Inoue, S.; Shiraishi, J.; Takechi, M.; Matsunaga, G.; Isayama, A.; Hayashi, N.; Ide, S.
2017-11-01
An active stabilization effect of a rotating control field against error field penetration is numerically studied. We have developed a resistive magnetohydrodynamic code, 'AEOLUS-IT', which can simulate plasma responses to rotating/static external magnetic fields. Adopting a non-uniform flux coordinate system, the AEOLUS-IT simulation can employ high magnetic Reynolds number conditions relevant to present tokamaks. With AEOLUS-IT, we successfully clarified the stabilization mechanism of the control field against error field penetration. Physical processes of the plasma rotation drive via the control field are demonstrated by the nonlinear simulation, which reveals that the rotation amplitude at a resonant surface is not a monotonic function of the control field frequency, but has an extremum. Consequently, two 'bifurcated' frequency ranges of the control field are found for the stabilization of the error field penetration.
Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields
Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne
2015-01-01
Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
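For reference, the first-kind Rayleigh integral underlying such holography formulations has the standard form (sign and time-convention details vary between authors):

p(\mathbf{r}) = -\frac{i \omega \rho_0}{2\pi} \int_S v_n(\mathbf{r}') \, \frac{e^{ikR}}{R} \, dS', \qquad R = |\mathbf{r} - \mathbf{r}'|,

which propagates the normal velocity v_n on the source surface S to the pressure p at any field point; holographic reconstruction inverts this relation using pressure measurements made over a surface in the radiated field.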
Assessing student understanding of measurement and uncertainty
NASA Astrophysics Data System (ADS)
Jirungnimitsakul, S.; Wattanakasiwich, P.
2017-09-01
The objectives of this study were to develop and assess student understanding of measurement and uncertainty. A test was adapted and translated from the Laboratory Data Analysis Instrument (LDAI) test; it consists of 25 questions focused on three topics: measures of central tendency, experimental errors and uncertainties, and fitting regression lines. The content validity of the test was evaluated by three physics experts in physics laboratory teaching. In the pilot study, the Thai LDAI was administered to 93 freshmen enrolled in a fundamental physics laboratory course. The final draft of the test was administered to three groups—45 freshmen taking fundamental physics laboratory, 16 sophomores taking intermediate physics laboratory and 21 juniors taking advanced physics laboratory at Chiang Mai University. As a result, we found that the freshmen had difficulties with experimental errors and uncertainties. Most students had problems with fitting regression lines. These results will be used to improve the teaching and learning of physics laboratory for physics students in the department.
AMOEBA 2.0: A physics-first approach to biomolecular simulations
NASA Astrophysics Data System (ADS)
Rackers, Joshua; Ponder, Jay
The goal of the AMOEBA force field project is to use classical physics to understand and predict the nature of interactions between biological molecules. While making significant advances over the past decade, the ultimate goal of predicting binding energies with ``chemical accuracy'' remains elusive. The primary source of this inaccuracy comes from the physics of how molecules interact at short range. For example, despite AMOEBA's advanced treatment of electrostatics, the force field dramatically overpredicts the electrostatic energy of DNA stacking interactions. AMOEBA 2.0 works to correct these errors by including simple, first principles physics-based terms to account for the quantum mechanical nature of these short-range molecular interactions. We have added a charge penetration term that considerably improves the description of electrostatic interactions at short range. We are reformulating the polarization term of AMOEBA in terms of basic physics assertions. And we are reevaluating the van der Waals term to match ab initio energy decompositions. These additions and changes promise to make AMOEBA more predictive. By including more physical detail of the important short-range interactions of biological molecules, we hope to move closer to the ultimate goal of true predictive power.
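To illustrate the idea of a charge penetration correction (a generic damped-Coulomb sketch, not AMOEBA 2.0's actual functional form): replacing one point charge by a Slater-like exponential charge density turns the bare Coulomb interaction into

E(r) = \frac{q_i q_j}{r}\left[1 - \left(1 + \frac{\alpha r}{2}\right) e^{-\alpha r}\right],

which recovers q_i q_j / r at large separation but is strongly reduced (and finite) at short range, where the charge clouds interpenetrate. This is the qualitative effect missing from undamped point-multipole electrostatics, and the reason stacking energies are overpredicted.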
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions, and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally intensive examples. Having this framework at hand enables hydrogeologists to achieve the optimal physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
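A toy version of the allocation problem makes the trade-off concrete (my illustrative error model and scalings, not the authors' derived expression): Monte Carlo error decays like 1/sqrt(N), discretization error grows like h^p, and cost scales like N/h^d.

import numpy as np

def total_error(N, h, c_stat=1.0, c_disc=50.0, p=2):
    # Joint error model: statistical sampling term plus grid discretization term.
    return np.sqrt(c_stat / N + (c_disc * h**p) ** 2)

def cost(N, h, d=3):
    return N / h**d                      # N realizations, each with ~h^-d cells

budget = 1e9
candidates = [(N, h)
              for N in np.unique(np.logspace(1, 6, 100).astype(int))
              for h in np.logspace(-2.5, -1, 100)
              if cost(N, h) <= budget]
N_opt, h_opt = min(candidates, key=lambda nh: total_error(*nh))
print(f"optimal allocation: N = {N_opt} realizations, h = {h_opt:.4f}")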
NASA Astrophysics Data System (ADS)
La Haye, Rob
2012-09-01
The Magnetohydrodynamic (MHD) Control Workshop with the theme 'Optimizing and Understanding the Role of Coils for Mode Control' was held at General Atomics (20-22 November 2011) following the 2011 APS-DPP Annual Meeting in Salt Lake City, Utah (14-18 November). This was the 16th in the annual series and was organized jointly by Columbia University, General Atomics, Princeton Plasma Physics Laboratory, and the University of Wisconsin-Madison. Program committee participation included representatives from the EU and Japan along with other US laboratory and university institutions. This workshop highlighted the role of applied non-axisymmetric magnetic fields from both internal and external coils for control of MHD stability to achieve high performance fusion plasmas. The application of 3D magnetic fields offers control of important elements of equilibrium, stability, and transport. The use of active 3D fields to stabilize global instabilities and to correct magnetic field errors is an established tool for achieving high beta configurations. 3D fields also affect transport and plasma momentum, and are shown to be important for the control of edge localized modes (ELMs), resistive wall modes, and optimized stellarator configurations. The format was similar to previous workshops, including 13 invited talks and 21 contributed talks, and this year there were 2 panel discussions ('Error Field Correction' led by Andrew Cole of Columbia University and 'Application of Coils in General' led by Richard Buttery of General Atomics). Ted Strait of General Atomics also gave a summary of the International Tokamak Physics Activity (ITPA) MHD meeting in Padua, a group for which he is now the leader. This special section of Plasma Physics and Controlled Fusion (PPCF) contains a sample of the presentations at the workshop, which have been subject to the normal refereeing procedures of the journal. They include a review (A Boozer) and an invited talk (R Fitzpatrick) on error fields, an invited talk on control of neoclassical tearing modes (H van den Brand), and an invited talk (P Zanca) and a contributed talk (E Oloffson) on control of the resistive wall mode kink. These are just representative of the broad spectrum of recent work on stability found posted at the web site (https://fusion.gat.com/conferences/mhd11/). We thank PPCF for continuing to have this special issue section. This was the third time the workshop was held at General Atomics. We thank General Atomics for making the site available for an internationally represented workshop in the new era of heightened security and controls. The next workshop (17th) will be held at Columbia University (for the fourth time) (https://fusion.gat.com/conferences/mhd12/) with the theme of 'Addressing the Disruption Challenge for ITER', to be combined with the Joint US-Japan MHD Workshop with a special session on 'Fundamentals of 3D Perturbed Equilibrium Control: Present & Beyond'.
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...
2017-01-07
Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
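Treating the four sources as independent, the plot-level variance budget referred to above combines in quadrature (a standard propagation identity, not a formula quoted from the paper):

\sigma^2_{\text{total}} = \sigma^2_{\text{meas}} + \sigma^2_{\text{allom}} + \sigma^2_{\text{co-loc}} + \sigma^2_{\text{temp}},

so a reported >65% share for the co-location and temporal terms means those two variances together exceed the other two combined by roughly a factor of two.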
Space charge enhanced plasma gradient effects on satellite electric field measurements
NASA Technical Reports Server (NTRS)
Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.
1991-01-01
It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. Space charge enhanced Plasma Gradient Induced Error (PGIE) is discussed in general terms, the results of a laboratory experiment designed to demonstrate this error are presented, and a simple expression that quantifies this error is derived. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors in space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady state space charge enhanced PGIE as measured by two identical current-biased probes.
Improving lidar turbulence estimates for wind energy
NASA Astrophysics Data System (ADS)
Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.
2016-09-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
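In outline, the two-stage structure of such a TI error model looks as follows (a hedged sketch with synthetic data; all variable names and the stage-1 correction are placeholders, not the authors' code):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500

# Synthetic stand-ins for collocated tower/lidar data (illustrative only).
stability = rng.normal(0.0, 1.0, n)           # atmospheric stability proxy
mean_wind = rng.uniform(3.0, 15.0, n)         # m/s
ti_tower = 0.10 + 0.02 * stability + rng.normal(0.0, 0.01, n)
ti_lidar = ti_tower + 0.03 * np.abs(stability) + 0.005   # contaminated TI

# Stage 1: physics-based correction (here just a constant offset placeholder).
ti_stage1 = ti_lidar - 0.005

# Stage 2: machine-learning regression of the remaining TI error.
X = np.column_stack([stability, mean_wind])
ml = RandomForestRegressor(n_estimators=100, random_state=0)
ml.fit(X, ti_tower - ti_stage1)
ti_final = ti_stage1 + ml.predict(X)

rms = lambda e: np.sqrt(np.mean(e**2))
print("RMS TI error before/after ML stage:",
      rms(ti_stage1 - ti_tower), rms(ti_final - ti_tower))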
Improving Lidar Turbulence Estimates for Wind Energy: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer; Clifton, Andrew; Churchfield, Matthew
2016-10-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
Unforced errors and error reduction in tennis
Brody, H
2006-01-01
Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.
2012-04-01
A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. An inverse scheme using ensemble-based estimates of the simulation error variance to allow for this environment error performs well compared with weighting schemes used in previous calibration studies, giving improved estimates of the known parameters. The efficacy of the new scheme in real-world applications will depend on the quality of statistical characterizations of the input data. Practical approaches towards developing reliable characterizations are discussed.
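The resulting inverse scheme can be summarized as a weighted least-squares cost (a standard form; the novelty described above lies in estimating the simulation error variance from ensembles):

J(\mathbf{p}) = \sum_i \frac{\left[ m_i(\mathbf{p}) - o_i \right]^2}{\sigma^2_{\text{obs},i} + \sigma^2_{\text{sim},i}},

where m_i(\mathbf{p}) are model outputs for parameter vector \mathbf{p}, o_i are the observations, and \sigma^2_{\text{sim},i} is the ensemble-based simulation error variance induced by uncertain environmental inputs, so that model-data misfits are discounted where environment error makes them uninformative.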
The Challenge of Grounding Planning in Simulation with an Interactive Model Development Environment
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Frank, Jeremy D.; Chachere, John M.; Smith, Tristan B.; Swanson, Keith J.
2011-01-01
A principal obstacle to fielding automated planning systems is the difficulty of modeling. Physical systems are modeled conventionally based on specification documents and the modeler's understanding of the system. Thus, the model is developed in a way that is disconnected from the system's actual behavior and is vulnerable to manual error. Another obstacle to fielding planners is testing and validation. For a space mission, generated plans must be validated, often by translating them into command sequences that are run in a simulation testbed. Testing in this way is complex and onerous because of the large number of possible plans and states of the spacecraft. However, if used as a source of domain knowledge, the simulator can ease validation. This paper poses a challenge: to ground planning models in the system physics represented by simulation. A proposed, interactive model development environment illustrates the integration of planning and simulation to meet the challenge. This integration reveals research paths for automated model construction and validation.
Fault and Error Latency Under Real Workload: an Experimental Study. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chillarege, Ram
1986-01-01
A practical methodology for the study of fault and error latency is demonstrated under a real workload. This is the first study that measures and quantifies the latency under real workload and fills a major gap in the current understanding of workload-failure relationships. The methodology is based on low level data gathered on a VAX 11/780 during the normal workload conditions of the installation. Fault occurrence is simulated on the data, and the error generation and discovery process is reconstructed to determine latency. The analysis proceeds to combine the low level activity data with high level machine performance data to yield a better understanding of the phenomena. A strong relationship exists between latency and workload and that relationship is quantified. The sampling and reconstruction techniques used are also validated. Error latency in the memory where the operating system resides was studied using data on the physical memory access. Fault latency in the paged section of memory was determined using data from physical memory scans. Error latency in the microcontrol store was studied using data on the microcode access and usage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.
Understanding of single- and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.
Lee, Sangyoon; Hu, Xinda; Hua, Hong
2016-05-01
Many error sources have been explored in regards to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction for virtual and see-through optical paths caused by an optical combiner, which is required of OST-HMDs. The second occurs from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for these two types of errors. Furthermore, we investigate their effectiveness through an experiment comparing the conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40∼200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, the results demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
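The IPD change under convergence can be approximated with simple symmetric geometry (my simplified model with an assumed eye rotation radius; not the authors' exact compensation method):

import numpy as np

def effective_ipd(ipd_far_cm, target_cm, eye_radius_cm=1.2):
    """Each eye rotates inward about its center to fixate a near target,
    carrying the pupil inward by roughly r * sin(theta) (symmetric viewing)."""
    theta = np.arctan((ipd_far_cm / 2) / target_cm)   # inward rotation per eye
    return ipd_far_cm - 2 * eye_radius_cm * np.sin(theta)

for d_cm in (40, 100, 200):   # the study's near-field range
    print(f"{d_cm:>3} cm target: effective IPD "
          f"{effective_ipd(6.4, d_cm):.2f} cm (from 6.40 cm at infinity)")

Even this crude model predicts a roughly 2 mm IPD reduction at a 40 cm target, large enough to matter for stereoscopic rendering at near-field distances.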
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Menges, Brian M.
1998-01-01
Errors in the localization of nearby virtual objects presented via see-through, helmet mounted displays are examined as a function of viewing conditions and scene content in four experiments using a total of 38 subjects. Monocular, biocular or stereoscopic presentation of the virtual objects, accommodation (required focus), subjects' age, and the position of physical surfaces are examined. Nearby physical surfaces are found to introduce localization errors that differ depending upon the other experimental factors. These errors apparently arise from the occlusion of the physical background by the optically superimposed virtual objects. But they are modified by subjects' accommodative competence and specific viewing conditions. The apparent physical size and transparency of the virtual objects and physical surfaces respectively are influenced by their relative position when superimposed. The design implications of the findings are discussed in a concluding section.
Liu, Yan; Salvendy, Gavriel
2009-05-01
This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanation contributions of the most important factors in factor analysis and depreciate the significance of discriminant function and discrimination abilities of individual variables in discrimination analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
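The attenuation of correlation, the first effect listed, is simple to reproduce (a generic simulation of Spearman's attenuation relation r_obs = r_true * sqrt(r_xx * r_yy), not data from the paper):

import numpy as np

rng = np.random.default_rng(42)
n = 100_000
r_true = 0.7
true_x = rng.normal(size=n)
true_y = r_true * true_x + np.sqrt(1 - r_true**2) * rng.normal(size=n)

# Add noise so each scale has reliability 0.64 (true variance / observed).
noise_sd = np.sqrt(1 / 0.64 - 1)
obs_x = true_x + noise_sd * rng.normal(size=n)
obs_y = true_y + noise_sd * rng.normal(size=n)

print(np.corrcoef(true_x, true_y)[0, 1])   # ~0.70
print(np.corrcoef(obs_x, obs_y)[0, 1])     # ~0.70 * sqrt(0.64 * 0.64) = 0.45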
The Diagnosis of Error in Histories of Science
NASA Astrophysics Data System (ADS)
Thomas, William
Whether and how to diagnose error in the history of science is a contentious issue. For many scientists, diagnosis is appealing because it allows them to discuss how knowledge can progress most effectively. Many historians disagree. They consider diagnosis inappropriate because it may discard features of past actors' thought that are important to understanding it, and may have even been intellectually productive. Ironically, these historians are apt to diagnose flaws in scientists' histories as proceeding from a misguided desire to idealize scientific method, and from their attendant identification of deviations from the ideal as, ipso facto, a paramount source of error in historical science. While both views have some merit, they should be reconciled if a more harmonious and productive relationship between the disciplines is to prevail. In To Explain the World, Steven Weinberg narrates the slow but definite emergence of what we call science from long traditions of philosophical and mathematical thought. This narrative follows in a historiographical tradition charted by historians such as Alexandre Koyre and Rupert Hall about sixty years ago. It is essentially a history of the emergence of reliable (if fallible) scientific method from more error-prone thought. While some historians such as Steven Shapin view narratives of this type as fundamentally error-prone, I do not view such projects as a priori illegitimate. They are, however, perhaps more difficult than Weinberg supposes. In this presentation, I will focus on two of Weinberg's strong historical claims: that physics became detached from religion as early as the beginning of the eighteenth century, and that physics proved an effective model for placing other fields on scientific grounds. While I disagree with these claims, they represent at most an overestimation of vintage science's interest in discarding theological questions, and an overestimation of that science's ability to function at all reliably.
A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data
NASA Astrophysics Data System (ADS)
Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.
2016-09-01
Knowledge of 3D, three component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.
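In skeleton form, the least-squares step fits the simulation's uncertain parameters so its in-plane velocities match the PIV field (a generic scipy sketch; the analytic forward model below is a stand-in for the 3D finite element ACET simulation):

import numpy as np
from scipy.optimize import least_squares

def simulate_uv(p, x, y):
    """Stand-in forward model: in-plane velocity on the PIV plane as a
    function of uncertain physical parameters p = (amplitude, decay)."""
    amp, decay = p
    return (amp * np.sin(x) * np.exp(-decay * y),
            -amp * np.cos(x) * np.exp(-decay * y))

x, y = np.meshgrid(np.linspace(0, np.pi, 20), np.linspace(0, 1, 10))
u_piv, v_piv = simulate_uv([2.0, 0.5], x, y)            # synthetic "PIV" data
u_piv = u_piv + np.random.default_rng(0).normal(0, 0.05, u_piv.shape)

def residuals(p):
    u, v = simulate_uv(p, x, y)
    return np.concatenate([(u - u_piv).ravel(), (v - v_piv).ravel()])

fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)   # ~[2.0, 0.5]; the fitted model then supplies the full 3D field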
NASA Technical Reports Server (NTRS)
Barraclough, D. R.; Hide, R.; Leaton, B. R.; Lowes, F. J.; Malin, S. R. C.; Wilson, R. L. (Principal Investigator)
1981-01-01
Quiet-day data from MAGSAT were examined for effects which might test the validity of Maxwell's equations. Both external and toroidal fields which might represent a violation of the equations appear to exist, well within the associated errors. The external field might be associated with the ring current, and varies on a time-scale of one day or less. Its orientation is parallel to the geomagnetic dipole. The toroidal field can be confused with an orientation error (in yaw). If the toroidal field really exists, it can be related to ionospheric currents, to toroidal fields in the Earth's core in accordance with Einstein's unified field theory, or to both.
Aryal, Arjun; Brooks, Benjamin A.; Reid, Mark E.; Bawden, Gerald W.; Pawlak, Geno
2012-01-01
Acquiring spatially continuous ground-surface displacement fields from Terrestrial Laser Scanners (TLS) will allow better understanding of the physical processes governing landslide motion at detailed spatial and temporal scales. Problems arise, however, when estimating continuous displacement fields from TLS point-clouds because reflecting points from sequential scans of moving ground are not defined uniquely, thus repeat TLS surveys typically do not track individual reflectors. Here, we implemented the cross-correlation-based Particle Image Velocimetry (PIV) method to derive a surface deformation field using TLS point-cloud data. We estimated associated errors using the shape of the cross-correlation function and tested the method's performance with synthetic displacements applied to a TLS point cloud. We applied the method to the toe of the episodically active Cleveland Corral Landslide in northern California using TLS data acquired in June 2005–January 2007 and January–May 2010. Estimated displacements ranged from decimeters to several meters and they agreed well with independent measurements at better than 9% root mean squared (RMS) error. For each of the time periods, the method provided a smooth, nearly continuous displacement field that coincides with independently mapped boundaries of the slide and permits further kinematic and mechanical inference. For the 2010 data set, for instance, the PIV-derived displacement field identified a diffuse zone of displacement that preceded by over a month the development of a new lateral shear zone. Additionally, the upslope and downslope displacement gradients delineated by the dense PIV field elucidated the non-rigid behavior of the slide.
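The core of the PIV step is locating the peak of a cross-correlation between interrogation windows from successive scans. A minimal FFT-based sketch on gridded data (the point clouds would first be rasterized; the array sizes and test shift here are illustrative):

```python
import numpy as np

def piv_displacement(patch_a, patch_b):
    """Estimate the integer-cell shift of patch_b relative to patch_a
    from the peak of their FFT-based cross-correlation."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map FFT indices to signed shifts
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

# Synthetic test: shift a random surface by (3, -5) grid cells
rng = np.random.default_rng(2)
a = rng.normal(size=(64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(piv_displacement(a, b))   # -> [3, -5]
```

Sub-cell accuracy and the error estimates described in the abstract would come from fitting the shape of the correlation peak rather than taking its integer location.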
A method to map errors in the deformable registration of 4DCT images
Vaman, Constantin; Staub, David; Williamson, Jeffrey; Murphy, Martin J.
2010-01-01
Purpose: To present a new approach to the problem of estimating errors in deformable image registration (DIR) applied to sequential phases of a 4DCT data set. Methods: A set of displacement vector fields (DVFs) is made by registering a sequence of 4DCT phases. The DVFs are assumed to display anatomical movement, with the addition of errors due to the imaging and registration processes. The positions of physical landmarks in each CT phase are measured as ground truth for the physical movement in the DVF. Principal component analysis of the DVFs and the landmarks is used to identify and separate the eigenmodes of physical movement from the error eigenmodes. By subtracting the physical modes from the principal components of the DVFs, the registration errors are exposed and reconstructed as DIR error maps. The method is demonstrated via a simple numerical model of 4DCT DVFs that combines breathing movement with simulated maps of spatially correlated DIR errors. Results: The principal components of the simulated DVFs were observed to share the basic properties of principal components for actual 4DCT data. The simulated error maps were accurately recovered by the estimation method. Conclusions: Deformable image registration errors can have complex spatial distributions. Consequently, point-by-point landmark validation can give unrepresentative results that do not accurately reflect the registration uncertainties away from the landmarks. The authors are developing a method for mapping the complete spatial distribution of DIR errors using only a small number of ground truth validation landmarks. PMID:21158288
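The mode-separation idea can be sketched in a few lines of NumPy: principal components of a synthetic DVF ensemble are computed by SVD, the component whose temporal weights track the landmark-measured breathing trace is labeled physical, and the remainder exposes the error map. The dimensions, noise level, and single-mode motion model are all illustrative simplifications:

```python
import numpy as np

# Rows: 4DCT phases; columns: flattened displacement-vector-field samples.
rng = np.random.default_rng(3)
n_phases, n_points = 10, 500
breathing = np.sin(np.linspace(0, 2 * np.pi, n_phases))      # temporal mode
motion_map = rng.normal(size=n_points)                        # spatial mode
dvfs = np.outer(breathing, motion_map) + 0.1 * rng.normal(size=(n_phases, n_points))

# Principal components of the DVF ensemble via SVD
mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)

# Identify the physical mode as the component whose temporal weights
# track the landmark-measured breathing trace, then remove it.
weights = U * S                    # temporal weights of each mode
phys = np.argmax(np.abs([np.corrcoef(w, breathing)[0, 1] for w in weights.T]))
error_map = dvfs - mean - np.outer(weights[:, phys], Vt[phys])
print("estimated error RMS:", error_map.std())
```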
NASA Astrophysics Data System (ADS)
Mali, V. K.; Kuiry, S. N.
2015-12-01
Comprehensive understanding of river flow dynamics over varying topography in the field is intricate and difficult. Conventional experimental methods based on manual data collection are time consuming and prone to many errors. Remotely sensed satellite imagery can provide the necessary information over large areas, but high-resolution products are expensive and often untimely; consequently, deriving accurate river bathymetry from relatively coarse-resolution, untimely imagery is inaccurate and impractical. Despite this, such data are often used to calibrate river flow models, even though these models require highly accurate morpho-dynamic data in order to predict the flow field precisely. Under these circumstances, the data can be supplemented by experimental observations of a physical model using modern techniques. This paper proposes a methodology to generate highly accurate river bathymetry and water surface (WS) profiles for a physical model of a river network system using the close-range photogrammetry (CRP) technique. For this task, a number of DSLR Nikon D5300 cameras, mounted 3.5 m above the river bed, were used to capture images of the physical model and the flooding scenarios during the experiments. During the experiments, non-specular materials were introduced at the inlet, and images were taken simultaneously from different orientations and altitudes with a significant overlap of 80%. Ground control points were surveyed using two ultrasonic sensors with ±0.5 mm vertical accuracy. The captured images were then processed in PhotoScan software to generate the DEM and WS profile, and the generated data were passed through statistical analysis to identify errors. The accuracy of the WS profile was limited by the extent and density of the non-specular powder, by stereo-matching discrepancies, and by several camera factors, including orientation, illumination, and altitude. For a large-scale physical model, the CRP technique can significantly reduce time and manual labour and avoids the human errors involved in taking data with a point gauge. The resulting highly accurate DEM and WS profile can be used in mathematical models for accurate prediction of river dynamics. This study should be very helpful for sediment transport studies and can also be extended to real case studies.
NASA Astrophysics Data System (ADS)
Méndez Incera, F. J.; Erikson, L. H.; Ruggiero, P.; Barnard, P.; Camus, P.; Rueda Zamora, A. C.
2014-12-01
Logical Fallacies and the Abuse of Climate Science: Fire, Water, and Ice
NASA Astrophysics Data System (ADS)
Gleick, P. H.
2012-12-01
Good policy without good science and analysis is unlikely. Good policy with bad science is even more unlikely. Unfortunately, there is a long history of abuse or misuse of science in fields with ideological, religious, or economically controversial policy implications, such as planetary physics during the time of Galileo, the evolution debate, or climate change. Common to these controversies are what are known as "logical fallacies" -- patterns of reasoning that are always -- or at least commonly -- wrong due to a flaw in the structure of the argument that renders the argument invalid. All scientists should understand the nature of logical fallacies in order to (1) avoid making mistakes and reaching unsupported conclusions, (2) help them understand and refute the flaws in arguments made by others, and (3) aid in communicating science to the public. This talk will present a series of logical fallacies often made in the climate science debate, including "arguments from ignorance," "arguments from error," "arguments from misinterpretation," and "cherry picking." Specific examples will be presented in the areas of temperature analysis, water resources, and ice dynamics, with a focus on selective use or misuse of data, including an amusing example of an "argument from error."
Maggiora, Gerald M
2011-08-01
Reductionism is alive and well in drug-discovery research. In that tradition, we continually improve experimental and computational methods for studying smaller and smaller aspects of biological systems. Although significant improvements continue to be made, are our efforts too narrowly focused? Suppose all error could be removed from these methods: would we then understand biological systems sufficiently well to design effective drugs? Currently, almost all drug research focuses on single targets. Should the process be expanded to include multiple targets? Recent efforts in this direction have led to the emerging field of polypharmacology. This appears to be a move in the right direction, but how much polypharmacology is enough? As the complexity of the processes underlying polypharmacology increases, will we be able to understand them and their inter-relationships? Is "new" mathematics, unfamiliar in much of physics and chemistry research, needed to accomplish this task? A number of these questions will be addressed in this paper, which focuses on issues and questions, not answers, concerning the drug-discovery conundrum.
Comparative Physical Education and Sport: The Area Defined.
ERIC Educational Resources Information Center
Howell, Maxwell L.; Howell, Reet A.
The emerging field of comparative physical education and sport, or international physical education and sport, rests squarely on the shoulders of comparative education; an understanding and appreciation of the latter is necessary for an understanding of the former. Comparative education is an older field of study and has gone through certain…
The SAMI Galaxy Survey: can we trust aperture corrections to predict star formation?
NASA Astrophysics Data System (ADS)
Richards, S. N.; Bryant, J. J.; Croom, S. M.; Hopkins, A. M.; Schaefer, A. L.; Bland-Hawthorn, J.; Allen, J. T.; Brough, S.; Cecil, G.; Cortese, L.; Fogarty, L. M. R.; Gunawardhana, M. L. P.; Goodwin, M.; Green, A. W.; Ho, I.-T.; Kewley, L. J.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Medling, A. M.; Owers, M. S.; Sharp, R.; Sweet, S. M.; Taylor, E. N.
2016-01-01
In the low-redshift Universe (z < 0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broad-band imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey sample. Understanding the sensitivities of these aperture corrections is essential for correct handling of systematic errors in galaxy evolution studies.
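One of the aperture-correction strategies at issue can be reduced to a one-line scaling: multiply the fibre-aperture star formation rate by the ratio of total to fibre broad-band flux, which implicitly assumes that instantaneous star formation traces the broad-band light (the assumption the survey tests). A sketch with hypothetical values:

```python
import numpy as np

def aperture_corrected_sfr(sfr_fibre, flux_total, flux_fibre):
    """Scale a fibre-based SFR by the total-to-fibre broad-band flux ratio.
    Assumes star formation traces the broad-band light uniformly."""
    return sfr_fibre * flux_total / flux_fibre

# Hypothetical galaxy: 0.8 Msun/yr inside the fibre, fibre captures ~25% of flux
print(aperture_corrected_sfr(sfr_fibre=0.8, flux_total=5.2, flux_fibre=1.3))
# -> 3.2 (Msun/yr) for these illustrative numbers
```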
Source characterization of underground explosions from hydrodynamic-to-elastic coupling simulations
NASA Astrophysics Data System (ADS)
Chiang, A.; Pitarka, A.; Ford, S. R.; Ezzedine, S. M.; Vorobiev, O.
2017-12-01
A major improvement in ground motion simulation capabilities for underground explosion monitoring during the first phase of the Source Physics Experiment (SPE) is the development of a wave propagation solver that can propagate explosion-generated non-linear near-field ground motions to the far field. The calculation is done using a hybrid modeling approach with a one-way hydrodynamic-to-elastic coupling in three dimensions, where near-field motions are computed using GEODYN-L, a Lagrangian hydrodynamics code, and then passed to WPP, an elastic finite-difference code for seismic waveform modeling. The advancement in ground motion simulation capabilities gives us the opportunity to assess moment tensor inversion of a realistic volumetric source with near-field effects in a controlled setting, where we can evaluate the recovered source properties as a function of modeling parameters (i.e., velocity model) and can provide insights into previous source studies on SPE Phase I chemical shots and other historical nuclear explosions. For example, moment tensor inversion of far-field SPE seismic data demonstrated that, while vertical motions are well modeled using existing velocity models, large misfits persist in predicting tangential shear-wave motions from explosions. One possible explanation we can explore is error and uncertainty in the underlying Earth model. Here we investigate the recovered moment tensor solution, particularly the non-volumetric component, by inverting far-field ground motions simulated from physics-based explosion source models in fractured material, where the physics-based source models are based on the modeling of SPE-4P, SPE-5 and SPE-6 near-field data. The hybrid modeling approach provides new prospects for modeling the explosion source and understanding the uncertainties associated with it.
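Far-field displacements are linear in the moment tensor components, so in its simplest form the inversion reduces to ordinary least squares on d = G m. A toy sketch with a random stand-in for the Green's-function matrix (all dimensions, values, and the noise level are illustrative):

```python
import numpy as np

# Far-field displacements are linear in the moment tensor: d = G @ m,
# where G holds Green's-function derivatives for the assumed Earth model
# and m = (Mxx, Myy, Mzz, Mxy, Mxz, Myz).
rng = np.random.default_rng(4)
n_data = 120                       # waveform samples across stations
G = rng.normal(size=(n_data, 6))   # stand-in for computed Green's functions

m_true = np.array([1.0, 1.0, 1.0, 0.1, 0.0, 0.05])  # mostly volumetric source
d = G @ m_true + 0.05 * rng.normal(size=n_data)

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)

# Isotropic (volumetric) part vs. deviatoric remainder
iso = m_est[:3].mean()
print("recovered tensor:", m_est.round(3), " isotropic part:", round(iso, 3))
```

Errors in the assumed velocity model enter through G, which is how an inaccurate Earth model maps into spurious non-volumetric components of the recovered tensor.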
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Kalet, A; Smith, W
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into "mock" treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). The errors were scored for severity and frequency, and those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51-75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73-98%]) and a planned dose different from the prescribed dose (100% [61-100%]). Errors with low detection rates involved incorrect field parameters in the record-and-verify system (38% [18-61%]) and incorrect isocenter localization in the planning system (29% [8-64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. These data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
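The Wilson score interval used for these detection proportions is straightforward to compute; a sketch (the counts below are hypothetical, chosen only to match the reported 65% rate):

```python
import numpy as np
from scipy.stats import norm

def wilson_interval(successes, trials, confidence=0.95):
    """Wilson score interval for a binomial proportion."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    centre = (p_hat + z**2 / (2 * trials)) / denom
    half = (z / denom) * np.sqrt(p_hat * (1 - p_hat) / trials
                                 + z**2 / (4 * trials**2))
    return centre - half, centre + half

# e.g. errors detected in 65% of reviews (hypothetical counts)
print(wilson_interval(successes=26, trials=40))
```

Unlike the naive normal approximation, the Wilson interval behaves sensibly for small samples and for proportions near 0 or 1, which is why it suits detection rates like 100% [61-100%].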
High School Students' Representations and Understandings of Electric Fields
ERIC Educational Resources Information Center
Cao, Ying; Brizuela, Bárbara M.
2016-01-01
This study investigates the representations and understandings of electric fields expressed by Chinese high school students 15 to 16 years old who have not received high school level physics instruction. The physics education research literature has reported students' conceptions of electric fields post-instruction as indicated by students'…
ERIC Educational Resources Information Center
Safadi, Rafi'
2017-01-01
I examined the impact of a self-diagnosis activity on students' conceptual understanding and achievements in physics. This activity requires students to self-diagnose their solutions to problems that they have solved on their own--namely, to identify and explain their errors--and self-score them--that is, assign scores to their solutions--aided by…
Yung, Marcus; Manji, Rahim; Wells, Richard P
2017-11-01
Our aim was to explore the relationship between fatigue and operation system performance during a simulated light precision task over an 8-hr period using a battery of physical (central and peripheral) and cognitive measures. Fatigue may play an important role in the relationship between poor ergonomics and deficits in quality and productivity. However, well-controlled laboratory studies in this area have several limitations, including the lack of work relevance of fatigue exposures and lack of both physical and cognitive measures. There remains a need to understand the relationship between physical and cognitive fatigue and task performance at exposure levels relevant to realistic production or light precision work. Errors and fatigue measures were tracked over the course of a micropipetting task. Fatigue responses from 10 measures and errors in pipetting technique, precision, and targeting were submitted to principal component analysis to descriptively analyze features and patterns. Fatigue responses and error rates contributed to three principal components (PCs), accounting for 50.9% of total variance. Fatigue responses grouped within the three PCs reflected central and peripheral upper extremity fatigue, postural sway, and changes in oculomotor behavior. In an 8-hr light precision task, error rates shared similar patterns to both physical and cognitive fatigue responses, and/or increases in arousal level. The findings provide insight toward the relationship between fatigue and operation system performance (e.g., errors). This study contributes to a body of literature documenting task errors and fatigue, reflecting physical (both central and peripheral) and cognitive processes.
Sampling design for spatially distributed hydrogeologic and environmental processes
Christakos, G.; Olea, R.A.
1992-01-01
A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for the physical sciences is its rationalization of correlations in the spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range of subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions.
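The key property exploited here is that the estimation error variance depends only on the sampling geometry and the spatial covariance model, not on the measured values, so candidate networks can be ranked before any sampling takes place. A minimal sketch using simple kriging with an exponential covariance (the configurations and covariance parameters are illustrative):

```python
import numpy as np

def simple_kriging_variance(samples, target, sill=1.0, scale=2.0):
    """Estimation error variance at `target` for a sampling configuration,
    under an isotropic exponential covariance C(h) = sill * exp(-h/scale)."""
    cov = lambda h: sill * np.exp(-h / scale)
    d = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    C = cov(d)                                         # sample-to-sample covariances
    c = cov(np.linalg.norm(samples - target, axis=1))  # sample-to-target covariances
    return sill - c @ np.linalg.solve(C, c)

# Compare two candidate networks for estimating the field at the origin
grid = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
cluster = np.array([[0.9, 0.9], [1.0, 1.0], [1.1, 1.1], [1.0, 0.9]])
target = np.array([0.0, 0.0])
print("grid design:     ", simple_kriging_variance(grid, target))
print("clustered design:", simple_kriging_variance(cluster, target))
```

The spread-out design yields the lower error variance, which is the kind of index the proposed procedure supplies without solving the full estimation system for every candidate network.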
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
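The Sobol' analysis adapted here for forcing errors can be sketched with the SALib package (assuming its classic saltelli/sobol API; the toy "snow model" response function, variable names, and bounds below are invented for illustration and are unrelated to the Utah Energy Balance model):

```python
# pip install SALib
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical toy response: output depends on a precipitation bias,
# a temperature bias, and a shortwave random-error magnitude.
problem = {
    "num_vars": 3,
    "names": ["precip_bias", "temp_bias", "sw_random_error"],
    "bounds": [[-0.5, 0.5], [-2.0, 2.0], [0.0, 50.0]],
}

X = saltelli.sample(problem, 1024)
Y = (1.0 - X[:, 0]) * np.maximum(0.0, -X[:, 1]) + 0.002 * X[:, 2] ** 2

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:16s} first-order: {s1:.2f}  total: {st:.2f}")
```

Comparing first-order and total indices separates each forcing error's direct effect from its interactions, which is how bias and random-error contributions can be ranked across outputs.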
NASA Astrophysics Data System (ADS)
Hinton, Courtney; Punjabi, Alkesh; Ali, Halima
2008-11-01
The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used: (m,n) = (3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10^-5 to 2.0×10^-5. For this noise and field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.
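The general procedure, iterating an area-preserving map with a small perturbation and measuring the spread of the action near a separatrix, can be illustrated without reproducing the simple map itself. The Chirikov standard map below is only a stand-in, and all starting points and amplitudes are illustrative:

```python
import numpy as np

def action_spread(theta, J, k, n_steps):
    """Iterate the Chirikov standard map and track the action excursion."""
    J_min = J_max = J
    for _ in range(n_steps):
        J = J + k * np.sin(theta)            # kick in the action
        theta = (theta + J) % (2 * np.pi)    # twist by the new action
        J_min, J_max = min(J_min, J), max(J_max, J)
    return J_max - J_min

# Launch near the hyperbolic point and watch the layer broaden with k
for k in (0.2, 0.5, 1.0):
    spread = action_spread(theta=0.1, J=0.0, k=k, n_steps=200_000)
    print(f"k={k:4.1f}: action spread {spread:6.3f} "
          f"(pendulum resonance full width {4*np.sqrt(k):.3f})")
```

For an orbit launched near the separatrix, the action spread tracks the resonance width at small perturbation and grows beyond it as the stochastic layer broadens, the same diagnostic applied to the simple map in the abstract.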
NASA Astrophysics Data System (ADS)
Decca, R. S.; Fischbach, E.; Klimchitskaya, G. L.; Krause, D. E.; López, D.; Mostepanenko, V. M.
2003-12-01
We report new constraints on extra-dimensional models and other physics beyond the standard model based on measurements of the Casimir force between two dissimilar metals for separations in the range 0.2–1.2 μm. The Casimir force between a Au-coated sphere and a Cu-coated plate of a microelectromechanical torsional oscillator was measured statically with an absolute error of 0.3 pN. In addition, the Casimir pressure between two parallel plates was determined dynamically with an absolute error of ≈0.6 mPa. Within the limits of experimental and theoretical errors, the results are in agreement with a theory that takes into account the finite conductivity and roughness of the two metals. The level of agreement between experiment and theory was then used to set limits on the predictions of extra-dimensional physics and thermal quantum field theory. It is shown that two theoretical approaches to the thermal Casimir force which predict effects linear in temperature are ruled out by these experiments. Finally, constraints on Yukawa corrections to Newton's law of gravity are strengthened by more than an order of magnitude in the range 56–330 nm.
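The Yukawa corrections constrained here are conventionally written as a multiplicative modification of the Newtonian potential, with the experiment bounding the strength α as a function of the range λ:

```latex
V(r) = -\frac{G m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right)
```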
Parametric Characterization of TES Detectors Under DC Bias
NASA Technical Reports Server (NTRS)
Chiao, Meng P.; Smith, Stephen James; Kilbourne, Caroline A.; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Ewin, Audrey J.;
2016-01-01
The X-ray Integral Field Unit (X-IFU) on the European Space Agency's (ESA's) Athena mission will be the first high-resolution X-ray spectrometer in space using a large-format transition-edge sensor microcalorimeter array. Motivated by optimization of detector performance for X-IFU, we have conducted an extensive campaign of parametric characterization on transition-edge sensor (TES) detectors with nominal geometries and physical properties in order to establish sensitivity trends relative to magnetic field, dc bias on detectors, and operating temperature, and to improve our understanding of detector behavior relative to its fundamental properties such as thermal conductivity, heat capacity, and transition temperature. These results were used for validation of a simple linear detector model in which a small perturbation can be introduced to one or multiple parameters to estimate the error budget for X-IFU. We will show here results of our parametric characterization of TES detectors and briefly discuss the comparison with the TES model.
Validation and Error Characterization for the Global Precipitation Measurement
NASA Technical Reports Server (NTRS)
Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.
2003-01-01
The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration, assumption, or algorithm. The instrumentation and techniques of the Supersites will be discussed. The GPM core satellite, with its dual-frequency radar and conically scanning radiometer, will provide insight into precipitation drop-size distributions and potentially increased measurement capabilities of light rain and snowfall. The ground validation program will include instrumentation and techniques commensurate with these new measurement capabilities.
Improving Lidar Turbulence Estimates for Wind Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.
2016-10-06
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
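The machine-learning stage can be sketched with a scikit-learn regressor mapping lidar TI and atmospheric predictors to tower-measured TI. The features, synthetic error structure, and train/test split below are invented for illustration and are not the L-TERRA implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical features: lidar TI, mean wind speed, and a stability proxy.
rng = np.random.default_rng(5)
n = 2000
lidar_ti = rng.uniform(0.05, 0.25, n)
wind_speed = rng.uniform(4.0, 16.0, n)
stability = rng.uniform(-0.2, 0.2, n)          # e.g. an Obukhov-length proxy

# Synthetic "tower" TI: lidar TI with a stability-dependent error
tower_ti = lidar_ti * (1 - 0.4 * stability) + rng.normal(0, 0.005, n)

X = np.column_stack([lidar_ti, wind_speed, stability])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], tower_ti[:1500])

resid_before = np.abs(lidar_ti[1500:] - tower_ti[1500:]).mean()
resid_after = np.abs(model.predict(X[1500:]) - tower_ti[1500:]).mean()
print(f"mean TI error, raw lidar: {resid_before:.4f}  ML-corrected: {resid_after:.4f}")
```

The dependence of the correction on a large, site-representative training set is exactly the limitation that motivates the lidar-simulation alternative discussed in the abstract.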
Magnetic control of magnetohydrodynamic instabilities in tokamaks
Strait, Edward J.
2014-11-24
Externally applied, non-axisymmetric magnetic fields form the basis of several relatively simple and direct methods to control magnetohydrodynamic (MHD) instabilities in a tokamak, and most present and planned tokamaks now include a set of non-axisymmetric control coils for application of fields with low toroidal mode numbers. Non-axisymmetric applied fields are routinely used to compensate small asymmetries (δB/B ~ 10^-3 to 10^-4) of the nominally axisymmetric field, which otherwise can lead to instabilities through braking of plasma rotation and through direct stimulus of tearing modes or kink modes. This compensation may be feedback-controlled, based on the magnetic response of the plasma to the external fields. Non-axisymmetric fields are used for direct magnetic stabilization of the resistive wall mode, a kink instability with a growth rate slow enough that feedback control is practical. Saturated magnetic islands are also manipulated directly with non-axisymmetric fields, in order to unlock them from the wall and spin them to aid stabilization, or position them for suppression by localized current drive. Several recent scientific advances form the foundation of these developments in the control of instabilities. Most fundamental is the understanding that stable kink modes play a crucial role in the coupling of non-axisymmetric fields to the plasma, determining which field configurations couple most strongly, how the coupling depends on plasma conditions, and whether external asymmetries are amplified by the plasma. A major advance for the physics of high-beta plasmas (β = plasma pressure/magnetic field pressure) has been the understanding that drift-kinetic resonances can stabilize the resistive wall mode at pressures well above the ideal-MHD stability limit, but also that such discharges can be very sensitive to external asymmetries. The common physics of stable kink modes has brought significant unification to the topics of static error fields at low beta and resistive wall modes at high beta. These and other scientific advances, and their application to control of MHD instabilities, will be reviewed with emphasis on the most recent results and their applicability to ITER.
Identification method of laser gyro error model under changing physical field
NASA Astrophysics Data System (ADS)
Wang, Qingqing; Niu, Zhenzhong
2018-04-01
In this paper, the influence mechanisms of temperature, temperature change rate and temperature gradient on inertial devices are studied. A two-order model of the zero bias and a three-order model of the calibration factor of the laser gyro under temperature variation are derived. A calibration scheme for the temperature error is designed, and the experiment is carried out. Two methods, stepwise regression analysis and a BP neural network, are used to identify the parameters of the temperature error model, and the effectiveness of both methods is demonstrated by temperature error compensation.
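A minimal sketch of identifying such a polynomial temperature-error model by ordinary least squares (the coefficients, temperature range, and noise level are invented; the paper's stepwise regression and BP neural network are more elaborate identification schemes):

```python
import numpy as np

# Hypothetical calibration run: gyro zero-bias sampled while temperature ramps.
rng = np.random.default_rng(6)
T = np.linspace(-20, 60, 200)                      # deg C
bias_true = 0.02 + 3e-4 * T + 5e-6 * T**2          # deg/h, two-order model
bias_meas = bias_true + rng.normal(0, 0.002, T.size)

# Identify the two-order (quadratic) bias model by least squares
coeffs = np.polyfit(T, bias_meas, deg=2)
bias_fit = np.polyval(coeffs, T)

residual_rms = np.sqrt(np.mean((bias_meas - bias_fit) ** 2))
print("identified coefficients (c2, c1, c0):", coeffs)
print("post-compensation residual RMS:", residual_rms)
```

Once identified, the polynomial is evaluated at the measured temperature in real time and subtracted from the gyro output, which is the compensation step used to validate the model.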
Mistake proofing: changing designs to reduce error
Grout, J R
2006-01-01
Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Mistake proofing design changes should initially be effective in reducing harm, inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609
Springer, W H
1996-02-01
An important principle of accounting is that asset inventory needs to be correctly valued to ensure that the financial statements of the institution are accurate. Errors in recording the value of ending inventory in one fiscal year result in errors in the published financial statements for that year as well as for the subsequent fiscal year. Therefore, it is important that accurate physical counts be taken periodically. It is equally important that any system used to generate inventory valuation, reordering or management reports be based on consistently accurate on-hand balances. At the foundation of conducting an accurate physical count of an inventory is a comprehensive understanding of the process coupled with a written plan. This article presents a guideline to the physical count processes involved in a traditional double-count approach.
Comparing Zeeman qubits to hyperfine qubits in the context of the surface code: 174Yb+ and 171Yb+
NASA Astrophysics Data System (ADS)
Brown, Natalie C.; Brown, Kenneth R.
2018-05-01
Many systems used for quantum computing possess additional states beyond those defining the qubit. Leakage out of the qubit subspace must be considered when designing quantum error correction codes. Here we consider trapped ion qubits manipulated by Raman transitions. Zeeman qubits do not suffer from leakage errors but are sensitive to magnetic fields to first order. Hyperfine qubits can be encoded in clock states that are insensitive to magnetic fields to first order, but spontaneous scattering during the Raman transition can lead to leakage. Here we compare a Zeeman qubit (174Yb+) to a hyperfine qubit (171Yb+) in the context of the surface code. We find that the number of physical qubits required to reach a specific logical qubit error can be reduced by using 174Yb+ if the magnetic field can be stabilized with fluctuations smaller than 10 μG.
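The qubit-count comparison can be illustrated with the standard surface code scaling heuristic, logical error ≈ A (p/p_th)^((d+1)/2), with 2d²−1 physical qubits per logical qubit in the rotated layout. The constants, error rates, and the folding of leakage into a single effective physical error rate below are all simplifying assumptions, not the paper's model:

```python
def distance_for_target(p_phys, p_target, p_th=0.01, A=0.1):
    """Smallest odd code distance d with A*(p/p_th)**((d+1)/2) <= p_target.
    Standard-approximation scaling; A and p_th are assumed constants."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits(d):
    return 2 * d * d - 1   # rotated surface code: d^2 data + d^2 - 1 ancilla

# Hypothetical comparison: a leakage-free qubit at p = 2e-3 versus one whose
# scattering-induced leakage raises the effective physical error rate.
for label, p in [("Zeeman-like, p=2e-3", 2e-3), ("hyperfine-like, p=4e-3", 4e-3)]:
    d = distance_for_target(p, p_target=1e-12)
    print(f"{label}: distance {d}, physical qubits {physical_qubits(d)}")
```

Even a factor-of-two difference in effective physical error rate translates into a substantially larger code distance, and hence qubit count, at a fixed logical error target, which is the trade-off the paper quantifies against magnetic field stability.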
HMI Measured Doppler Velocity Contamination from the SDO Orbit Velocity
NASA Astrophysics Data System (ADS)
Scherrer, Phil; HMI Team
2016-10-01
The Problem: The SDO satellite is in an inclined geosynchronous orbit which allows uninterrupted views of the Sun nearly 98% of the time. This orbit has a velocity of about 3,500 m/s, with the solar line-of-sight component varying with time of day and time of year. Due to remaining calibration errors in the wavelength filters, the orbit velocity leaks into the line-of-sight solar velocity and magnetic field measurements. Since the same model of the filter is used in the Milne-Eddington inversions used to generate the vector magnetic field data, the orbit velocity also contaminates the vector magnetic products. These errors contribute 12 h and 24 h variations in most HMI data products and are known as the 24-hour problem. Early in the mission we made a patch to the calibration that corrected the disk mean velocity. The resulting LOS velocity has been used for helioseismology with no apparent problems. The velocity signal has about a 1% scale error that varies with time of day and with velocity, i.e., it is non-linear for large velocities. This causes leaks into the LOS field (which is simply the difference between the velocity measured in LCP and RCP, rescaled for the Zeeman splitting). This poster reviews the measurement process, shows examples of the problem, and describes recent work at resolving the issues. Since the errors are in the filter characterization, it makes most sense to work first on the LOS data products since they, unlike the vector products, are directly and simply related to the filter profile without assumptions on the solar atmosphere, filling factors, etc. This poster is therefore strictly limited to better understanding the filter profiles as they vary across the field, with time of day, and with time of year, resulting in velocity errors of up to a percent and LOS field estimates with errors of up to a few percent (relative to the standard LOS magnetograph method based on measuring the differences in wavelength of the line centroids in LCP and RCP light). We expect that when better filter profiles are available it will be possible to generate improved vector field data products as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Suh, T; Park, S
2015-06-15
Purpose: The dose-related effects of patient setup errors on biophysical indices were evaluated for conventional wedge (CW) and field-in-field (FIF) whole breast irradiation techniques. Methods: The treatment plans for 10 patients receiving whole left breast irradiation were retrospectively selected. Radiobiological and physical effects caused by dose variations were evaluated by shifting the isocenters and gantry angles of the treatment plans. Dose-volume histograms of the planning target volume (PTV), heart, and lungs were generated, and conformity index (CI), homogeneity index (HI), tumor control probability (TCP), and normal tissue complication probability (NTCP) were determined. Results: For the isocenter shift plan in the posterior direction, the D95 of the PTV decreased by approximately 15%, and the TCP of the PTV decreased by approximately 50% for the FIF technique and by 40% for CW; however, the NTCPs of the lungs and heart increased by about 13% and 1%, respectively, for both techniques. Increasing the gantry angle decreased the TCPs of the PTV by 24.4% (CW) and by 34% (FIF). The NTCPs for the two techniques differed by only 3%. The CIs and HIs for CW were much higher than those for FIF in all cases, a significant difference between the two techniques (p<0.01). According to our results, however, FIF responded more sensitively to setup errors than CW in biophysical terms. Conclusions: Radiobiologically based analysis can detect significant dosimetric errors and can thus provide a practical patient quality assurance method that accounts for both radiobiological and physical effects.
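NTCP values like those reported here are commonly computed with the Lyman-Kutcher-Burman model, a probit function of the generalized equivalent uniform dose (gEUD). The sketch below uses illustrative lung DVH bins and roughly literature-like parameter values for TD50 and m, not the study's data:

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: probit response in the gEUD."""
    return norm.cdf((eud - td50) / (m * td50))

def gEUD(doses, volumes, a):
    """Generalized equivalent uniform dose from a dose-volume histogram."""
    v = np.asarray(volumes) / np.sum(volumes)
    return float((v @ np.asarray(doses) ** a) ** (1 / a))

# Hypothetical lung DVH before and after a posterior isocenter shift
doses = np.array([5.0, 15.0, 25.0, 35.0])        # Gy bin centres
vol_nominal = np.array([60, 25, 10, 5])          # relative volumes
vol_shifted = np.array([45, 25, 18, 12])         # shift pushes dose into lung

for label, v in [("nominal", vol_nominal), ("shifted", vol_shifted)]:
    eud = gEUD(doses, v, a=1.0)                  # a = 1: mean lung dose
    print(label, "lung NTCP:", round(lkb_ntcp(eud, td50=24.5, m=0.18), 3))
```

A setup shift that moves dose into the lung raises the gEUD and hence the NTCP, which is the mechanism behind the roughly 13% lung NTCP increases reported above.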
Test of Understanding of Vectors: A Reliable Multiple-Choice Vector Concept Test
ERIC Educational Resources Information Center
Barniol, Pablo; Zavala, Genaro
2014-01-01
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended…
Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise
NASA Technical Reports Server (NTRS)
Sedlak, J.; Hashmall, J.
1997-01-01
Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
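The exponentially-correlated noise model mentioned here is the first-order Gauss-Markov process; appended to the filter state, it has transition e^(-Δt/τ) and a process-noise variance chosen to hold the steady-state variance fixed. A simulation sketch (the time constant and amplitude are illustrative, not Compton Gamma Ray Observatory values):

```python
import numpy as np

# First-order Gauss-Markov (exponentially correlated) process:
#   b_{k+1} = exp(-dt/tau) * b_k + w_k,   w_k ~ N(0, q)
# Choosing q = sigma^2 * (1 - exp(-2*dt/tau)) keeps the steady-state
# variance equal to sigma^2.
rng = np.random.default_rng(7)
dt, tau, sigma = 1.0, 600.0, 50.0     # s, s, nT (illustrative values)
phi = np.exp(-dt / tau)
q = sigma**2 * (1 - phi**2)

n = 5000
b = np.zeros(n)
for k in range(n - 1):
    b[k + 1] = phi * b[k] + rng.normal(0, np.sqrt(q))

# The empirical autocorrelation decays with the chosen time constant
lag = 600
rho = np.corrcoef(b[:-lag], b[lag:])[0, 1]
print(f"std: {b.std():.1f} nT (target {sigma}),  corr at {lag} s lag: "
      f"{rho:.2f} (target {np.exp(-1):.2f})")
```

In the filter, this state is simply appended to the state vector with transition phi and process noise q, letting the estimator track the slowly varying field model error instead of forcing it into the rate biases.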
The GEnes in Myopia (GEM) study in understanding the aetiology of refractive errors.
Baird, Paul N; Schäche, Maria; Dirani, Mohamed
2010-11-01
Refractive errors represent the leading cause of correctable vision impairment and blindness in the world with an estimated 2 billion people affected. Refractive error refers to a group of refractive conditions including hypermetropia, myopia, astigmatism and presbyopia but relatively little is known about their aetiology. In order to explore the potential role of genetic determinants in refractive error the "GEnes in Myopia (GEM) study" was established in 2004. The findings that have resulted from this study have not only provided greater insight into the role of genes and other factors involved in myopia but have also gone some way to uncovering the aetiology of other refractive errors. This review will describe some of the major findings of the GEM study and their relative contribution to the literature, illuminate where the deficiencies are in our understanding of the development of refractive errors and how we will advance this field in the future.
The Incorporation and Initialization of Cloud Water/ice in AN Operational Forecast Model
NASA Astrophysics Data System (ADS)
Zhao, Qingyun
Quantitative precipitation forecasts have been one of the weakest aspects of numerical weather prediction models. Theoretical studies show that the errors in precipitation calculation can arise from three sources: errors in the large-scale forecasts of primary variables, errors in the crude treatment of condensation/evaporation and precipitation processes, and errors in the model initial conditions. A new precipitation parameterization scheme has been developed to investigate the forecast value of improved precipitation physics via the introduction of cloud water and cloud ice into a numerical prediction model. The main feature of this scheme is the explicit calculation of cloud water and cloud ice in both the convective and stratiform precipitation parameterization. This scheme has been applied to the eta model at the National Meteorological Center. Four extensive tests have been performed. The statistical results showed a significant improvement in the model precipitation forecasts. Diagnostic studies suggest that the inclusion of cloud ice is important in transferring water vapor to precipitation and in the enhancement of latent heat release; the latter subsequently affects the vertical motion field significantly. Since three-dimensional cloud data is absent from the analysis/assimilation system for most numerical models, a method has been proposed to incorporate observed precipitation and nephanalysis data into the data assimilation system to obtain the initial cloud field for the eta model. In this scheme, the initial moisture and vertical motion fields are also improved at the same time as cloud initialization. The physical initialization is performed in a dynamical initialization framework that uses the Newtonian dynamical relaxation method to nudge the model's wind and mass fields toward analyses during a 12-hour data assimilation period. Results from a case study showed that a realistic cloud field was produced by this method at the end of the data assimilation period. Precipitation forecasts have been significantly improved as a result of the improved initial cloud, moisture and vertical motion fields.
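The Newtonian dynamical relaxation (nudging) used in that assimilation framework augments the model tendency with a term relaxing the state toward the analysis at rate 1/τ. A scalar toy sketch (the tendency, time step, and relaxation time are illustrative):

```python
def nudged_step(x, x_analysis, dt, tau, tendency):
    """One step of Newtonian relaxation: the model tendency is augmented
    with a term pulling the state toward the analysis value."""
    return x + dt * (tendency(x) + (x_analysis - x) / tau)

# Toy scalar "model": weakly decaying state nudged toward an analysis of 1.0
tendency = lambda x: -1e-4 * x     # 1/s, slow physical decay
x, dt, tau = 0.0, 60.0, 3600.0     # initial state; 60 s step; 1 h relaxation
for _ in range(720):               # 12-hour assimilation window
    x = nudged_step(x, 1.0, dt, tau, tendency)
print("state after assimilation:", round(x, 3))
```

The state settles where the physical tendency balances the relaxation term, so the nudging strength 1/τ controls how hard the analyses constrain the wind and mass fields during the 12-hour assimilation period.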
Not Normal: the uncertainties of scientific measurements
NASA Astrophysics Data System (ADS)
Bailey, David C.
2017-01-01
Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student's t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply.
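The near-Cauchy Student's t behaviour reported here is easy to quantify: the probability of a 5σ disagreement under heavy-tailed t-distributions exceeds the Gaussian value by up to roughly five orders of magnitude, matching the abstract's observation. A quick check with SciPy:

```python
from scipy import stats

# Probability of a |z| > 5 disagreement under a Normal versus a
# heavy-tailed Student's t (few degrees of freedom, near-Cauchy at df=1).
p_normal = 2 * stats.norm.sf(5)
for df in (1, 2, 5, 10):
    p_t = 2 * stats.t.sf(5, df)
    print(f"t (df={df:2d}): P(|z|>5) = {p_t:.2e}  ({p_t / p_normal:.0e} x Normal)")
print(f"Normal:     P(|z|>5) = {p_normal:.2e}")
```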
Student understanding of the direction of the magnetic force on a charged particle
NASA Astrophysics Data System (ADS)
Scaife, Thomas M.; Heckler, Andrew F.
2010-08-01
We study student understanding of the direction of the magnetic force experienced by a charged particle moving through a homogeneous magnetic field in both the magnetic pole and field line representations of the magnetic field. In five studies, we administer a series of simple questions in either written or interview format. Our results indicate that although students begin at the same low level of performance in both representations, they answer correctly more often in the field line representation than in the pole representation after instruction. This difference is due in part to more students believing that charges are attracted to magnetic poles than believing that charges are pushed along magnetic field lines. Although traditional instruction is fairly effective in teaching students to answer correctly up to a few weeks following instruction, especially for the field line representation, some students revert to their initial misconceptions several months after instruction. The responses reveal persistent and largely random sign errors in the direction of the force. These sign errors are largely nonsystematic, stemming from confusion about the direction of the magnetic field, from the execution and choice of the right-hand rule, and from a lack of recognition of the anticommutativity of the cross product.
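The sign structure at issue is that of F = qv × B; a short NumPy check makes both the right-hand rule and the anticommutativity explicit (the charge, velocity, and field values are arbitrary):

```python
import numpy as np

q = 1.602e-19                     # C, proton charge
v = np.array([1e5, 0.0, 0.0])     # m/s, velocity along +x
B = np.array([0.0, 0.0, 0.5])     # T, field along +z

F = q * np.cross(v, B)            # right-hand rule: +x cross +z -> -y
print("F =", F, "N")

# The cross product is anticommutative: swapping the operands flips the
# sign, which is exactly the error produced by a misapplied right-hand rule.
print("v x B =", np.cross(v, B))
print("B x v =", np.cross(B, v))
```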
Materials used to simulate physical properties of human skin.
Dąbrowska, A K; Rotaru, G-M; Derler, S; Spano, F; Camenzind, M; Annaheim, S; Stämpfli, R; Schmid, M; Rossi, R M
2016-02-01
For many applications in research, material development and testing, physical skin models are preferable to the use of human skin, because more reliable and reproducible results can be obtained. This article gives an overview of materials applied to model physical properties of human skin to encourage multidisciplinary approaches for more realistic testing and improved understanding of skin-material interactions. The literature databases Web of Science, PubMed and Google Scholar were searched using the terms 'skin model', 'skin phantom', 'skin equivalent', 'synthetic skin', 'skin substitute', 'artificial skin', 'skin replica', and 'skin model substrate.' Articles addressing material developments or measurements that include the replication of skin properties or behaviour were analysed. It was found that the most common materials used to simulate skin are liquid suspensions, gelatinous substances, elastomers, epoxy resins, metals and textiles. Nano- and micro-fillers can be incorporated in the skin models to tune their physical properties. While numerous physical skin models have been reported, most developments are research field-specific and based on trial-and-error methods. As the complexity of advanced measurement techniques increases, new interdisciplinary approaches are needed in future to achieve refined models which realistically simulate multiple properties of human skin.
NASA Astrophysics Data System (ADS)
Zulfikar, Aldi; Girsang, Denni Yulius; Saepuzaman, Duden; Samsudin, Achmad
2017-05-01
Conceptual understanding is one of the most important aspects of learning physics because it allows students to grasp the principles behind observed phenomena. An innovative method is needed to strengthen and enhance students' conceptual understanding, especially of abstract topics such as the magnetic field. For this reason, a worksheet and exploration sheet based on PDEODE*E (Predict, Discuss, Explain, Observe, Discuss, Explore, and Explain) that use the Gauss Meter application as the smartphone technology were designed to address the problem. The physics subject covered in this research is the magnetic field strength in different media. The research was conducted with the aim of determining how effectively smartphone-technology-based PDEODE*E could be implemented as a physics learning strategy. The results show that students improved their conceptual understanding, as evidenced by the conclusions they constructed during the learning process. Based on this result, PDEODE*E could become a solution for strengthening students' conceptual understanding of physics subjects, especially those that require abstract thinking. The results also show that smartphone applications can support physics learning in the classroom, such as the Gauss Meter used in this research to measure the magnetic field, a Light Meter for the concept of light, and a Harmonicity Meter for sound waves.
Optimizing Introductory Physics for the Life Sciences: Placing Physics in Biological Context
NASA Astrophysics Data System (ADS)
Crouch, Catherine
2014-03-01
Physics is a critical foundation for today's life sciences and medicine. However, the physics content and ways of thinking identified by life scientists as most important for their fields are often not taught, or underemphasized, in traditional introductory physics courses. Furthermore, such courses rarely give students practice using physics to understand living systems in a substantial way. Consequently, students are unlikely to recognize the value of physics to their chosen fields, or to develop facility in applying physics to biological systems. At Swarthmore, as at several other institutions engaged in reforming this course, we have reorganized the introductory course for life science students around touchstone biological examples, in which fundamental physics contributes significantly to understanding biological phenomena or research techniques, in order to make explicit the value of physics to the life sciences. We have also focused on the physics topics and approaches most relevant to biology while seeking to develop rigorous qualitative reasoning and quantitative problem solving skills, using established pedagogical best practices. Each unit is motivated by and culminates with students analyzing one or more touchstone examples. For example, in the second semester we emphasize electric potential and potential difference more than electric field, and start from students' typically superficial understanding of the cell membrane potential and of electrical interactions in biochemistry to help them develop a more sophisticated understanding of electric forces, field, and potential, including in the salt water environment of life. Other second semester touchstones include optics of vision and microscopes, circuit models for neural signaling, and magnetotactic bacteria. When possible, we have adapted existing research-based curricular materials to support these examples. This talk will describe the design and development process for this course, give examples of materials, and present initial assessment data evaluating both content learning and student attitudes.
Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution and Eruption
NASA Astrophysics Data System (ADS)
Leake, J. E.; Linton, M.; Schuck, P. W.
2017-12-01
Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the recent development of coronal models that are "data-driven" at the photosphere. Using magnetohydrodynamic simulations of active region formation and our recently created validation framework, we investigate the sources of error in data-driven models that use surface measurements of the magnetic field, and derived MHD quantities, to model the coronal magnetic field. The primary sources of error in these studies are the temporal and spatial resolution of the surface measurements. We will discuss the implications of these studies for accurately modeling the buildup and release of coronal magnetic energy based on photospheric magnetic field observations.
Combined PEST and Trial-Error approach to improve APEX calibration
USDA-ARS?s Scientific Manuscript database
The Agricultural Policy Environmental eXtender (APEX), a physically-based hydrologic model that simulates management impacts on the environment for small watersheds, requires improved understanding of the input parameters for improved simulations. However, most previously published studies used the ...
NASA Astrophysics Data System (ADS)
Gebregiorgis, A. S.; Peters-Lidard, C. D.; Tian, Y.; Hossain, F.
2011-12-01
Hydrologic modeling has benefited from operational production of high-resolution satellite rainfall products. Global coverage, near-real-time availability, and fine spatial and temporal sampling resolutions have advanced the application of physically based semi-distributed and distributed hydrologic models to a wide range of environmental decision-making processes. Despite these successes, uncertainties arising from the indirect nature of satellite rainfall estimates and from the hydrologic models themselves remain a challenge for making meaningful predictions. This study breaks down the total satellite rainfall error into three independent components (hit bias, missed precipitation, and false alarms), characterizes them as functions of land use and land cover (LULC), and traces the sources of simulated soil moisture and runoff errors in a physically based distributed hydrologic model. We asked: in what way do the three independent components of the total bias affect the estimation of soil moisture and runoff in physically based hydrologic models? To answer this question, we systematically characterized and decomposed the total satellite rainfall error as a function of land use and land cover in the Mississippi basin. This helps identify the major sources of soil moisture and runoff errors in hydrologic model simulations and traces that information back to algorithm development and sensor type, which ultimately supports algorithm improvement, application, and data assimilation for GPM. For forest, woodland, and human land use systems, the soil moisture error was dictated mainly by the total bias for the 3B42-RT, CMORPH, and PERSIANN products. Runoff error, by contrast, was dominated more by the hit bias than by the total bias. The difference arises from missed precipitation, a major contributor to the total bias in both the summer and winter seasons: missed precipitation, most likely light rain and rain over snow cover, strongly affects soil moisture but is less capable of producing runoff, leaving runoff dependent on the hit bias alone.
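The error decomposition used above can be made concrete with a short sketch. The rain/no-rain threshold and the sign convention (total bias = hit bias - missed precipitation + false alarms) follow common practice in satellite rainfall validation; they are assumptions for illustration, not details taken from this study.

    import numpy as np

    def decompose_rainfall_error(sat, ref, thresh=0.1):
        # Split the total satellite rainfall bias into hit bias, missed
        # precipitation, and false alarms. `sat` and `ref` are matched
        # accumulations; `thresh` is an assumed rain/no-rain threshold.
        sat, ref = np.asarray(sat, float), np.asarray(ref, float)
        hit = (sat >= thresh) & (ref >= thresh)
        miss = (sat < thresh) & (ref >= thresh)
        false = (sat >= thresh) & (ref < thresh)
        hit_bias = np.sum(sat[hit] - ref[hit])   # error where both detect rain
        missed = np.sum(ref[miss])               # rain the satellite never saw
        false_alarm = np.sum(sat[false])         # rain reported but not present
        total_bias = hit_bias - missed + false_alarm
        return hit_bias, missed, false_alarm, total_bias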
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
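A minimal sketch of the Monte Carlo parameter variation described above, assuming a generic unfold routine and per-channel one-sigma fractional errors; both are placeholders, since the actual Dante unfold algorithm and calibration uncertainties are not given in the abstract.

    import numpy as np

    rng = np.random.default_rng(0)

    def mc_flux_uncertainty(voltages, frac_sigma, unfold, n_trials=1000):
        # One thousand trial voltage sets, each channel perturbed by its
        # one-sigma Gaussian error (calibration and unfold combined), are
        # pushed through the unfold to build a distribution of fluxes.
        fluxes = []
        for _ in range(n_trials):
            trial = voltages * (1.0 + frac_sigma * rng.standard_normal(voltages.size))
            fluxes.append(unfold(trial))
        fluxes = np.asarray(fluxes)
        return fluxes.mean(), fluxes.std(ddof=1)

    # toy stand-in for the real unfold: flux proportional to summed voltages
    toy_unfold = lambda v: v.sum()
    v0 = np.ones(18)                # 18 Dante channels
    sig = np.full(18, 0.05)         # assumed 5% one-sigma error per channel
    print(mc_flux_uncertainty(v0, sig, toy_unfold))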
Uncertainty Analysis Technique for OMEGA Dante Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M J; Widmann, K; Sorce, C
2010-05-07
The Dante is an 18 channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g. hohlraums, etc.) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte-Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
ERIC Educational Resources Information Center
Newman, Richard; van der Ventel, Brandon; Hanekom, Crischelle
2017-01-01
Probing university students' understanding of direct-current (DC) resistive circuits is still a field of active physics education research. We report here on a study we conducted of this understanding, where the cohort consisted of students in a large-enrollment first-year physics module. This is a non-calculus based physics module for students in…
Statistical physics of hard combinatorial optimization: Vertex cover problem
NASA Astrophysics Data System (ADS)
Zhao, Jin-Hua; Zhou, Hai-Jun
2014-07-01
Typical-case computational complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computational complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physics methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. An unfamiliar reader should be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to solving other optimization problems.
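To make the underlying combinatorial problem concrete, here is the textbook greedy 2-approximation for vertex cover on an Erdős–Rényi test instance; this is only a baseline to fix ideas, not the message-passing algorithms the paper reviews.

    import random

    def greedy_vertex_cover(edges):
        # classic 2-approximation: repeatedly take an uncovered edge
        # and add both of its endpoints to the cover
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    random.seed(1)
    n, p = 100, 0.05    # random-graph parameters of the kind used in
                        # typical-case analyses (illustrative values)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if random.random() < p]
    print(len(greedy_vertex_cover(edges)), "vertices cover", len(edges), "edges")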
Thomas, D C; Bowman, J D; Jiang, L; Jiang, F; Peters, J M
1999-10-01
Case-control data on childhood leukemia in Los Angeles County were reanalyzed with residential magnetic fields predicted from the wiring configurations of nearby transmission and distribution lines. As described in a companion paper, the 24-h means of the magnetic field's magnitude in subjects' homes were predicted by a physically based regression model that had been fitted to 24-h measurements and wiring data. In addition, magnetic field exposures were adjusted for the most likely form of exposure assessment errors: classic errors for the 24-h measurements and Berkson errors for the predictions from wire configurations. Although the measured fields had no association with childhood leukemia (P for trend=.88), the risks were significant for predicted magnetic fields above 1.25 mG (odds ratio=2.00, 95% confidence interval=1.03-3.89), and a significant dose-response was seen (P for trend=.02). When exposures were determined by a combination of predictions and measurements that corrects for errors, the odds ratio (odds ratio=2.19, 95% confidence interval=1.12-4.31) and the trend (P=.007) showed somewhat greater significance. These findings support the hypothesis that magnetic fields from electrical lines are causally related to childhood leukemia but that this association has been inconsistent among epidemiologic studies due to different types of exposure assessment error. In these data, the leukemia risks from a child's residential magnetic field exposure appear to be better assessed by wire configurations than by 24-h area measurements. However, the predicted fields only partially account for the effect of the Wertheimer-Leeper wire code in a multivariate analysis and do not completely explain why these wire codes have been so often associated with childhood leukemia. The most plausible explanation for our findings is that the causal factor is another magnetic field exposure metric correlated to both wire code and the field's time-averaged magnitude. Copyright 1999 Wiley-Liss, Inc.
ERIC Educational Resources Information Center
Hills, Laura A.
2006-01-01
This paper draws on data from a year-long ethnographic study of a group of 12- to 13-year-old girls that explored the processes through which they negotiated gendered physicality within the context of physical education. Bourdieu's concepts of habitus and social fields and McNay's extension of his work underpin a discussion of three contexts where…
Thirty Years of Improving the NCEP Global Forecast System
NASA Astrophysics Data System (ADS)
White, G. H.; Manikin, G.; Yang, F.
2014-12-01
Current eight-day forecasts by the NCEP Global Forecast System are as accurate as five-day forecasts were 30 years ago. This revolution in weather forecasting reflects increases in computer power; improvements in the assimilation of observations, especially satellite data; improvements in model physics and in the observations themselves; and international cooperation and competition. One important component has been, and remains, the diagnosis, evaluation, and reduction of systematic errors. The effect of proposed improvements to the GFS on systematic errors is one component of the thorough testing of such improvements by the Global Climate and Weather Modeling Branch. Examples of reductions in systematic errors in zonal mean temperatures, winds, and other fields will be presented. One challenge in evaluating systematic errors is uncertainty about what reality is. Model initial states can be regarded as the best overall depiction of the atmosphere, but they can be misleading in areas with few observations or for fields that are not well observed, such as humidity or precipitation over the oceans. Verification of model physics is particularly difficult. The Environmental Modeling Center emphasizes the evaluation of systematic biases against observations. Recently EMC has placed greater emphasis on synoptic evaluation and on precipitation, 2-meter temperatures and dew points, and 10-meter winds. A weekly EMC map discussion reviews the performance of many models over the United States and has helped diagnose and alleviate significant systematic errors in the GFS, including a near-surface summertime evening cold wet bias over the eastern US and a multi-week period when the GFS persistently developed bogus tropical storms off Central America. The GFS exhibits a wet bias for light rain and a dry bias for moderate to heavy rain over the continental United States. Significant changes to the GFS are scheduled to be implemented in the fall of 2014, including higher resolution, improved physics, and improvements to the assimilation. These changes significantly improve the tropospheric flow and reduce a tropical upper-tropospheric warm bias. One important remaining error is the failure of the GFS to maintain deep convection over Indonesia and in the tropical west Pacific. This and other current systematic errors will be presented.
The GEOS Ozone Data Assimilation System: Specification of Error Statistics
NASA Technical Reports Server (NTRS)
Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.
2000-01-01
A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. The Total Ozone Mapping Spectrometer (TOMS) total ozone and the Solar Backscatter Ultraviolet (SBUV or SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme, and in particular the forecast and observation error covariance models, is given. A new global anisotropic horizontal forecast error correlation model accounts for a varying distribution of observations with latitude. Correlations are largest in the zonal direction in the tropics, where data are sparse. The forecast error variance model is proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation. The error covariance models are validated using chi-squared statistics. The analyzed ozone fields in winter 1992 are validated against independent observations from ozone sondes and the Halogen Occultation Experiment (HALOE). There is better than 10% agreement between mean HALOE and analysis fields between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS observed and forecast values is less than 4%. The global RMS difference between SBUV observed and analyzed ozone between 50 and 3 hPa is less than 15%.
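As a hedged sketch of what an anisotropic, latitude-dependent horizontal correlation model can look like: the Gaussian form, the 400 km base length scale, and the zonal stretching factor below are illustrative assumptions, not the parameters estimated by maximum likelihood in the paper.

    import numpy as np

    def corr(dlon_km, dlat_km, lat_deg, L0=400.0, k=2.0):
        # zonal length scale stretched near the equator (factor k),
        # relaxing to isotropy toward the poles
        stretch = 1.0 + (k - 1.0) * np.cos(np.radians(lat_deg)) ** 2
        r2 = (dlon_km / (L0 * stretch)) ** 2 + (dlat_km / L0) ** 2
        return np.exp(-0.5 * r2)

    # at the equator, correlation decays more slowly zonally than meridionally
    print(corr(300.0, 0.0, 0.0), corr(0.0, 300.0, 0.0))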
Stability and Control of Burning Tokamak Plasmas with Resistive Walls: Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, George; Brennan, Dylan; Cole, Andrew
This project is focused on theoretical and computational development for quantitative prediction of the stability and control of the equilibrium state evolution in toroidal burning plasmas, including its interaction with the surrounding resistive wall. The stability of long pulse burning plasmas is highly sensitive to the physics of resonant layers in the plasma, sources of momentum and flow, kinetic effects of energetic particles, and boundary conditions at the wall, including feedback control and error fields. In ITER in particular, the low toroidal flow equilibrium state, sustained primarily by energetic alpha particles from fusion reactions, will require the consideration of all of these key elements to predict quantitatively the stability and evolution. The principal investigators on this project have performed theoretical and computational analyses, guided by analytic modeling, to address this physics in realistic configurations. The overall goal has been to understand the key physics mechanisms that describe stable toroidal burning plasmas under active feedback control. Several relevant achievements have occurred during this project, leading to publications and invited conference presentations. In theoretical efforts, with the physics of the resonant layers, resistive wall, and toroidal momentum transport included, this study has extended from cylindrical resistive-plasma/resistive-wall models with feedback control to toroidal geometry with strong shaping to study mode coupling effects on the stability. These results have given insight into combined tearing and resistive wall mode behavior in simulations and experiment, while enabling a rapid exploration of plasma parameter space, to identify possible domains of interest for large plasma codes to investigate in more detail. Resonant field amplification and quasilinear torques in the presence of error fields and velocity shear have also been investigated. Here it was found, surprisingly, that the Maxwell torque on resonant layers in the plasma which exhibit finite real frequencies ω_r in their response is significantly different from the conventional results based on tearing layers with pure real growth (or damping) rates. This observation suggests the possibility that the torque on the tearing layers can lock the plasma rotation to this finite phase velocity, which may lead to locking in which velocity shear is maintained. More broadly, the sources of all torques driving flows in magnetic confinement experiments are not fully understood, and this theoretical work may shed light on puzzling experimental results. It was also found that real frequencies occur over a wide range of plasma response regimes, and are indeed the norm and not the exception, often leading to profound effects on the locking torque. Also, the influence of trapped energetic ions orbiting over the resistive plasma mode structure, a critical effect in burning plasmas, was investigated through analytic modeling and analysis of simulations and experiment. This effort has shown that energetic ions can drive the development of disruptive instabilities, but also damp and stabilize the instabilities, depending on the details of the shear in the equilibrium magnetic field. This finding could be critical to maintaining stable operations in burning plasmas.
In the most recent work, a series of simulations has been conducted to study the effects of differential flow and energetic ions on the onset of a disruptive instability under the most realistic conditions possible, with preexisting nonlinearly saturated benign instabilities. Throughout this work, the linear and quasilinear theory of resonant layers with differential flow between them, their interaction with the resistive wall and error fields, and energetic ion effects has been used to understand realistic simulations of mode onset and the experimental discharges they represent. These studies will continue to answer remaining questions about the relation between the theoretical results obtained in this project and observations of the onset and evolution of disruptive instabilities in experiment.
Towards Holography via Quantum Source-Channel Codes.
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-14
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
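The entropic quantity at the heart of this construction can be computed exactly for small systems. The sketch below builds a Gibbs state of a short transverse-field Ising chain and evaluates the conditional mutual information I(A:C|B) for a contiguous tripartition; the chain length, inverse temperature, and the critical field g = 1 are illustrative choices, not the parameters used in the paper.

    import numpy as np
    from functools import reduce

    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    I2 = np.eye(2)

    def op(single, site, n):
        # single-site operator embedded in the n-qubit Hilbert space
        return reduce(np.kron, [single if k == site else I2 for k in range(n)])

    def ising(n, g):
        # transverse-field Ising chain with open boundaries
        H = sum(-op(sz, i, n) @ op(sz, i + 1, n) for i in range(n - 1))
        return H + sum(-g * op(sx, i, n) for i in range(n))

    def partial_trace(rho, keep, n):
        # trace out every site not listed in `keep`
        m, t = n, rho.reshape([2] * (2 * n))
        for site in sorted(set(range(n)) - set(keep), reverse=True):
            t = np.trace(t, axis1=site, axis2=site + m)
            m -= 1
        return t.reshape(2 ** m, 2 ** m)

    def entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())

    N, g, beta = 6, 1.0, 1.0                    # illustrative values
    w, V = np.linalg.eigh(ising(N, g))
    p = np.exp(-beta * (w - w.min())); p /= p.sum()
    rho = (V * p) @ V.T                         # Gibbs state

    A, B, C = [0, 1], [2, 3], [4, 5]
    S = lambda sites: entropy(partial_trace(rho, sites, N))
    print("I(A:C|B) =", S(A + B) + S(B + C) - S(B) - S(A + B + C))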
Towards Holography via Quantum Source-Channel Codes
NASA Astrophysics Data System (ADS)
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-01
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, X. M.; Zhang, M.; Su, J. T., E-mail: xmzhang@nao.cas.cn
It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on discussing the effect of limited FOV or instrument sensitivity, our calculation shows that just measurement error alone can significantly influence the results of estimates of force-freeness, due to the fact that measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of a formula for estimating force-freeness, would result in wrong judgments of the force-freeness: a truly force-free field may be mistakenly estimated as being non-force-free and a truly non-force-free field may be estimated as being force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further away from being force-free than it currently appears to be.
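A toy numerical version of the effect described above, assuming the Maxwell-stress (net Lorentz force) force-freeness metric and horizontal errors ten times larger than vertical ones; the synthetic field and the noise levels are illustrative, not the paper's data.

    import numpy as np

    rng = np.random.default_rng(2)

    def force_freeness(bx, by, bz):
        # net Lorentz force components on the boundary, normalized by
        # the total Maxwell stress f0; all three ratios much less than 1
        # suggest a force-free field
        f0 = np.sum(bx**2 + by**2 + bz**2)
        fx = -2.0 * np.sum(bx * bz)
        fy = -2.0 * np.sum(by * bz)
        fz = np.sum(bx**2 + by**2 - bz**2)
        return np.abs([fx, fy, fz]) / f0

    n, sig = 256, 200.0                              # pixels, field scale (G)
    bz = rng.normal(0, sig, (n, n))
    bx = rng.normal(0, sig / np.sqrt(2), (n, n))     # balanced so fz ~ 0
    by = rng.normal(0, sig / np.sqrt(2), (n, n))
    noisy = (bx + rng.normal(0, 100.0, bx.shape),    # ~100 G horizontal error
             by + rng.normal(0, 100.0, by.shape),
             bz + rng.normal(0, 10.0, bz.shape))     # ~10 G vertical error
    print("clean:", force_freeness(bx, by, bz))      # near zero
    print("noisy:", force_freeness(*noisy))          # fz ratio inflated by noise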
NASA Astrophysics Data System (ADS)
Abe, M.; Prasannaa, V. S.; Das, B. P.
2018-03-01
Heavy polar diatomic molecules are currently among the most promising probes of fundamental physics. Constraining the electric dipole moment of the electron (eEDM), in order to explore physics beyond the standard model, requires a synergy of molecular experiment and theory. Recent advances in experiment in this field have motivated us to implement a finite-field coupled-cluster (FFCC) approach. This work has distinct advantages over the theoretical methods that we had used earlier in the analysis of eEDM searches. We used relativistic FFCC to calculate molecular properties of interest to eEDM experiments, that is, the effective electric field (E_eff) and the permanent electric dipole moment (PDM). We theoretically determine these quantities for the alkaline-earth monofluorides (AEMs), the mercury monohalides (HgX), and PbF. The latter two systems, as well as BaF from the AEMs, are of interest to eEDM searches. We also report calculations of these properties using a relativistic finite-field coupled-cluster approach with single, double, and partial triple excitations, which is considered the gold standard of electronic structure calculations. We also present a detailed error estimate, including errors that stem from our choice of basis sets and from higher-order correlation effects.
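The essence of the finite-field approach is that a first-order property is the derivative of the energy with respect to the perturbation strength, estimated by central differences. A minimal sketch on a toy two-level Hamiltonian; the 2x2 matrices are placeholders for a real relativistic molecular calculation.

    import numpy as np

    def finite_field_derivative(energy, lam=1e-4):
        # dE/d(lambda) at lambda = 0 from two field-on energies
        return (energy(lam) - energy(-lam)) / (2.0 * lam)

    H0 = np.diag([0.0, 1.0])          # toy unperturbed Hamiltonian
    P = np.array([[-0.2, 0.3],
                  [0.3, 0.4]])        # toy perturbation operator

    ground = lambda lam: np.linalg.eigvalsh(H0 + lam * P)[0]
    # Hellmann-Feynman check: the exact answer is <0|P|0> = -0.2
    print(finite_field_derivative(ground))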
Force Analysis and Energy Operation of Chaotic System of Permanent-Magnet Synchronous Motor
NASA Astrophysics Data System (ADS)
Qi, Guoyuan; Hu, Jianbing
2017-12-01
The disadvantage of a nondimensionalized model of a permanent-magnet synchronous motor (PMSM) is identified. The original PMSM model is transformed into a Kolmogorov system to aid dynamic force analysis. The vector field of the PMSM is analogous to a force field comprising four types of torque: inertial, internal, dissipative, and generalized external. From a feedback perspective, the error torque between the external torque and the dissipative torque is identified. The pitchfork bifurcation of the PMSM is analyzed. Four forms of energy are identified for the system: kinetic, potential, dissipative, and supplied. Physical interpretations of the decomposition of force and the energy exchange are given. The Casimir energy is the stored energy, and its rate of change is the error power between the dissipative energy and the energy supplied to the motor. Error torque and error power influence the different dynamic modes. The Hamiltonian energy and Casimir energy are compared to find the role of each in producing the dynamic modes. A supremum bound for the chaotic attractor is proposed using the error power and a Lagrange multiplier.
NASA Astrophysics Data System (ADS)
Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu
2018-04-01
The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.
Teaching physics and understanding infrared thermal imaging
NASA Astrophysics Data System (ADS)
Vollmer, Michael; Möllmann, Klaus-Peter
2017-08-01
Infrared thermal imaging is a very rapidly evolving field. The latest trends are small smartphone IR camera accessories, making infrared imaging a widespread and well-known consumer product. Applications range from medical diagnosis and building inspection to industrial predictive maintenance and visualization in the natural sciences. Infrared cameras allow not only qualitative imaging and visualization but also quantitative measurement of the surface temperatures of objects. On the one hand, they are a particularly suitable tool for teaching optics, radiation physics, and many selected topics in other fields of physics; on the other hand, there is an increasing need for engineers and physicists who understand these complex, state-of-the-art photonics systems. Therefore students must also learn and understand the physics underlying these systems.
Density scaling on n = 1 error field penetration in ohmically heated discharges in EAST
NASA Astrophysics Data System (ADS)
Wang, Hui-Hui; Sun, You-Wen; Shi, Tong-Hui; Zang, Qing; Liu, Yue-Qiang; Yang, Xu; Gu, Shuai; He, Kai-Yang; Gu, Xiang; Qian, Jin-Ping; Shen, Biao; Luo, Zheng-Ping; Chu, Nan; Jia, Man-Ni; Sheng, Zhi-Cai; Liu, Hai-Qing; Gong, Xian-Zu; Wan, Bao-Nian; Contributors, EAST
2018-05-01
Density scaling of error field penetration in EAST is investigated with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The density scalings of the error field penetration thresholds under two magnetic perturbation spectra are b_r ∝ n_e^0.5 and b_r ∝ n_e^0.6, where b_r is the error field and n_e is the line-averaged electron density. One difficulty in understanding the density scaling is that key parameters other than density that determine the field penetration process may also change when the plasma density changes; therefore, they should be determined from experiments. The theoretical estimate (b_r ∝ n_e^0.54 in the lower density region and b_r ∝ n_e^0.40 in the higher density region), using the density dependence of the viscosity diffusion time, electron temperature, and mode frequency measured in the experiments, is consistent with the observed scaling. One key point in reproducing the observed scaling in EAST is that the viscosity diffusion time estimated from the energy confinement time is almost constant, meaning that the plasma confinement lies in the saturated ohmic confinement regime rather than the linear neo-Alcator regime that caused the weak density dependence in previous theoretical studies.
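Extracting a scaling exponent of this kind from threshold data is a least-squares fit in log-log space; a sketch with made-up numbers, not EAST data.

    import numpy as np

    ne = np.array([1.0, 1.5, 2.0, 3.0, 4.5])   # line-averaged density (a.u.)
    br = np.array([0.9, 1.1, 1.3, 1.6, 2.0])   # penetration threshold (a.u.)
    alpha, log_c = np.polyfit(np.log(ne), np.log(br), 1)
    print(f"fitted exponent alpha = {alpha:.2f}")   # b_r ~ n_e^alpha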
NASA Astrophysics Data System (ADS)
Lang, K. A.; Petrie, G.
2014-12-01
Extended field-based summer courses provide an invaluable field experience for undergraduate majors in the geosciences. These courses often utilize the construction of geological maps and structural cross sections as the primary pedagogical tool to teach basic map orientation, rock identification, and structural interpretation. However, advances in the usability and ubiquity of Geographic Information Systems in these courses present new opportunities to evaluate student work. In particular, computer-based quantification of systematic mapping errors elucidates the factors influencing student success in the field. We present a case example from a mapping exercise conducted in a summer Field Geology course at a popular field location near Dillon, Montana. We use a computer algorithm to automatically compare the placement and attribution of unit contacts with spatial variables including topographic slope, aspect, bedding attitude, ground cover, and distance from starting location. We complement the analyses with anecdotal and survey data suggesting that both physical factors (e.g. steep topographic slope) and structural nuance (e.g. low-angle bedding) may dominate student frustration, particularly in courses with a high student-to-instructor ratio. We propose mechanisms to improve the student experience by allowing students to practice skills with orientation games and by broadening student background with tangential lessons (e.g. on colluvial transport processes). We also suggest low-cost ways to decrease the student-to-instructor ratio by supporting returning undergraduates from previous years or by staging mapping over smaller areas. Future applications of this analysis might include a rapid and objective system for evaluating student maps (including point data, such as attitude measurements) and quantification of temporal trends in student work as class sizes, pedagogical approaches, or environmental variables change. Long-term goals include understanding and characterizing stochasticity in geological mapping beyond the undergraduate classroom, and better quantifying uncertainty in published map products.
Tuning a climate model using nudging to reanalysis.
NASA Astrophysics Data System (ADS)
Cheedela, S. K.; Mapes, B. E.
2014-12-01
Tuning an atmospheric general circulation model involves the daunting task of adjusting non-observable parameters to shape the mean climate. These parameters arise from the need to describe unresolved flow through parametrizations. Tuning is usually done against a set of priorities, such as global mean temperature and net top-of-the-atmosphere radiation. These priorities are hard enough to reach, let alone reducing the models' systematic biases. The goal of the current study is to explore alternative ways to tune a climate model so as to reduce some systematic biases, for use in synergy with existing efforts. Nudging a climate model to a known state is a poor man's inverse of the tuning process described above. Our approach nudges the atmospheric model to state-of-the-art reanalysis fields, providing a state that is balanced with respect to the global mean temperature and winds. The tendencies derived from nudging are the negative of the errors from the physical parametrizations, since the errors from the dynamical core should be small. Patterns of nudging tendencies are compared to the patterns of the different physical parametrizations to diagnose the causes of certain biases in relation to the tuning parameters. This approach might also help in understanding compensating errors that arise from the tuning process. ECHAM6 is a comprehensive general circulation model, also used in the recent Coupled Model Intercomparison Project (CMIP5). The approach used to tune it and the effects of the parameters that control its mean climate are clearly documented, so it serves as a benchmark for our approach. Our planned experiments include nudging the ECHAM6 atmospheric model to the European Centre reanalysis (ERA-Interim) and to the reanalysis from the National Centers for Environmental Prediction (NCEP), and identifying the parameter choices that lead to systematic biases in its simulations. Of particular interest is reducing long-standing biases in the simulation of the Asian summer monsoon.
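A minimal sketch of the nudging (Newtonian relaxation) idea: the model state is relaxed toward a reference with a fixed timescale, and the time-mean nudging tendency diagnoses the negative of the model's parametrization error. The toy tendency, relaxation timescale, and reference value are assumptions for illustration.

    import numpy as np

    def nudged_step(x, x_ref, model_tendency, dt, tau=6.0):
        # one step of Newtonian relaxation toward x_ref with timescale tau;
        # the nudging tendency is returned so it can be accumulated and
        # compared against the parametrization tendencies
        nudge = (x_ref - x) / tau
        return x + dt * (model_tendency(x) + nudge), nudge

    # toy "physics" that relaxes to a biased equilibrium of 1.5 while the
    # "reanalysis" value is 1.0
    biased = lambda x: -0.1 * (x - 1.5)
    x, tends = 1.0, []
    for _ in range(500):
        x, t = nudged_step(x, 1.0, biased, dt=1.0)
        tends.append(t)
    print("mean nudging tendency:", np.mean(tends[-100:]))  # ~ -(model bias)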
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Some examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.
Factors Controlling Sediment Load in The Central Anatolia Region of Turkey: Ankara River Basin.
Duru, Umit; Wohl, Ellen; Ahmadi, Mehdi
2017-05-01
Better understanding of the factors controlling sediment load at a catchment scale can facilitate estimation of soil erosion and sediment transport rates. The research summarized here enhances understanding of correlations between potential control variables and suspended sediment loads. The Soil and Water Assessment Tool was used to simulate flow and sediment in the Ankara River basin. Multivariable regression analysis and principal component analysis were then performed between sediment load and the controlling variables. The physical variables were either derived directly from a Digital Elevation Model or from field maps, or computed using established equations. The mean observed sediment rate is 6697 ton/year and the mean sediment yield is 21 ton/y/km² at the gage. The Soil and Water Assessment Tool satisfactorily simulated the observed sediment load, with Nash-Sutcliffe efficiency, relative error, and coefficient of determination (R²) values of 0.81, -1.55, and 0.93, respectively, in the catchment. Parameter values from the physically based model were therefore applied to the multivariable regression analysis as well as the principal component analysis. The results indicate that stream flow, drainage area, and channel width explain most of the variability in sediment load among the catchments. The implication is that efficient siltation management practices in the catchment should focus on stream flow, drainage area, and channel width.
Factors Controlling Sediment Load in The Central Anatolia Region of Turkey: Ankara River Basin
NASA Astrophysics Data System (ADS)
Duru, Umit; Wohl, Ellen; Ahmadi, Mehdi
2017-05-01
Better understanding of the factors controlling sediment load at a catchment scale can facilitate estimation of soil erosion and sediment transport rates. The research summarized here enhances understanding of correlations between potential control variables and suspended sediment loads. The Soil and Water Assessment Tool was used to simulate flow and sediment in the Ankara River basin. Multivariable regression analysis and principal component analysis were then performed between sediment load and the controlling variables. The physical variables were either derived directly from a Digital Elevation Model or from field maps, or computed using established equations. The mean observed sediment rate is 6697 ton/year and the mean sediment yield is 21 ton/y/km² at the gage. The Soil and Water Assessment Tool satisfactorily simulated the observed sediment load, with Nash-Sutcliffe efficiency, relative error, and coefficient of determination (R²) values of 0.81, -1.55, and 0.93, respectively, in the catchment. Parameter values from the physically based model were therefore applied to the multivariable regression analysis as well as the principal component analysis. The results indicate that stream flow, drainage area, and channel width explain most of the variability in sediment load among the catchments. The implication is that efficient siltation management practices in the catchment should focus on stream flow, drainage area, and channel width.
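A sketch of the statistical step described above: multivariable regression of sediment load on standardized controls, plus a principal component analysis of those controls. The synthetic data are placeholders, not the Ankara basin values.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 40
    X = np.column_stack([rng.lognormal(2, 0.5, n),   # stream flow
                         rng.lognormal(4, 0.8, n),   # drainage area
                         rng.lognormal(1, 0.4, n)])  # channel width
    load = 5.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 5, n)

    Xs = (X - X.mean(0)) / X.std(0)                  # standardize controls
    A = np.column_stack([np.ones(n), Xs])
    coef, *_ = np.linalg.lstsq(A, load, rcond=None)
    print("standardized regression coefficients:", coef[1:])

    evals, evecs = np.linalg.eigh(np.cov(Xs.T))      # PCA via eigendecomposition
    print("variance explained by PC1:", evals[-1] / evals.sum())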
DOE Office of Scientific and Technical Information (OSTI.GOV)
Punjabi, Alkesh; Ali, Halima
2011-02-15
Any canonical transformation of Hamiltonian equations is symplectic, and any area-preserving transformation in 2D is a symplectomorphism. Based on these, a discrete symplectic map and its continuous symplectic analog are derived for forward magnetic field line trajectories in natural canonical coordinates. The unperturbed axisymmetric Hamiltonian for magnetic field lines is constructed from the experimental data in the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The equilibrium Hamiltonian is a highly accurate, analytic, and realistic representation of the magnetic geometry of the DIII-D. These symplectic mathematical maps are used to calculate the magnetic footprint on the inboard collector plate in the DIII-D. Internal statistical topological noise and field errors are irreducible and ubiquitous in magnetic confinement schemes for fusion. It is important to know the stochasticity and magnetic footprint from noise and error fields. The estimates of the spectrum and mode amplitudes of the spatial topological noise and magnetic errors in the DIII-D are used as the magnetic perturbation. The discrete and continuous symplectic maps are used to calculate the magnetic footprint on the inboard collector plate of the DIII-D by inverting the natural coordinates to physical coordinates. The combination of a highly accurate equilibrium generating function, natural canonical coordinates, symplecticity, and small step size together gives a very accurate calculation of the magnetic footprint. Radial variation of the magnetic perturbation and the response of the plasma to the perturbation are not included. The inboard footprint from noise and errors is dominated by the m=3, n=1 mode. The footprint is in the form of a toroidally winding helical strip. The width of the stochastic layer scales as the 1/2 power of the perturbation amplitude, and the area of the footprint scales as the first power of the amplitude. Physical parameters such as the toroidal angle, length, and poloidal angle covered before striking, and the safety factor, all have a fractal structure. The average field diffusion near the X-point for lines that strike and lines that do not strike differs by about three to four orders of magnitude. The magnetic footprint gives the maximal bounds on the size and heat flux density at the collector plate.
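As a hedged illustration of iterating an area-preserving map to follow perturbed trajectories, here is the Chirikov standard map, the simplest symplectic map of this family; it stands in for the DIII-D-specific map, which requires the experimental equilibrium generating function.

    import numpy as np

    def standard_map(theta, p, K, n_steps):
        # kick-drift form of the Chirikov standard map; K plays the role
        # of the perturbation amplitude
        out = np.empty((n_steps, 2))
        for i in range(n_steps):
            p = p + K * np.sin(theta)            # kick
            theta = (theta + p) % (2 * np.pi)    # drift
            out[i] = theta, p
        return out

    orbit = standard_map(theta=1.0, p=0.5, K=0.9, n_steps=5000)
    print("momentum excursion:", np.ptp(orbit[:, 1]))  # grows with K as the
                                                       # stochastic layer widens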
Effects Of Local Oscillator Errors On Digital Beamforming
2016-03-01
Flicker frequency modulation (FFM) noise occurs at frequencies closer to the carrier and is more difficult to measure [15]. Its source is attributed to a physical resonance mechanism of an oscillator or issues in controlling electronic components; some oscillators might not show FFM noise.
Arrows as anchors: An analysis of the material features of electric field vector arrows
NASA Astrophysics Data System (ADS)
Gire, Elizabeth; Price, Edward
2014-12-01
Representations in physics possess both physical and conceptual aspects that are fundamentally intertwined and can interact to support or hinder sense making and computation. We use distributed cognition and the theory of conceptual blending with material anchors to interpret the roles of conceptual and material features of representations in students' use of representations for computation. We focus on the vector-arrows representation of electric fields and describe this representation as a conceptual blend of electric field concepts, physical space, and the material features of the representation (i.e., the physical writing and the surface upon which it is drawn). In this representation, spatial extent (e.g., distance on paper) is used to represent both distances in coordinate space and magnitudes of electric field vectors. In conceptual blending theory, this conflation is described as a clash between the input spaces in the blend. We explore the benefits and drawbacks of this clash, as well as other features of this representation. This analysis is illustrated with examples from clinical problem-solving interviews with upper-division physics majors. We see that while these intermediate physics students make a variety of errors using this representation, they also use the geometric features of the representation to add electric field contributions and to organize the problem situation productively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.
Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore the dependence of the perturbation voltage's effect on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.
Evaluation of a new model of aeolian transport in the presence of vegetation
Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Miller, Mark E.; Vest, Kimberly; Draut, Amy E.
2013-01-01
Aeolian transport is an important characteristic of many arid and semiarid regions worldwide that affects dust emission and ecosystem processes. The purpose of this paper is to evaluate a recent model of aeolian transport in the presence of vegetation. This approach differs from previous models by accounting for how vegetation affects the distribution of shear velocity on the surface rather than merely calculating the average effect of vegetation on surface shear velocity or simply using empirical relationships. Vegetation, soil, and meteorological data at 65 field sites with measurements of horizontal aeolian flux were collected from the Western United States. Measured fluxes were tested against modeled values to evaluate model performance, to obtain a set of optimum model parameters, and to estimate the uncertainty in these parameters. The same field data were used to model horizontal aeolian flux using three other schemes. Our results show that the model can predict horizontal aeolian flux with an approximate relative error of 2.1 and that further empirical corrections can reduce the approximate relative error to 1.0. The level of error is within what would be expected given uncertainties in threshold shear velocity and wind speed at our sites. The model outperforms the alternative schemes both in terms of approximate relative error and the number of sites at which threshold shear velocity was exceeded. These results lend support to an understanding of the physics of aeolian transport in which (1) vegetation's impact on transport is dependent upon the distribution of vegetation rather than merely its average lateral cover and (2) vegetation impacts surface shear stress locally by depressing it in the immediate lee of plants rather than by changing the bulk surface's threshold shear velocity. Our results also suggest that threshold shear velocity is exceeded more than might be estimated by single measurements of threshold shear stress and roughness length commonly associated with vegetated surfaces, highlighting the variation of threshold shear velocity with space and time in real landscapes.
NASA Technical Reports Server (NTRS)
Sun, Jielun
1993-01-01
Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.
Understanding and manipulating the RF fields at high field MRI
Ibrahim, Tamer S.; Hue, YiK-Kiong; Tang, Lin
2015-01-01
This paper presents a complete overview of the electromagnetics (radiofrequency aspect) of MRI at low and high fields. Using analytical formulations, numerical modeling (computational electromagnetics), and ultrahigh field imaging experiments, the physics that impacts the electromagnetic quantities associated with MRI, namely (1) the transmit field, (2) receive field, and (3) total electromagnetic power absorption, is analyzed. The physical interpretation of the above-mentioned quantities is investigated by electromagnetic theory, to understand ‘What happens, in terms of electromagnetics, when operating at different static field strengths?’ Using experimental studies and numerical simulations, this paper also examines the physical and technological feasibilities by which all or any of these specified electromagnetic quantities can be manipulated through techniques such as B1 shimming (phased array excitation) and signal combination using a receive array in order to advance MRI at high field strengths. Pertinent to this subject and with highly coupled coils operating at 7 T, this paper also presents the first phantom work on B1 shimming without B1 measurements. PMID:19621335
Physics and Control of Locked Modes in the DIII-D Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volpe, Francesco
This Final Technical Report summarizes an investigation, carried out under the auspices of the DOE Early Career Award, of the physics and control of non-rotating magnetic islands (“locked modes”) in tokamak plasmas. Locked modes are one of the main causes of disruptions in present tokamaks, and could be an even bigger concern in ITER, due to its relatively high beta (favoring the formation of Neoclassical Tearing Mode islands) and low rotation (favoring locking). For these reasons, this research had the goal of studying and learning how to control locked modes in the DIII-D National Fusion Facility under ITER-relevant conditions of high pressure and low rotation. Major results included: the first full suppression of locked modes and avoidance of the associated disruptions; the demonstration of error field detection from the interaction between locked modes, applied rotating fields and intrinsic errors; the analysis of a vast database of disruptive locked modes, which led to criteria for disruption prediction and avoidance.
McClure, Kimberley A; McGuire, Katherine L; Chapan, Denis M
2018-05-07
Policy on officer-involved shootings is critically reviewed, and errors in applying scientific knowledge are identified. Identifying and evaluating the science most relevant to a field-based problem is challenging. Law enforcement administrators with a clear understanding of valid science and its application are in a better position to utilize scientific knowledge for the benefit of their organizations and officers. A framework is recommended for considering the validity of science and its application. Valid science emerges via hypothesis testing, replication, and extension, and is marked by peer review, known error rates, and general acceptance in its field of origin. Valid application of behavioral science requires an understanding of the methodology employed, the measures used, and the participants recruited, to determine whether the science is ready for application. Fostering a science-practitioner partnership and an organizational culture that embraces quality, empirically based policy, and practices improves science-to-practice translation. © 2018 American Academy of Forensic Sciences.
NASA Technical Reports Server (NTRS)
Chiu, Y. T.; Hilton, H. H.
1977-01-01
Exact closed-form solutions to the solar force-free magnetic-field boundary-value problem are obtained for constant alpha in Cartesian geometry by a Green's function approach. The uniqueness of the physical problem is discussed. Application of the exact results to practical solar magnetic-field calculations is free of series truncation errors and is at least as economical as the approximate methods currently in use. Results of some test cases are presented.
Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices
NASA Astrophysics Data System (ADS)
Ma, Bao-Feng; Jiang, Hong-Gang
2018-06-01
Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.
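The bound can be sketched directly from the geometry: an out-of-plane velocity w projects onto the image plane as a spurious in-plane component w*r/Z, where r is the off-axis distance and Z the camera standoff. The cap of 0.5 on the ratio of maximum out-of-plane to maximum swirling velocity below is an assumed illustrative value standing in for the physical upper limits derived in the paper.

    def perspective_error_bound(w_over_v, v_swirl, r, Z):
        # upper bound on the spurious in-plane velocity for a point a
        # distance r off the optical axis, camera a distance Z from the
        # laser sheet
        w_max = w_over_v * v_swirl          # capped out-of-plane velocity
        return w_max * r / Z

    # vortex with 10 m/s peak swirl, assumed cap w/v = 0.5, measured
    # 0.1 m off-axis with the camera 1 m from the sheet
    print(perspective_error_bound(0.5, 10.0, 0.1, 1.0), "m/s")  # 0.5 m/s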
Optical guidance vidicon test program
NASA Technical Reports Server (NTRS)
Eiseman, A. R.; Stanton, R. H.; Voge, C. C.
1976-01-01
A laboratory and field test program was conducted to quantify the optical navigation parameters of the Mariner vidicons. A scene simulator and a camera were designed and built for vidicon tests under a wide variety of conditions. Laboratory tests characterized error sources important to the optical navigation process and field tests verified star sensitivity and characterized comet optical guidance parameters. The equipment, tests and data reduction techniques used are described. Key test results are listed. A substantial increase in the understanding of the use of selenium vidicons as detectors for spacecraft optical guidance was achieved, indicating a reduction in residual offset errors by a factor of two to four to the single pixel level.
Breed, Greg A.; Severns, Paul M.
2015-01-01
Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small-animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small-animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
NFIRAOS in 2015: engineering for future integration of complex subsystems
NASA Astrophysics Data System (ADS)
Atwood, Jenny; Andersen, David; Byrnes, Peter; Densmore, Adam; Fitzsimmons, Joeleff; Herriot, Glen; Hill, Alexis
2016-07-01
The Narrow Field InfraRed Adaptive Optics System (NFIRAOS) will be the first-light facility Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). NFIRAOS will be able to host three science instruments that can take advantage of this high performance system. NRC Herzberg is leading the design effort for this critical TMT subsystem. As part of the final design phase of NFIRAOS, we have identified multiple subsystems to be sub-contracted to Canadian industry. The scope of work for each subcontract is guided by the NFIRAOS Work Breakdown Structure (WBS) and is divided into two phases: the completion of the final design and the fabrication, assembly and delivery of the final product. Integration of the subsystems at NRC will require a detailed understanding of the interfaces between the subsystems, and this work has begun by defining the interface physical characteristics, stability, local coordinate systems, and alignment features. In order to maintain our stringent performance requirements, the interface parameters for each subsystem are captured in multiple performance budgets, which allow a bottom-up error estimate. In this paper we discuss our approach for defining the interfaces in a consistent manner and present an example error budget that is influenced by multiple subsystems.
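Bottom-up error budgets of this kind are commonly rolled up by root-sum-square (RSS) combination of independent contributors; a schematic sketch (the contributor names and values are purely illustrative, not NFIRAOS numbers):

```python
import math

# Hypothetical interface stability contributions (nm RMS wavefront error)
contributors = {
    "bench flexure": 20.0,
    "subsystem A alignment drift": 15.0,
    "subsystem B thermal drift": 10.0,
}

# Independent error terms combine in quadrature (root-sum-square).
total = math.sqrt(sum(v**2 for v in contributors.values()))
print(f"RSS total: {total:.1f} nm RMS")  # ~26.9 nm for these values
```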
NASA Astrophysics Data System (ADS)
Schneider, C. A.; Aggett, G. R.; Hattendorf, M. J.
2007-12-01
Better information on evapotranspiration (ET) is essential to better understanding of consumptive use of water by crops. RTi is using NASA Earth-sun System research results and METRIC (Mapping ET at high Resolution with Internalized Calibration) to increase the repeatability and accuracy of consumptive use estimates. METRIC, an image-processing model for calculating ET as a residual of the surface energy balance, utilizes the thermal band on various satellite remote sensors. Calculating actual ET from satellites can avoid many of the assumptions driving other methods of calculating ET over a large area. Because it is physically based and does not rely on explicit knowledge of crop type in the field, a large potential source of error should be eliminated. This paper assesses sources of error in current operational estimates of ET for an area of the South Platte irrigated lands of Colorado, and benchmarks potential improvements in the accuracy of ET estimates gained using METRIC, as well as the processing efficiency of consumptive use demand for large irrigated lands. Examples highlighting how better water planning decisions and water management can be achieved via enhanced monitoring of the temporal and spatial relationships between water demand and water availability are provided.
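METRIC's residual approach rests on the surface energy balance; schematically (a standard form, not quoted from this paper):

\[
\lambda E = R_n - G - H,
\]

where \(R_n\) is the net radiation, \(G\) the soil heat flux, \(H\) the sensible heat flux, and \(\lambda E\) the latent heat flux from which ET follows; the satellite thermal band enters primarily through the estimation of \(H\).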
Physical and Mathematical Questions on Signal Processing in Multibase Phase Direction Finders
NASA Astrophysics Data System (ADS)
Denisov, V. P.; Dubinin, D. V.; Meshcheryakov, A. A.
2018-02-01
Questions of improving the accuracy of multiple-base phase direction finders by rejecting anomalously large errors in the process of resolving measurement ambiguities are considered. A physical basis is derived, and calculated relationships characterizing the efficiency of the proposed solutions are obtained. Results of a computer simulation of a three-base direction finder are analyzed, along with field measurements made with a three-base direction finder along near-ground paths.
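The ambiguity problem for a phase direction finder can be seen from the single-baseline relation (a textbook expression, included here for orientation):

\[
\Delta\varphi = \frac{2\pi d}{\lambda}\sin\theta + 2\pi k, \qquad k \in \mathbb{Z},
\]

where \(d\) is the baseline length, \(\theta\) the angle of arrival, and \(k\) the unknown integer ambiguity. Multiple baselines are used to resolve \(k\), and anomalously large bearing errors arise precisely when the ambiguity is resolved incorrectly.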
Information systems as a tool to improve legal metrology activities
NASA Astrophysics Data System (ADS)
Rodrigues Filho, B. A.; Soratto, A. N. R.; Gonçalves, R. F.
2016-07-01
This study explores the importance of information systems applied to legal metrology as a tool to improve the control of measuring instruments used in trade. The information system implemented in Brazil has also helped in understanding and appraising the control of measurements through the behavior of the errors and deviations of instruments used in trade, allowing resources to be allocated wisely and leading to more effective planning and control in the legal metrology field. A case study analyzing the fuel sector is carried out in order to show the conformity of fuel dispensers with maximum permissible errors. The statistics of the measurement errors of 167,310 fuel dispensers of gasoline, ethanol and diesel used in the field were analyzed, demonstrating the compliance of the fuel market in Brazil with the legal requirements.
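A toy sketch of the kind of conformity check described (the MPE value and measured errors are invented placeholders; the actual Brazilian regulatory limits are not reproduced here):

```python
# Toy conformity check of fuel-dispenser errors against a maximum
# permissible error (MPE). All values are illustrative only.
mpe = 0.5  # %, assumed MPE for illustration

measured_errors = [0.12, -0.33, 0.48, 0.61, -0.05]  # % error per dispenser

conforming = [e for e in measured_errors if abs(e) <= mpe]
rate = 100 * len(conforming) / len(measured_errors)
print(f"{rate:.0f}% of dispensers within ±{mpe}% MPE")
```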
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As the complexity of treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute to propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source-to-surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometric misses due to incorrect gantry, collimator, or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy and incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI: incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and the volume mistreated. These errors were short-lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction, which may occur due to an incorrect collimator angle or wedge orientation. For parallel-opposed 60° wedge fields, this error could be as high as 80% to a point off-axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth; electron energy, ~0.5 cm of depth coverage per MeV (mega-electron-volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off-axis in vivo dosimetry. To avoid a treatment-distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity-modulated radiation therapy (IMRT) errors according to the same criteria.
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
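The ~2%/cm SSD sensitivity quoted above is consistent with a simple inverse-square estimate; a minimal sketch (the nominal 100 cm SSD is an assumption, typical for LINACs but not stated in the abstract):

```python
# Dose varies roughly as (SSD_ref / SSD)^2 for a point-source model.
ssd_ref = 100.0  # cm, assumed nominal setup distance

def dose_error_pct(ssd_actual):
    """Percent dose error from an SSD setup error (inverse-square model)."""
    return 100 * ((ssd_ref / ssd_actual) ** 2 - 1)

for d in (99.0, 101.0, 105.0):
    print(f"SSD {d:.0f} cm -> {dose_error_pct(d):+.1f}% dose error")
# At 100 cm nominal, a 1 cm error gives roughly +/-2%, matching the abstract.
```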
Analysis of key technologies in geomagnetic navigation
NASA Astrophysics Data System (ADS)
Zhang, Xiaoming; Zhao, Yan
2008-10-01
Because of the high cost and error accumulation of high-precision Inertial Navigation Systems (INS) and the vulnerability of Global Navigation Satellite Systems (GNSS), geomagnetic navigation, a passive autonomous navigation method, is attracting renewed attention. The geomagnetic field is a natural spatial physical field, and is a function of position and time in near-Earth space. Navigation based on the geomagnetic field is being researched for a wide range of commercial and military applications. This paper presents the main features and the state of the art of the Geomagnetic Navigation System (GMNS). Geomagnetic field models and reference maps are described. Obtaining, modeling and updating accurate anomaly magnetic field information is an important step for high-precision geomagnetic navigation. In addition, the errors of geomagnetic measurement using strapdown magnetometers are analyzed. Precise geomagnetic data are obtained by means of magnetometer calibration and vehicle magnetic field compensation. From the measurement data and a reference map or model of the geomagnetic field, the vehicle's position and attitude can be obtained using a matching algorithm or a state-estimation method. Trends in geomagnetic navigation in the near future are discussed at the end of this paper.
How Students Combine Resources to Build Understanding of Complex Topics
ERIC Educational Resources Information Center
Richards, Alan J.
2013-01-01
The field of Physics Education Research (PER) seeks to investigate how students learn physics and how instructors can help students learn more effectively. The process by which learners create understanding about a complex physics concept is an active area of research. My study explores this process, using solar cells as the context. To understand…
Leptonic-decay-constant ratio f(K+)/f(π+) from lattice QCD with physical light quarks.
Bazavov, A; Bernard, C; DeTar, C; Foley, J; Freeman, W; Gottlieb, Steven; Heller, U M; Hetrick, J E; Kim, J; Laiho, J; Levkova, L; Lightman, M; Osborn, J; Qiu, S; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2013-04-26
A calculation of the ratio of leptonic decay constants f(K+)/f(π+) makes possible a precise determination of the ratio of Cabibbo-Kobayashi-Maskawa (CKM) matrix elements |V(us)|/|V(ud)| in the standard model, and places a stringent constraint on the scale of new physics that would lead to deviations from unitarity in the first row of the CKM matrix. We compute f(K+)/f(π+) numerically in unquenched lattice QCD using gauge-field ensembles recently generated that include four flavors of dynamical quarks: up, down, strange, and charm. We analyze data at four lattice spacings a ≈ 0.06, 0.09, 0.12, and 0.15 fm with simulated pion masses down to the physical value 135 MeV. We obtain f(K+)/f(π+) = 1.1947(26)(37), where the errors are statistical and total systematic, respectively. This is our first physics result from our N(f) = 2+1+1 ensembles, and the first calculation of f(K+)/f(π+) from lattice-QCD simulations at the physical point. Our result is the most precise lattice-QCD determination of f(K+)/f(π+), with an error comparable to the current world average. When combined with experimental measurements of the leptonic branching fractions, it leads to a precise determination of |V(us)|/|V(ud)| = 0.2309(9)(4) where the errors are theoretical and experimental, respectively.
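The connection to \(|V_{us}|/|V_{ud}|\) goes through the ratio of leptonic decay widths; in the standard treatment (not quoted from the paper, with radiative corrections abbreviated as \(\delta_{\rm EM}\)):

\[
\frac{\Gamma(K^+\to\ell^+\nu)}{\Gamma(\pi^+\to\ell^+\nu)} =
\frac{|V_{us}|^2 f_{K^+}^2\, m_{K^+}\left(1 - m_\ell^2/m_{K^+}^2\right)^2}
     {|V_{ud}|^2 f_{\pi^+}^2\, m_{\pi^+}\left(1 - m_\ell^2/m_{\pi^+}^2\right)^2}
\,(1 + \delta_{\rm EM}),
\]

so the lattice value of \(f_{K^+}/f_{\pi^+}\) combined with the measured branching-fraction ratio yields \(|V_{us}|/|V_{ud}|\).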
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuangrod, T; Simpson, J; Greer, P
Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site-specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog in detecting clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real time to measured transit cine EPID images acquired during treatment using chi comparison (4%, 4 mm criteria), after the initial 2 s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%), and positioning (systematic displacement) errors in the same treatments of (5 mm, 7 mm, 10 mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2 s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN, respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real time, allowing for treatment interruption. Displacements of the patient take longer to detect; however, an incorrect body site or very large geographic misses will be detected rapidly.
Applications of Augmented Reality-Based Natural Interactive Learning in Magnetic Field Instruction
ERIC Educational Resources Information Center
Cai, Su; Chiang, Feng-Kuang; Sun, Yuchen; Lin, Chenglong; Lee, Joey J.
2017-01-01
Educators must address several challenges inherent to the instruction of scientific disciplines such as physics -- expensive or insufficient laboratory equipment, equipment error, difficulty in simulating certain experimental conditions. Augmented reality (AR) can be a promising approach to address these challenges. In this paper, we discuss the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Magnetic resonance imaging (MRI) has become an essential part of clinical imaging due to its ability to render high soft-tissue contrast. Instead of ionizing radiation, MRI uses strong magnetic fields, radio-frequency waves and field gradients to create diagnostically useful images. It can be used to image the anatomy as well as functional and physiological activities within the human body. Knowledge of the basic physical principles underlying MRI acquisition is vitally important to successful image production and proper image interpretation. This lecture will give an overview of the spin physics, the imaging principle of MRI, the hardware of the MRI scanner, and various pulse sequences and their applications. It aims to provide a conceptual foundation for understanding the image formation process of a clinical MRI scanner. Learning Objectives: Understand the origin of the MR signal and contrast at the level of spin physics. Understand the main hardware components of an MRI scanner and their purposes. Understand the steps of MR image formation, including spatial encoding and image reconstruction. Understand the main kinds of MR pulse sequences and their characteristics.
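As a concrete anchor for the spin physics mentioned above, the MR signal originates at the Larmor frequency (a standard relation, not specific to this lecture):

\[
f_0 = \frac{\gamma}{2\pi} B_0, \qquad \frac{\gamma}{2\pi} \approx 42.58~\mathrm{MHz/T} \ \text{for } {}^1\mathrm{H},
\]

giving roughly 63.9 MHz at 1.5 T and 127.7 MHz at 3 T. Spatial encoding works by superimposing gradient fields so that \(B_0\), and hence \(f_0\), varies linearly across the object.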
VO2sim 0.1: Using Simulation to Understand Measurement Error in Indirect Calorimetry
2015-08-01
The Army has recognized the importance of understanding oxygen consumption in the field and is developing models to aid in operational decision-making, including how personnel acclimatize to high altitude (Amann et al. 2013) and hypoxia (Self et al. 2013).
Near Identifiability of Dynamical Systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1987-01-01
Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.
NASA Astrophysics Data System (ADS)
Gehlot, Bharat K.; Koopmans, Léon V. E.
2018-05-01
Contamination due to foregrounds, calibration errors and ionospheric effects poses major challenges for the detection of the cosmic 21 cm signal in various Epoch of Reionization (EoR) experiments. We present the results of a study of a field centered on 3C196 using LOFAR Low Band observations, in which we quantify various wide-field and calibration effects such as gain errors, polarized foregrounds, and ionospheric effects. We observe a 'pitchfork' structure in the power spectrum of the polarized intensity in delay-baseline space, which leaks into the modes beyond the instrumental horizon. We show that this structure arises due to strong instrumental polarization leakage (~30%) towards Cas A, which is far away from the primary field of view. We measure a small ionospheric diffractive scale towards Cas A, resembling pure Kolmogorov turbulence. Our work provides insights into the nature of the aforementioned effects and into mitigating them in future Cosmic Dawn observations.
Criteria for the use of regression analysis for remote sensing of sediment and pollutants
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.
1982-01-01
An examination of the limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations has the same form as the multiple regression equation. Various statistical parameters are examined to define criteria for selection of an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data smoothing requirements in order to satisfy the criteria of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.
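A minimal sketch of the multiple-regression step described (synthetic data; the band count and coefficients are illustrative assumptions, not values from the experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: sediment concentration modeled as a linear
# combination of upwelled radiance in 3 spectral bands plus noise.
n_samples, n_bands = 50, 3
radiance = rng.uniform(0.0, 1.0, size=(n_samples, n_bands))
true_coefs = np.array([2.0, -1.0, 0.5])
concentration = radiance @ true_coefs + 1.0 + rng.normal(0, 0.05, n_samples)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n_samples), radiance])
coefs, *_ = np.linalg.lstsq(X, concentration, rcond=None)
print("intercept and band coefficients:", np.round(coefs, 3))
# Correlated, noisy radiances are exactly what motivates the selection
# criteria discussed in the abstract.
```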
Impact of Processing Method on Recovery of Bacteria from Wipes Used in Biological Surface Sampling
Olson, Nathan D.; Filliben, James J.; Morrow, Jayne B.
2012-01-01
Environmental sampling for microbiological contaminants is a key component of hygiene monitoring and risk characterization practices utilized across diverse fields of application. However, confidence in surface sampling results, both in the field and in controlled laboratory studies, has been undermined by large variation in sampling performance results. Sources of variation include controlled parameters, such as sampling materials and processing methods, which often differ among studies, as well as random and systematic errors; however, the relative contributions of these factors remain unclear. The objective of this study was to determine the relative impacts of sample processing methods, including extraction solution and physical dissociation method (vortexing and sonication), on recovery of Gram-positive (Bacillus cereus) and Gram-negative (Burkholderia thailandensis and Escherichia coli) bacteria from directly inoculated wipes. This work showed that target organism had the largest impact on extraction efficiency and recovery precision, as measured by traditional colony counts. The physical dissociation method (PDM) had negligible impact, while the effect of the extraction solution was organism dependent. Overall, however, extraction of organisms from wipes using phosphate-buffered saline with 0.04% Tween 80 (PBST) resulted in the highest mean recovery across all three organisms. The results from this study contribute to a better understanding of the factors that influence sampling performance, which is critical to the development of efficient and reliable sampling methodologies relevant to public health and biodefense. PMID:22706055
Heuristics and Cognitive Error in Medical Imaging.
Itri, Jason N; Patel, Sohil H
2018-05-01
The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.
The DIAGNOSER project: combining assessment and learning.
Thissen-Roe, Anne; Hunt, Earl; Minstrell, Jim
2004-05-01
DIAGNOSER is an Internet-based tool for classroom instruction. It delivers continuous formative assessment and feedback to high school physics students and their teachers about the correct and incorrect concepts and ideas the students may hold regarding physical situations. That is, it diagnoses misconceptions that underlie wrong answers of students, such as a confusion of velocity with acceleration. We use data about patterns of student responses, particularly consistency of errors from question to question, to improve the system's understanding of student concepts.
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than \(\varepsilon^{-(d^{n}-1)}\) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
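To make the distinction concrete: a coherent error on one qubit is a small unitary rotation, whereas its Pauli (twirled) approximation is a stochastic flip (standard definitions, not the paper's derivation):

\[
U = e^{-i(\varepsilon/2)\sigma_z} \quad \longrightarrow \quad
\rho \mapsto (1-p)\,\rho + p\,\sigma_z \rho\, \sigma_z, \qquad p = \sin^2(\varepsilon/2).
\]

The twirled channel discards the off-diagonal (coherent) part of the error, which is precisely the part that can accumulate over many correction cycles.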
Harmonic field in knotted space
NASA Astrophysics Data System (ADS)
Duan, Xiuqing; Yao, Zhenwei
2018-04-01
Knotted fields enrich a variety of physical phenomena, ranging from fluid flows, electromagnetic fields, to textures of ordered media. Maxwell's electrostatic equations, whose vacuum solution is mathematically known as a harmonic field, provide an ideal setting to explore the role of domain topology in determining physical fields in confined space. In this work, we show the uniqueness of a harmonic field in knotted tubes, and reduce the construction of a harmonic field to a Neumann boundary value problem. By analyzing the harmonic field in typical knotted tubes, we identify the torsion driven transition from bipolar to vortex patterns. We also analogously extend our discussion to the organization of liquid crystal textures in knotted tubes. These results further our understanding about the general role of topology in shaping a physical field in confined space, and may find applications in the control of physical fields by manipulation of surface topology.
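In this setting a harmonic field is one that is both curl-free and divergence-free (standard definitions, included here for orientation):

\[
\nabla \times \mathbf{u} = 0, \qquad \nabla \cdot \mathbf{u} = 0,
\]

so that locally \(\mathbf{u} = \nabla\phi\) with \(\nabla^2\phi = 0\), and prescribing the normal component \(\mathbf{u}\cdot\hat{\mathbf{n}} = \partial\phi/\partial n\) on the tube surface turns the construction into a Neumann boundary value problem. In a knotted (non-simply-connected) tube, \(\phi\) may be multivalued, with its period around the tube encoding a circulation.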
Bringing Earth Magnetism Research into the High School Physics Classroom
NASA Astrophysics Data System (ADS)
Smirnov, A. V.; Bluth, G.; Engel, E.; Kurpier, K.; Foucher, M. S.; Anderson, K. L.
2015-12-01
We present our work in progress from an NSF CAREER project that aims to integrate paleomagnetic research and secondary school physics education. The research project is aimed at quantifying the strength and geometry of the Precambrian geomagnetic field. Investigation of the geomagnetic field behavior is crucial for understanding the mechanisms of field generation, and the development of the Earth's atmosphere and biosphere, and can serve as a focus for connecting high-level Earth science research with a standard physics curriculum. High school science teachers have participated in each summer field and research component of the project, gaining field and laboratory research experience, sets of rock and mineral samples, and classroom-tested laboratory magnetism activities for secondary school physics and earth science courses. We report on three field seasons of teacher field experiences and two years of classroom testing of paleomagnetic research materials merged into physics instruction on magnetism. Students were surveyed before and after dedicated instruction for both perceptions and attitude towards earth science in general, then more specifically on earth history and earth magnetism. Students were also surveyed before and after instruction on major earth system and magnetic concepts and processes, particularly as they relate to paleomagnetic research. Most students surveyed had a strongly positive viewpoint towards the study of Earth history and the importance of studying Earth Sciences in general, but were significantly less drawn towards more specific topics such as mineralogy and magnetism. Students demonstrated understanding of Earth model and the basics of magnetism, as well as the general timing of life, atmospheric development, and magnetic field development. However, detailed knowledge such as the magnetic dynamo, how the magnetic field has changed over time, and connections between earth magnetism and the development of an atmosphere remained largely misunderstood even after specific instruction, laboratory activities, and research examples. Ongoing work is examining the effectiveness of specific classroom and laboratory activities on student perceptions and misconceptions - which models work best to develop deeper understanding and appreciation of paleomagnetic research.
A Science Strategy for Space Physics
NASA Technical Reports Server (NTRS)
1995-01-01
This report by the Committee on Solar and Space Physics and the Committee on Solar-Terrestrial Research recommends the major directions for scientific research in space physics for the coming decade. As a field of science, space physics has passed through the stage of simply looking to see what is out beyond Earth's atmosphere. It has become a 'hard' science, focusing on understanding the fundamental interactions between charged particles, electromagnetic fields, and gases in the natural laboratory consisting of the galaxy, the Sun, the heliosphere, and planetary magnetospheres, ionospheres, and upper atmospheres. The motivation for space physics research goes far beyond basic physics and intellectual curiosity, however, because long-term variations in the brightness of the Sun virtually affect the habitability of the Earth, while sudden rearrangements of magnetic fields above the solar surface can have profound effects on the delicate balance of the forces that shape our environment in space and on the human technology that is sensitive to that balance. The several subfields of space physics share the following objectives: to understand the fundamental laws or processes of nature as they apply to space plasmas and rarefied gases both on the microscale and in the larger complex systems that constitute the domain of space physics; to understand the links between changes in the Sun and the resulting effects at the Earth, with the eventual goal of predicting the significant effects on the terrestrial environment; and to continue the exploration and description of the plasmas and rarefied gases in the solar system.
Elkady, Ahmed M; Sun, Hongfu; Wilman, Alan H
2016-05-01
Quantitative Susceptibility Mapping (QSM) is an emerging area of brain research with clear application to brain iron studies in deep gray matter. However, acquisition of standard whole brain QSM can be time-consuming. One means to reduce scan time is to use a focal acquisition restricted only to the regions of interest such as deep gray matter. However, the non-local dipole field necessary for QSM reconstruction extends far beyond the structure of interest. We demonstrate the practical implications of these non-local fields on the choice of brain volume for QSM. In an illustrative numerical simulation and then in human brain experiments, we examine the effect on QSM of volume reduction in each dimension. For the globus pallidus, as an example of iron-rich deep gray matter, we demonstrate that substantial errors can arise even when the field-of-view far exceeds the physical structural boundaries. Thus, QSM reconstruction requires a non-local field-of-view prescription to ensure minimal errors. An axial QSM acquisition, centered on the globus pallidus, should encompass at least 76mm in the superior-inferior direction to conserve susceptibility values from the globus pallidus. This dimension exceeds the physical coronal extent of this structure by at least five-fold. As QSM sees wider use in the neuroscience community, its unique requirement for an extended field-of-view needs to be considered. Copyright © 2016 Elsevier Inc. All rights reserved.
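The non-locality at issue comes from the dipole convolution relating susceptibility to the measured field perturbation; in k-space (a standard QSM relation, not derived in this paper):

\[
\Delta B(\mathbf{k}) = B_0\left(\frac{1}{3} - \frac{k_z^2}{|\mathbf{k}|^2}\right)\chi(\mathbf{k}),
\]

so a compact structure such as the globus pallidus imprints field far outside its physical boundary, which is why the acquisition volume must extend well beyond the structure itself.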
Introduction of a pyramid guiding process for general musculoskeletal physical rehabilitation.
Stark, Timothy W
2006-06-08
Successful instruction of a subject as complicated as Physical Rehabilitation demands organization. Understanding the principles and processes of such a field demands a hierarchy of steps to achieve the intended outcome. This paper is intended as an introduction to a proposed pyramid scheme of general physical rehabilitation principles. The purpose of the pyramid scheme is to allow for greater understanding by the student and patient. As the respected Food Guide Pyramid accomplishes, the student will further appreciate and apply supported physical rehabilitation principles, and the patient will understand that there is a progressive method to their functional healing process.
Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.
Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z
2012-07-01
Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.
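For orientation, the spatial-filter (beamformer) framework referred to estimates source power through weights of the familiar minimum-variance form (the worst-case treatment of multiplicative lead-field error is the paper's contribution and is not reproduced here):

\[
\mathbf{w} = \frac{\mathbf{C}^{-1}\mathbf{l}}{\mathbf{l}^{\mathsf T}\mathbf{C}^{-1}\mathbf{l}}, \qquad
\hat{P} = \frac{1}{\mathbf{l}^{\mathsf T}\mathbf{C}^{-1}\mathbf{l}},
\]

where \(\mathbf{C}\) is the data covariance and \(\mathbf{l}\) the lead field; modeling \(\mathbf{l}\) with multiplicative error and minimizing the estimated power in the worst case yields the fusion rule for magnetometer and gradiometer channels.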
Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling
Cordell, Lindrith
1994-01-01
Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.
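The first step uses Euler's homogeneity equation, which for a potential-field anomaly T with source at (x0, y0, z0) and structural index N reads (standard form, included for reference):

\[
(x - x_0)\frac{\partial T}{\partial x} + (y - y_0)\frac{\partial T}{\partial y} + (z - z_0)\frac{\partial T}{\partial z} = N\,(B - T),
\]

where B is a regional background value; evaluating this iteratively on the largest residual data value yields the equivalent line or point sources that bubbling then redistributes into a bounded density volume.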
Teaching Physics as a Service Subject.
ERIC Educational Resources Information Center
Lowe, T. L.; Hayes, M.
1986-01-01
Discusses the need for physics to be taught to individuals in a wide variety of areas. Argues that the understanding of physics concepts enhances other fields. Proposes various ways to integrate physics into other programs. Gives examples of incorporating physics into speech therapy, environmental health and medical technology programs. (TW)
The Importance of Physical Literacy for Physical Education and Recreation
ERIC Educational Resources Information Center
Basoglu, Umut Davut
2018-01-01
As the basis of characteristics, qualifications, behaviors, awareness, knowledge and understanding of the development of healthy active living and physical recreation opportunities Physical Literacy (PL); has become a global concern in the fields of physical education and recreation since its first use as a term. Experts from different countries…
Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw
2005-02-01
This paper describes the simulations and results obtained when applying optimal control to progressive sound-field reproduction (mainly for audio applications) over an area using multiple monopole loudspeakers. The model simulates a reproduction system that operates either in free field or in a closed space approaching a typical listening room, and is based on optimal control in the frequency domain. This rather simple approach is chosen for the purpose of physical investigation, especially in terms of sensing microphones and reproduction loudspeakers configurations. Other issues of interest concern the comparison with wave-field synthesis and the control mechanisms. The results suggest that in-room reproduction of sound field using active control can be achieved with a residual normalized squared error significantly lower than open-loop wave-field synthesis in the same situation. Active reproduction techniques have the advantage of automatically compensating for the room's natural dynamics. For the considered cases, the simulations show that optimal control results are not sensitive (in terms of reproduction error) to wall absorption in the reproduction room. A special surrounding configuration of sensors is introduced for a sensor-free listening area in free field.
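The frequency-domain optimal control alluded to is, in its simplest regularized least-squares form (a generic statement, not necessarily the authors' exact cost function):

\[
\mathbf{q}_{\rm opt} = \left(\mathbf{G}^{\mathsf H}\mathbf{G} + \lambda \mathbf{I}\right)^{-1}\mathbf{G}^{\mathsf H}\mathbf{p}_{\rm d}, \qquad
E = \frac{\lVert \mathbf{G}\mathbf{q}_{\rm opt} - \mathbf{p}_{\rm d}\rVert^2}{\lVert \mathbf{p}_{\rm d}\rVert^2},
\]

where \(\mathbf{G}\) maps loudspeaker strengths \(\mathbf{q}\) to pressures at the sensing microphones, \(\mathbf{p}_{\rm d}\) is the desired (target) sound field, \(\lambda\) a regularization parameter, and \(E\) the normalized squared reproduction error of the kind reported in the comparisons.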
Reed-Solomon Codes and the Deep Hole Problem
NASA Astrophysics Data System (ADS)
Keti, Matt
In many types of modern communication, a message is transmitted over a noisy medium. When this is done, there is a chance that the message will be corrupted. An error-correcting code adds redundant information to the message which allows the receiver to detect and correct errors accrued during the transmission. We will study the famous Reed-Solomon code (found in QR codes, compact discs, deep space probes, ...) and investigate the limits of its error-correcting capacity. It can be shown that understanding this is related to understanding the "deep hole" problem, which is a question of determining when a received message has, in a sense, incurred the worst possible corruption. We partially resolve this in its traditional context, when the code is based on the finite field F_q or F_q^*, as well as in new contexts, when it is based on a subgroup of F_q^* or the image of a Dickson polynomial. This is a new and important problem that could give insight on the true error-correcting potential of the Reed-Solomon code.
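For reference, the standard definitions behind the "deep hole" terminology (not specific to this thesis): for a code C with covering radius \(\rho(C)\), a received word u is a deep hole when its distance to the code attains that radius,

\[
d(u, C) = \rho(C), \qquad \rho(C) = n - k \ \text{for an } [n, k] \ \text{Reed-Solomon code},
\]

since words obtained by evaluating a degree-k polynomial already sit at distance n - k from every codeword. The deep hole problem asks, roughly, whether such words are essentially the only ones.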
Report of the Odyssey FPGA Independent Assessment Team
NASA Technical Reports Server (NTRS)
Mayer, Donald C.; Katz, Richard B.; Osborn, Jon V.; Soden, Jerry M.; Barto, R.; Day, John H. (Technical Monitor)
2001-01-01
An independent assessment team (IAT) was formed and met on April 2, 2001, at Lockheed Martin in Denver, Colorado, to aid in understanding a technical issue for the Mars Odyssey spacecraft scheduled for launch on April 7, 2001. An RP1280A field-programmable gate array (FPGA) from a lot of parts common to the SIRTF, Odyssey, and Genesis missions had failed on a SIRTF printed circuit board. A second FPGA from an earlier Odyssey circuit board was also known to have failed and was also included in the analysis by the IAT. Observations indicated an abnormally high failure rate for flight RP1280A devices (the first flight lot produced using this flow) at Lockheed Martin and the causes of these failures were not determined. Standard failure analysis techniques were applied to these parts, however, additional diagnostic techniques unique for devices of this class were not used, and the parts were prematurely submitted to a destructive physical analysis, making a determination of the root cause of failure difficult. Any of several potential failure scenarios may have caused these failures, including electrostatic discharge, electrical overstress, manufacturing defects, board design errors, board manufacturing errors, FPGA design errors, or programmer errors. Several of these mechanisms would have relatively benign consequences for disposition of the parts currently installed on boards in the Odyssey spacecraft if established as the root cause of failure. However, other potential failure mechanisms could have more dire consequences. As there is no simple way to determine the likely failure mechanisms with reasonable confidence before Odyssey launch, it is not possible for the IAT to recommend a disposition for the other parts on boards in the Odyssey spacecraft based on sound engineering principles.
Cross-Grade Comparison of Students' Conceptual Understanding with Lenses in Geometric Optics
ERIC Educational Resources Information Center
Tural, G.
2015-01-01
Students commonly find the field of physics difficult. Therefore, they generally have learning problems. One of the subjects with which they have difficulties is optics within a physics discipline. This study aims to determine students' conceptual understanding levels at different education levels relating to lenses in geometric optics. A…
Ideas for a pattern-oriented approach towards a VERA analysis ensemble
NASA Astrophysics Data System (ADS)
Gorgas, T.; Dorninger, M.
2010-09-01
For many applications in meteorology, and especially for verification purposes, it is important to have some information about the uncertainties of observation and analysis data. A high quality of these "reference data" is an absolute necessity, as the uncertainties are reflected in verification measures. The VERA (Vienna Enhanced Resolution Analysis) scheme includes a sophisticated quality control tool which accounts for the correction of observational data and provides an estimation of the observation uncertainty. It is crucial for meteorologically and physically reliable analysis fields. VERA is based on a variational principle and does not need any first-guess fields. It is therefore NWP-model independent and can also be used as an unbiased reference for real-time model verification. For downscaling purposes VERA uses a priori knowledge of small-scale physical processes over complex terrain, the so-called "fingerprint technique", which transfers information from data-rich to data-sparse regions. The enhanced joint D-PHASE and COPS data set forms the data base for the analysis ensemble study. For the WWRP projects D-PHASE and COPS, a joint activity was started to collect GTS and non-GTS data from the national and regional meteorological services in Central Europe for 2007. Data from more than 11,000 stations are available for high resolution analyses. The usage of random numbers as perturbations for ensemble experiments is a common approach in meteorology. In most implementations, as for NWP-model ensemble systems, the focus lies on error growth and propagation on the spatial and temporal scale. When defining errors in analysis fields we have to consider the fact that analyses are not time dependent and that no perturbation method aimed at temporal evolution is possible. Further, the method applied should respect two major sources of analysis errors: observation errors AND analysis or interpolation errors. With the concept of an analysis ensemble we hope to gain a more detailed view of both sources of analysis errors. For the computation of the VERA ensemble members, a sample of Gaussian random perturbations is produced for each station and parameter, as sketched below. The spread of the perturbations is based on the correction proposals of the VERA QC scheme, to provide some "natural" limits for the ensemble. In order to put more emphasis on the weather situation, we aim to integrate the main synoptic field structures as weighting factors for the perturbations. Two widely used approaches are employed for the definition of these main field structures: Principal Component Analysis and a 2D Discrete Wavelet Transform. The results of tests concerning the implementation of this pattern-supported analysis ensemble system and a comparison of the different approaches are given in the presentation.
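A schematic of the perturbation step as described (station count, QC values, and pattern weights are invented placeholders; the actual PCA/wavelet weighting is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

# QC correction proposals per station (illustrative values, e.g. in K
# for temperature): these set the "natural" spread of the perturbations.
qc_proposals = np.array([0.3, 0.1, 0.5, 0.2, 0.4])

# Pattern weights from a field-structure analysis (PCA or wavelets),
# here just placeholders emphasizing stations 1 and 3.
pattern_weights = np.array([1.0, 0.5, 1.0, 0.5, 0.8])

n_members = 20
# Gaussian perturbations: scale = QC-based spread times pattern weight.
perturbations = rng.normal(
    loc=0.0,
    scale=qc_proposals * pattern_weights,
    size=(n_members, qc_proposals.size),
)
print("member 0 perturbations:", np.round(perturbations[0], 3))
```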
NASA Astrophysics Data System (ADS)
Cartlidge, Edwin
2017-01-01
Some scientists claim they can control genetically engineered neurons using magnetic fields. Have they and the high-profile journals that published their research failed to understand basic physics? Edwin Cartlidge investigates
The Computer in Second Semester Introductory Physics.
ERIC Educational Resources Information Center
Merrill, John R.
This supplementary text material is meant to suggest ways in which the computer can increase students' intuitive understanding of fields and waves. The first way allows the student to produce a number of examples of the physics discussed in the text. For example, more complicated field and potential maps, or intensity patterns, can be drawn from…
NASA Astrophysics Data System (ADS)
Petrova, Natalia; Kocoulin, Valerii; Nefediev, Yurii
2016-07-01
In Kazan University, computer simulations are being carried out for observations of lunar physical libration, in support of projects that plan to install measuring equipment on the lunar surface. One such project is ILOM (Japan), in which an optical telescope with a CCD will be installed at a lunar pole. As a result, the selenographic coordinates (x and y) of a star will be determined with an accuracy of 1 ms of arc. On the basis of the analytical theory of physical libration we developed a technique for solving the inverse problem of libration, and we have already shown, for example, that an error of about ɛ seconds in the determined selenographic coordinates does not lead to errors larger than 1.414ɛ (≈ √2 ɛ) in the libration angles ρ and Iσ. Libration in longitude is not determined from observations of the polar star (Petrova et al., 2012). The accuracy of the libration in the inverse problem depends on the accuracy of the star coordinates α and δ taken from star catalogues, and checking this influence is the task of the present study. To carry out the simulation, we developed a tool that allows us to choose the stars falling in the field of view of the lunar telescope over the observation period. Equatorial coordinates of stars were taken from several fundamental catalogues: UCAC2-BSS, Hipparcos, Tycho, FK6 (parts I, III) and the Astronomical Almanac. An analysis of these catalogues from the point of view of the accuracy of the star coordinates represented in them was performed by Nefediev et al., 2013. The largest errors, 20-70 ms, are found in the catalogues UCAC2 and Tycho; the others have errors of about a millisecond of arc. We simulated observations with the mentioned errors and obtained the following results. 1. An error Δδ in the declination of a star causes an error of the same order in the libration parameters ρ and Iσ, while the sensitivity of the libration to errors Δα is ten times smaller. Fortunately, owing to the statistics (30 to 70 stars, depending on the time of observation), this error is reduced by an order of magnitude, i.e. it does not exceed the error of the observed selenographic coordinates. 2. Worse, errors in the catalogue coordinates cause a small but constant shift in ρ and Iσ: when Δα, Δδ ~ 0.01", the shift reaches 0.0025". Moreover, there is a trend with a slight but noticeable slope. 3. The effect of an error in the declination of a star is substantially stronger than that of an error in right ascension; perhaps this is characteristic only of polar observations. For the required accuracy in the determination of the physical libration, these phenomena must be taken into account when processing the planned observations. References: Nefediev et al., 2013. Uchenye zapiski Kazanskogo universiteta, v. 155, 1, p. 188-194. Petrova, N., Abdulmyanov, T., Hanada, H. Some qualitative manifestations of the physical libration of the Moon by observing stars from the lunar surface. J. Adv. Space Res., 2012. V. 50, p. 1702-1711.
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna
2013-05-01
Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors namely: (1) design of medication charts which complicates order processing and record keeping (2) lack of coordination mechanisms between participants which results in misalignment of local practices (3) reliance on restricted communication bandwidth channels mainly telephone and fax which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Cirrus cloud retrieval from MSG/SEVIRI during day and night using artificial neural networks
NASA Astrophysics Data System (ADS)
Strandgren, Johan; Bugliaro, Luca
2017-04-01
By covering a large part of the Earth, cirrus clouds play an important role in climate, as they reflect incoming solar radiation and absorb outgoing thermal radiation. Nevertheless, cirrus clouds remain one of the largest uncertainties in atmospheric research, and the physical processes that govern their life cycle are still poorly understood, as is their representation in climate models. To monitor and better understand the properties and physical processes of cirrus clouds, it is essential that these tenuous clouds can be observed from geostationary spaceborne imagers like SEVIRI (Spinning Enhanced Visible and InfraRed Imager), which possess a high temporal resolution together with a large field of view and play an important role alongside in-situ observations in the investigation of cirrus cloud processes. CiPS (Cirrus Properties from SEVIRI) is a new algorithm targeting thin cirrus clouds. CiPS is an artificial neural network trained with coincident SEVIRI and CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) observations in order to retrieve a cirrus cloud mask along with the cloud top height (CTH), ice optical thickness (IOT) and ice water path (IWP) from SEVIRI. By utilizing only the thermal/IR channels of SEVIRI, CiPS can be used during day and night, making it a powerful tool for cirrus life cycle analysis. Despite the great challenge of detecting thin cirrus clouds and retrieving their properties from a geostationary imager using only the thermal/IR wavelengths, CiPS performs well. Among the cirrus clouds detected by CALIOP, CiPS detects 70% and 95% of the clouds with an optical thickness of 0.1 and 1.0, respectively. Among the cirrus-free pixels, CiPS classifies 96% correctly. For the CTH retrieval, CiPS has a mean absolute percentage error of 10% or less with respect to CALIOP for cirrus clouds with a CTH greater than 8 km. For the IOT retrieval, CiPS has a mean absolute percentage error of 100% or less with respect to CALIOP for cirrus clouds with an optical thickness down to 0.07. For such thin cirrus clouds, an error of 100% should be regarded as low for a geostationary imager like SEVIRI. The IWP retrieved by CiPS shows a similar performance, but has larger deviations for the thinner cirrus clouds.
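The skill measure quoted is the mean absolute percentage error; for reference (a generic definition with toy numbers, not CiPS data):

```python
import numpy as np

def mape(predicted, reference):
    """Mean absolute percentage error relative to the reference values."""
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    return 100 * np.mean(np.abs(predicted - reference) / np.abs(reference))

# Toy example: retrieved vs. CALIOP cloud top heights in km.
print(f"MAPE: {mape([9.5, 10.8, 12.1], [10.0, 11.0, 12.0]):.1f}%")
```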
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, M. P.; Centre for Quantum Technologies, National University of Singapore; QuTech, Delft University of Technology, Lorentzweg 1, 2611 CJ Delft
2016-02-15
Instances of discrete quantum systems coupled to a continuum of oscillators are ubiquitous in physics. Often the continua are approximated by a discrete set of modes. We derive error bounds on expectation values of system observables that have been time evolved under such discretised Hamiltonians. These bounds take on the form of a function of time and the number of discrete modes, where the discrete modes are chosen according to Gauss quadrature rules. The derivation makes use of tools from the field of Lieb-Robinson bounds and the theory of orthonormal polynomials.
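The quadrature-based mode selection can be sketched as follows; this is a generic Gauss-Legendre discretization of a bath spectral density, with conventions assumed here rather than taken from the paper:

```python
import numpy as np

# Replace a continuum with spectral density J(w) on [0, wmax] by M
# discrete modes at Gauss-Legendre nodes, with couplings g_k chosen so
# that bath integrals of the form  int J(w) f(w) dw  are approximated
# by  sum_k g_k**2 f(w_k).
def discretize_bath(J, wmax, M):
    x, w = np.polynomial.legendre.leggauss(M)  # nodes/weights on [-1, 1]
    freqs = 0.5 * wmax * (x + 1.0)             # map nodes to [0, wmax]
    couplings = np.sqrt(J(freqs) * 0.5 * wmax * w)
    return freqs, couplings

freqs, g = discretize_bath(lambda w: w, 1.0, 10)  # Ohmic J(w) = w, 10 modes
print(freqs, g)
```

The error bounds described in the abstract then control how expectation values evolved under the M-mode Hamiltonian deviate from the continuum ones as a function of time and M.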
NASA Astrophysics Data System (ADS)
Qi, Di
Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are unavoidable, due both to imperfect understanding of the underlying physical processes and to the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced-order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework for constructing statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, is designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered, including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. Most importantly, empirical information theory and statistical linear response theory are applied in the training phase to calibrate model errors and achieve optimal imperfect model parameters, and total statistical energy dynamics are introduced to improve model sensitivity in the prediction phase, especially when strong external perturbations are exerted. The validity of the reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models of increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.
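One of the testbeds named above, the Lorenz '96 model, is compact enough to state in full; the sketch below integrates it with the conventional forcing F = 8 (parameter choices are mine, not necessarily those used in the dissertation):

```python
import numpy as np

# Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with
# cyclic indices; a standard chaotic testbed for statistical UQ methods.
def l96_rhs(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.01):
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

x = 8.0 * np.ones(40)
x[0] += 0.01                      # small perturbation off the fixed point
for _ in range(5000):             # spin up into the turbulent attractor
    x = rk4_step(x)
print(x.mean(), x.var())          # the climate statistics a UQ scheme targets
```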
Review of inductively coupled plasmas: Nano-applications and bistable hysteresis physics
NASA Astrophysics Data System (ADS)
Lee, Hyo-Chang
2018-03-01
Many gas discharges and plasmas exhibit bistable states under a given set of conditions, and history-dependent hysteresis, manifested by intensive quantities of the system upon variation of an external parameter, has been observed in inductively coupled plasmas (ICPs). When an external parameter (such as the discharge power) increases, the plasma density jumps from a low- to a high-density mode, whereas decreasing the power maintains the plasma in a relatively high-density mode, resulting in significant hysteresis. To date, a comprehensive description of plasma hysteresis and a physical understanding of the main mechanism underlying this bistability remain elusive, despite many experimental observations of plasma bistability under radio-frequency ICP excitation. A fundamental understanding of mode transitions and hysteresis is essential and highly important in various applied fields owing to the widespread use of ICPs, for example in semiconductor/display/solar-cell processing (etching, deposition, and ashing), wireless light lamps, nanostructure fabrication, nuclear-fusion operation, spacecraft propulsion, gas reformation, and the removal of hazardous gases and materials. If, in such applications, the plasma undergoes a mode transition and hysteresis in response to external perturbations, the process result will be strongly affected. For these reasons, this paper reviews both the current knowledge in the context of the various applied fields and the global understanding of bistability and hysteresis physics in ICPs. First, a basic understanding of ICPs is given. Next, applications of ICPs to various fields of nano-, environmental- and energy-science are introduced. Finally, the mode transition and hysteresis in ICPs are examined in detail. This review presents a fundamental understanding of hysteresis physics in plasmas and opens possibilities for finding novel control knobs and optimizing processing conditions in various applied fields.
NASA Astrophysics Data System (ADS)
Cid, Ximena; Lopez, Ramon
2011-10-01
It is well known that students have difficulties with concepts in physics and space science, as well as in other STEM fields. Some of these difficulties may be rooted in student conceptual errors, whereas others may arise from issues with visual cognition and spatial intelligence. It has also been suggested that some aspects of the high attrition rates from STEM fields can be attributed to students' visual spatial abilities. We will be presenting data collected from introductory courses in the College of Engineering, Department of Physics, Department of Chemistry, and Department of Mathematics at the University of Texas at Arlington. These data examine the relationship between students' visual spatial abilities and comprehension of the subject matter. Where correlations are found to exist, visual spatial interventions can be implemented to reduce attrition rates.
Students' difficulties with vector calculus in electrodynamics
NASA Astrophysics Data System (ADS)
Bollen, Laurens; van Kampen, Paul; De Cock, Mieke
2015-12-01
Understanding Maxwell's equations in differential form is of great importance when studying the electrodynamic phenomena discussed in advanced electromagnetism courses. It is therefore necessary that students master the use of vector calculus in physical situations. In this light, we investigated the difficulties second-year students at KU Leuven encounter with the divergence and curl of a vector field in mathematical and physical contexts. We found that they are quite skilled at doing calculations, but struggle with interpreting graphical representations of vector fields and applying vector calculus to physical situations. We also found strong indications that traditional instruction is not sufficient for our students to fully understand the meaning and power of Maxwell's equations in electrodynamics.
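For readers outside the study, the two operators at issue are quickly illustrated symbolically (textbook example fields, my choice, not items from the study's questionnaires):

```python
from sympy.vector import CoordSys3D, divergence, curl

# A radial field has divergence but no curl; a rotational field has
# curl but no divergence -- the kind of contrast the interviews probed.
R = CoordSys3D('R')
radial = R.x * R.i + R.y * R.j + R.z * R.k
rotational = -R.y * R.i + R.x * R.j

print(divergence(radial), curl(radial))          # 3, zero vector
print(divergence(rotational), curl(rotational))  # 0, 2*R.k
```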
Regularized Semiparametric Estimation for Ordinary Differential Equations
Li, Yun; Zhu, Ji; Wang, Naisyin
2015-01-01
Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and the biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain a better understanding of the system. One key interest is thus to estimate these parameters well. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive: even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between the observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure for the time-varying parameters of an ODE system, so that these parameters can change with time during transitions but remain constant within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamics in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639
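The regularization idea can be sketched on a toy problem (my construction, not the paper's estimator): a smoothed total-variation penalty on successive parameter values keeps the estimate flat within stable stages while still allowing a jump at a transition.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: x' = theta(t) - x with a piecewise-constant theta(t).
T, dt = 80, 0.1
theta_true = np.where(np.arange(T) < 40, 1.0, 2.0)    # one regime shift
x = np.empty(T + 1); x[0] = 1.0
for t in range(T):                                     # Euler-discretized ODE
    x[t + 1] = x[t] + dt * (theta_true[t] - x[t])
rng = np.random.default_rng(0)
y = x + rng.normal(0.0, 0.01, T + 1)                   # noisy observations

def objective(theta, lam=0.05):
    xs = np.empty(T + 1); xs[0] = y[0]
    for t in range(T):
        xs[t + 1] = xs[t] + dt * (theta[t] - xs[t])
    fit = np.sum((xs - y) ** 2)
    tv = np.sum(np.sqrt(np.diff(theta) ** 2 + 1e-8))   # smoothed TV penalty
    return fit + lam * tv

theta_hat = minimize(objective, np.ones(T), method="L-BFGS-B").x
print(theta_hat.round(2))   # ~1.0 for early times, ~2.0 after the shift
```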
City of Physics--Analogies to Increase Cognitive Coherence in Physics Learning
ERIC Educational Resources Information Center
Tabor-Morris, A. E.; Froriep, K. A.; Briles, T. M.; McGuire, C. M.
2009-01-01
Physics educators and researchers can be concerned with how students attain cognitive coherence: specifically, how students understand and intra-connect the whole of their knowledge of the "field of physics". Starting instead with the metaphor "city of physics", the implication of applying architectural concepts for the human acquisition of mental…
On the Construction and Dynamics of Knotted Fields
NASA Astrophysics Data System (ADS)
Kedia, Hridesh
Representing a physical field in terms of its field lines has often enabled a deeper understanding of complex physical phenomena, from Faraday's law of magnetic induction, to the Helmholtz laws of vortex motion, to the free energy density of liquid crystals in terms of the distortions of the lines of the director field. At the same time, the application of ideas from topology--the study of properties that are invariant under continuous deformations--has led to robust insights into the nature of complex physical systems, from defects in crystal structures, to the Earth's magnetic field, to topological conservation laws. The study of knotted fields, physical fields in which the field lines encode knots, emerges naturally from the application of topological ideas to the investigation of the physical phenomena best understood in terms of the lines of a field. A knot--a closed loop tangled with itself which cannot be untangled without cutting the loop--is the simplest topologically non-trivial object constructed from a line. Remarkably, knots in the vortex (magnetic field) lines of a dissipationless fluid (plasma) persist forever as they are transported by the flow, stretching and rotating as they evolve. Moreover, deeply entwined with the topology-preserving dynamics of dissipationless fluids and plasmas is an additional conserved quantity--helicity, a measure of the average linking of the vortex (magnetic field) lines in a fluid (plasma)--which has had far-reaching consequences for fluids and plasmas. Inspired by the persistence of knots in dissipationless flows, and their far-reaching physical consequences, we seek to understand the interplay between the dynamics of a field and the topology of its field lines in a variety of systems. While it is easy to tie a knot in a shoelace, tying a knot in the lines of a space-filling field requires contorting the lines everywhere to match the knotted region. The challenge of analytically constructing knotted field configurations has impeded a deeper understanding of the interplay between topology and dynamics in fluids and plasmas. We begin by analytically constructing knotted field configurations which encode a desired knot in the lines of the field, and show that their helicity can be tuned independently of the encoded knot. The nonlinear nature of the physical systems in which these knotted field configurations arise makes their analytical study challenging. We ask if a linear theory such as electromagnetism can allow knotted field configurations to persist in time. We find analytical expressions for an infinite family of knotted solutions to Maxwell's equations in vacuum and elucidate their connections to dissipationless flows. We present a design rule for constructing such persistently knotted electromagnetic fields, which could possibly be used to transfer knottedness to matter such as quantum fluids and plasmas. An important consequence of the persistence of knots in classical dissipationless flows is the existence of an additional conserved quantity, helicity, which has had far-reaching implications. To understand the existence of analogous conserved quantities, we ask if superfluids, which flow without dissipation just like classical dissipationless flows, have an additional conserved quantity akin to helicity.
We address this question using an analytical approach based on defining the particle relabeling symmetry--the symmetry underlying helicity conservation--in superfluids, and find that an analogous conserved quantity exists but vanishes identically owing to the intrinsic geometry of complex scalar fields. Furthermore, to address the question of a "classical limit" of superfluid vortices which recovers classical helicity conservation, we perform numerical simulations of bundles of superfluid vortices, and find behavior akin to that of classical viscous flows.
Coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels, or at fewer than ε^-(d-1) error correction cycles. Here ε << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
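A single-qubit illustration of why coherent errors fall outside the Pauli picture (mine, not the paper's calculation): a fixed over-rotation adds amplitudes, so infidelity grows quadratically with the number of cycles, while a stochastic channel of the same per-cycle strength grows only linearly.

```python
import numpy as np

# Coherent: rotation angles add, infidelity ~ (N*eps)**2 after N cycles.
# Stochastic: a bit flip with probability p per cycle gives total flip
# probability (1 - (1 - 2p)**N) / 2 ~ N*p for small p.
eps, N = 1e-3, 200
coherent = np.sin(N * eps) ** 2                   # ~ 4e-2
p = np.sin(eps) ** 2                              # matched per-cycle infidelity
stochastic = 0.5 * (1.0 - (1.0 - 2.0 * p) ** N)   # ~ 2e-4
print(coherent, stochastic)
```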
A next generation multiscale view of inborn errors of metabolism
Argmann, Carmen A.; Houten, Sander M.; Zhu, Jun; Schadt, Eric E.
2015-01-01
Inborn errors of metabolism (IEM) are not unlike common diseases. They often present as a spectrum of disease phenotypes that correlates poorly with the severity of the disease-causing mutations. This greatly impacts patient care and reveals fundamental gaps in our knowledge of disease-modifying biology. Systems biology approaches that integrate multi-omics data into molecular networks have significantly improved our understanding of complex diseases. Similar approaches to studying IEM are rare despite their complex nature. We highlight that existing common-disease-derived datasets and networks can be repurposed to generate novel mechanistic insight into IEM and potentially to identify candidate modifiers. While understanding disease pathophysiology will advance the IEM field, the ultimate goal should be to understand, for each individual, how their phenotype emerges given their primary mutation on the background of their whole genome, much as in personalized medicine. We foresee that panomics and network strategies combined with recent experimental innovations will facilitate this. PMID:26712461
NASA Astrophysics Data System (ADS)
Arribas, Enrique; Escobar, Isabel; Suarez, Carmen P.; Najera, Alberto; Beléndez, Augusto
2015-11-01
In this work, we propose an inexpensive laboratory exercise for an introductory physics course, suitable for any science or engineering degree. The exercise was very well received by our students. A smartphone (iOS, Android, or Windows) is used together with mini magnets (similar to those used on refrigerator doors), a 20 cm school ruler, paper, and a free application (app) that needs to be downloaded and installed and that measures magnetic fields using the smartphone's magnetic field sensor or magnetometer. The apps we have used are: Magnetometer (iOS), Magnetometer Metal Detector, and Physics Toolbox Magnetometer (Android). Nothing else is needed; the cost of the exercise is zero. The main purpose of the exercise is for students to determine how the x component of the magnetic field produced by different magnets (including ring magnets and sphere magnets) depends on distance. We found that the field falls off with distance x as x^-3, in total agreement with the theoretical analysis. A secondary objective is to apply the technique of least-squares fitting to obtain this exponent and the magnetic moment of the magnets, with the corresponding absolute errors.
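The least-squares step the students perform can be sketched as a straight-line fit in log-log space (the data below are synthetic stand-ins for the smartphone readings):

```python
import numpy as np

# Fit a line in log-log space to recover the exponent in B = k * x**n;
# dipole theory gives n = -3.
rng = np.random.default_rng(4)
x = np.array([0.04, 0.06, 0.08, 0.10, 0.12])       # distance (m)
B = 3e-7 / x**3 + rng.normal(0.0, 1e-6, x.size)    # field (T), synthetic
n, logk = np.polyfit(np.log(x), np.log(B), 1)
print(f"fitted exponent n = {n:.2f} (theory: -3)")
```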
Critical error fields for locked mode instability in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Haye, R.J.; Fitzpatrick, R.; Hender, T.C.
1992-07-01
Otherwise stable discharges can become nonlinearly unstable to disruptive locked modes when subjected to a resonant m=2, n=1 error field from irregular poloidal field coils, as in DIII-D [Nucl. Fusion 31, 875 (1991)], or from resonant magnetic perturbation coils, as in COMPASS-C [Proceedings of the 18th European Conference on Controlled Fusion and Plasma Physics, Berlin (EPS, Petit-Lancy, Switzerland, 1991), Vol. 15C, Part II, p. 61]. Experiments in Ohmically heated deuterium discharges with q ≈ 3.5, n̄ ≈ 2×10^19 m^-3 and B_T ≈ 1.2 T show that a much larger relative error field (B_r21/B_T ≈ 1×10^-3) is required to produce a locked mode in the small, rapidly rotating plasma of COMPASS-C (R_0 = 0.56 m, f ≈ 13 kHz) than in the medium-sized plasmas of DIII-D (R_0 = 1.67 m, f ≈ 1.6 kHz), where the critical relative error field is B_r21/B_T ≈ 2×10^-4. This dependence of the threshold for instability is explained by a nonlinear tearing theory of the interaction of resonant magnetic perturbations with rotating plasmas, which predicts that the critical error field scales as (f R_0/B_T)^(4/3) n̄^(2/3). Extrapolating from existing devices, the predicted critical field for locked modes in Ohmic discharges on the International Thermonuclear Experimental Reactor (ITER) [Nucl. Fusion 30, 1183 (1990)] (f = 0.17 kHz, R_0 = 6.0 m, B_T = 4.9 T, n̄ = 2×10^19 m^-3) is B_r21/B_T ≈ 2×10^-5.
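As a rough consistency check of the quoted scaling (prefactor anchored to the DIII-D threshold; parameter values are those given in the abstract), the sketch below reproduces the COMPASS-C and ITER numbers to within a factor of a few, as expected for an order-of-magnitude scaling law:

```python
# B_crit/B_T ~ (f * R0 / B_T)**(4/3) * nbar**(2/3), normalized so that
# DIII-D (f = 1.6 kHz, R0 = 1.67 m, B_T = 1.2 T, nbar = 2e19 m^-3)
# sits at its observed threshold of 2e-4.
def rel_threshold(f_kHz, R0, B_T, nbar19):
    ref = 2e-4 * ((f_kHz * R0 / B_T) / (1.6 * 1.67 / 1.2)) ** (4.0 / 3.0)
    return ref * (nbar19 / 2.0) ** (2.0 / 3.0)

print(rel_threshold(13.0, 0.56, 1.2, 2.0))   # COMPASS-C: ~8e-4 vs ~1e-3 quoted
print(rel_threshold(0.17, 6.0, 4.9, 2.0))    # ITER: ~9e-6 vs ~2e-5 quoted
```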
Introduction of a pyramid guiding process for general musculoskeletal physical rehabilitation
Stark, Timothy W
2006-01-01
Successful instruction in a subject as complicated as physical rehabilitation demands organization. Understanding the principles and processes of such a field demands a hierarchy of steps to achieve the intended outcome. This paper introduces a proposed pyramid scheme of general physical rehabilitation principles. The purpose of the pyramid scheme is to allow for greater understanding by both the student and the patient. Just as the familiar Food Guide Pyramid guides nutrition choices, the pyramid helps the student appreciate and apply supported physical rehabilitation principles, while the patient comes to understand that there is a progressive method to their functional healing process. PMID:16759396
LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W
2008-01-01
Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pronounced L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density through the adiabatic invariants, including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculations using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster than the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly by the long computing time of accurate L* values; without them, real-time applications are limited in accuracy. Reanalysis of past conditions in the inner magnetosphere is used to understand physical processes and their effects; without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. With a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data suddenly becomes feasible.
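The surrogate idea itself is generic and easy to demonstrate (the toy function and scikit-learn network below are mine, not the LANL* model):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a small neural net to reproduce an "expensive" scalar function
# of a few inputs, then evaluate the net at a fraction of the cost --
# the same trade (large speedup, marginal added error) claimed above.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(5000, 3))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] * X[:, 2]   # stand-in for a slow code
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
print(np.mean(np.abs(net.predict(X[:200]) - y[:200])))  # small surrogate error
```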
Krawczyk, María C; Fernández, Rodrigo S; Pedreira, María E; Boccia, Mariano M
2017-07-01
Experimental psychology defines prediction error (PE) as a mismatch between expected and current events. It represents a unifying concept within the memory field, as it is the driving force of memory acquisition and updating. Prediction error induces the updating of consolidated memories, in strength or content, through memory reconsolidation. This process has two neurobiological phases: the destabilization (labilization) of a consolidated memory followed by its restabilization. The aim of this work is to emphasize the functional role of PE in the neurobiology of learning and memory, integrating and discussing different research areas: behavioral, neurobiological, computational and clinical psychiatry. Copyright © 2016 Elsevier Inc. All rights reserved.
Relativistic Transverse Gravitational Redshift
NASA Astrophysics Data System (ADS)
Mayer, A. F.
2012-12-01
The parametrized post-Newtonian (PPN) formalism is a tool for quantitative analysis of the weak gravitational field based on the field equations of general relativity. This formalism and its ten parameters provide the practical theoretical foundation for the evaluation of empirical data produced by space-based missions designed to map and better understand the gravitational field (e.g., GRAIL, GRACE, GOCE). Accordingly, mission data are interpreted in the context of the canonical PPN formalism; unexpected, anomalous data are explained as similarly unexpected but apparently real physical phenomena, which may be characterized as "gravitational anomalies," or by various sources contributing to the total error budget. Another possibility, which is typically not considered, is a small modeling error in canonical general relativity. The concept of the idealized point-mass spherical equipotential surface, which originates with Newton's law of gravity, is preserved in Einstein's synthesis of special relativity with accelerated reference frames in the form of the field equations. It was not previously realized that the fundamental principles of relativity invalidate this concept, and with it the idea that the gravitational field is conservative (i.e., that zero net work is done on any closed path). The ideal radial free fall of a material body from arbitrarily large range to a point on such an equipotential surface (S) determines a unique escape-velocity vector of magnitude v collinear with the acceleration vector of magnitude g at this point. For two such points on S separated by angle dφ, the Equivalence Principle implies distinct reference frames experiencing inertial acceleration of identical magnitude g in different directions in space. The complete equivalence of these inertially accelerated frames to their analogous frames at rest on S requires evaluation at instantaneous velocity v relative to a local inertial observer. Because these velocity vectors are not parallel, a symmetric energy potential exists between the frames that is quantified by the instantaneous Δv = v·dφ between them; in order for either frame to become indistinguishable from the other, such that their respective velocity and acceleration vectors are parallel, a change in velocity is required. While the qualitative features of general relativity imply this phenomenon (i.e., a symmetric potential difference between two points on a Newtonian 'equipotential surface' that is similar to a friction effect), it is not predicted by the field equations, due to a modeling error concerning time. This is an error of omission; time has fundamental geometric properties implied by the principles of relativity that are not reflected in the field equations. Where b is the radius and g the gravitational acceleration characterizing a spherical geoid S of an ideal point-source gravitational field, an elegant derivation resting on first principles shows that for two points at rest on S separated by a distance d << b, a symmetric relativistic redshift exists between these points of magnitude z = gd^2/(bc^2), which over 1 km at Earth sea level yields z ≈ 10^-17. It can be tested with a variety of methods, in particular laser interferometry. A more sophisticated derivation yields a considerably more complex predictive formula for any two points in a gravitational field.
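As a quick arithmetic check of the quoted magnitude (standard sea-level values assumed here; only the formula itself is from the abstract):

```latex
z \;=\; \frac{g\,d^{2}}{b\,c^{2}}
  \;=\; \frac{(9.81\ \mathrm{m\,s^{-2}})\,(10^{3}\ \mathrm{m})^{2}}
             {(6.37\times 10^{6}\ \mathrm{m})\,(3.00\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}}
  \;\approx\; 1.7\times 10^{-17}
```

which is in line with the z ≈ 10^-17 figure quoted for d = 1 km at Earth sea level.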
A Pilot Study Teaching Metrology in an Introductory Statistics Course
ERIC Educational Resources Information Center
Casleton, Emily; Beyler, Amy; Genschel, Ulrike; Wilson, Alyson
2014-01-01
Undergraduate students who have just completed an introductory statistics course often lack deep understanding of variability and enthusiasm for the field of statistics. This paper argues that by introducing the commonly underemphasized concept of measurement error, students will have a better chance of attaining both. We further present lecture…
Numerical characterization of plasma breakdown in reversed field pinches
NASA Astrophysics Data System (ADS)
Peng, Yanli; Zhang, Ya; Mao, Wenzhe; Yang, Zhoujun; Hu, Xiwei; Jiang, Wei
2018-02-01
In the reversed field pinch (RFP), there is considerable interest in investigating plasma breakdown: the plasma formed during breakdown may influence confinement and maintenance in the later stages of the discharge. However, up to now there has been no related work, experimental or computational, on plasma breakdown in the RFP. To clarify the physical mechanism behind plasma breakdown, the effects of the toroidal and error magnetic fields, as well as the loop voltage, have been studied. We find that the error magnetic field cannot be neglected even though it is quite small during the short plasma breakdown phase. As the toroidal magnetic field increases, the averaged electron energy after breakdown is reduced, which is disadvantageous for the later stages. In addition, unlike the voltage limits in the tokamak, loop voltages can be quite high because there is no superconductivity requirement. Volt-second consumption differs only slightly between loop voltages. A breakdown delay still exists in the various loop voltage cases, but it is much shorter than in the tokamak case. In all, successful breakdowns are possible in the RFP under a fairly broad range of parameters.
Pediatric Nurses' Perceptions of Medication Safety and Medication Error: A Mixed Methods Study.
Alomari, Albara; Wilson, Val; Solman, Annette; Bajorek, Beata; Tinsley, Patricia
2018-06-01
This study aims to outline the current workplace culture of medication practice in a pediatric medical ward. The objective is to explore the perceptions of nurses in a pediatric clinical setting as to why medication administration errors occur. As nurses have a central role in the medication process, it is essential to explore their perceptions of the factors influencing it; without this understanding, it is difficult to develop effective prevention strategies aimed at reducing medication administration errors. Previous studies were limited to exploring single, specific aspects of medication safety, and their survey designs may have elicited incomplete or inadequate information. This study is phase 1 of an action research project. Data collection included direct observation of nurses during medication preparation and administration, an audit based on the medication policy and guidelines, and focus groups with nursing staff. A thematic analysis was undertaken by each author independently to analyze the observation notes and focus group transcripts. Simple descriptive statistics were used to analyze the audit data. The study was conducted in a specialized pediatric medical ward. Four key themes were identified from the combined quantitative and qualitative data: (1) understanding medication errors, (2) the busy-ness of nurses, (3) the physical environment, and (4) compliance with medication policy and practice guidelines. Workload, frequent interruptions to process, poor physical environment design, lack of preparation space, and impractical medication policies were identified as barriers to safe medication practice. Overcoming these barriers requires organizations to review medication process policies and engage nurses more in medication safety research and in designing clinical guidelines for their own practice.
Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S
2013-06-01
Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
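The attenuation logic can be sketched with synthetic numbers (invented for illustration, not the OPEN data):

```python
import numpy as np

# If Q is an error-prone questionnaire measure of true activity level T,
# the naive regression of an outcome Y on Q is biased toward zero by the
# attenuation factor lam = cov(T, Q) / var(Q); regression calibration
# divides the naive slope by lam.
rng = np.random.default_rng(2)
n = 10_000
T = rng.normal(1.6, 0.2, n)            # true physical activity level
Q = T + rng.normal(0.0, 0.3, n)        # questionnaire report with error
Y = 1.0 * T + rng.normal(0.0, 0.5, n)  # outcome with true slope 1.0

naive = np.cov(Q, Y)[0, 1] / np.var(Q)   # attenuated slope, ~0.3 here
lam = np.cov(T, Q)[0, 1] / np.var(Q)     # attenuation factor
print(naive, naive / lam)                # calibration recovers ~1.0
```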
Aliasing errors in measurements of beam position and ellipticity
NASA Astrophysics Data System (ADS)
Ekdahl, Carl
2005-09-01
Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
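The aliasing mechanism is easy to reproduce: for a filament at radius r in a pipe of radius b, the wall signal contains all azimuthal moments (r/b)^m, and with N equally spaced detectors every moment with m ≡ ±1 (mod N) folds into the dipole estimate. The sketch below (standard image-current formula; geometry values mine) shows the systematic position error shrinking as (r/b)^(N-1) when detectors are added:

```python
import numpy as np

# Image-current signal of a filament at (r, theta0) inside a conducting
# pipe of radius b, sampled at N equally spaced wall detectors; position
# is estimated from the first discrete Fourier (dipole) moment.
def position_error(N, r=0.3, theta0=0.4, b=1.0):
    phi = 2.0 * np.pi * np.arange(N) / N
    s = (b**2 - r**2) / (b**2 + r**2 - 2.0 * b * r * np.cos(phi - theta0))
    s /= s.mean()                          # normalize out the monopole term
    x_est = b * np.mean(s * np.cos(phi))   # aliased dipole estimates
    y_est = b * np.mean(s * np.sin(phi))
    return np.hypot(x_est - r * np.cos(theta0), y_est - r * np.sin(theta0))

for N in (4, 8, 16):
    print(N, position_error(N))   # error drops roughly as (r/b)**(N-1)
```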
Long-term care physical environments--effect on medication errors.
Mahmood, Atiya; Chaudhury, Habib; Gaumont, Alana; Rust, Tiana
2012-01-01
Few studies examine physical environmental factors and their effects on staff health, effectiveness, work errors and job satisfaction. To address this gap, this study aims to examine environmental features and their role in medication and nursing errors in long-term care facilities. A mixed methodological strategy was used. Data were collected via focus groups, observing medication preparation and administration, and a nursing staff survey in four facilities. The paper reveals that, during the medication preparation phase, physical design, such as medication room layout, is a major source of potential errors. During medication administration, social environment is more likely to contribute to errors. Interruptions, noise and staff shortages were particular problems. The survey's relatively small sample size needs to be considered when interpreting the findings. Also, actual error data could not be included as existing records were incomplete. The study offers several relatively low-cost recommendations to help staff reduce medication errors. Physical environmental factors are important when addressing measures to reduce errors. The findings of this study underscore the fact that the physical environment's influence on the possibility of medication errors is often neglected. This study contributes to the scarce empirical literature examining the relationship between physical design and patient safety.
One Subject, Two Lands: My Journey in Condensed Matter Physics
NASA Astrophysics Data System (ADS)
Ramakrishnan, T. V.
2016-03-01
This is an account of a professional life in the field that was generally known as solid-state physics when I started working in it; India and the United States of America are the countries in which this life was largely played out. My attempts to understand various things in condensed matter physics, and efforts to put together people and activities in India in this field, are mainly the story.
NASA Astrophysics Data System (ADS)
Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger
2007-12-01
Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis and the distribution of model errors to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics, to the nutrients and plankton.
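Two of the summary statistics named above can be written down directly (standard definitions; the arrays are placeholders for matched model/observation pairs):

```python
import numpy as np

def bias(model, obs):
    return np.mean(model - obs)

def model_efficiency(model, obs):
    # Nash-Sutcliffe-type efficiency: 1 is perfect, 0 is no better than
    # the observed mean, negative is worse than the mean.
    return 1.0 - np.sum((model - obs)**2) / np.sum((obs - obs.mean())**2)

obs = np.array([1.0, 2.0, 3.0, 2.5, 1.5])
model = np.array([1.1, 1.8, 2.6, 2.7, 1.2])
print(bias(model, obs), model_efficiency(model, obs))
```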
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Erik; Blume-Kohout, Robin; Rudinger, Kenneth
PyGSTi is an implementation of Gate Set Tomography in the Python programming language. Gate Set Tomography (GST) is a theory and protocol for simultaneously estimating the state preparation, gate operations, and measurement effects of a physical system of one or many quantum bits (qubits). These estimates are based entirely on the statistics of experimental measurements, and their interpretation and analysis can provide a detailed understanding of the types of errors and imperfections in the physical system. In this way, GST provides not only a means of certifying the "goodness" of qubits but also a means of debugging (i.e. improving) them.
NASA Technical Reports Server (NTRS)
Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.
2004-01-01
One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed; rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case, only temperature is updated using a Gaussian covariance function; in the MvOI, salinity, zonal and meridional velocities, as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model; apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI, an estimation of the model error statistics is made by Monte Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of the different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme, two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when multivariate correction is used, as evident from analyses of the rms differences between these fields and the independent observations. The MvOI assimilation is found to improve upon the control run in generating water masses with properties close to those observed, while the UOI failed to maintain the temperature and salinity structure.
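The difference between the two schemes comes down to the structure of the gain. A minimal two-variable sketch (generic OI/Kalman update with invented numbers, not the MvOI implementation) shows how a temperature observation updates salinity only through the forecast error cross-covariance:

```python
import numpy as np

B = np.array([[0.50, 0.30],    # [[var(T),   cov(T,S)],
              [0.30, 0.40]])   #  [cov(S,T), var(S)]] from an ensemble
H = np.array([[1.0, 0.0]])     # temperature is the only observed variable
R = np.array([[0.10]])         # observation error variance
x_f = np.array([20.0, 35.0])   # forecast [T (deg C), S (psu)]
y = np.array([21.0])           # observed temperature

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain couples T and S
x_a = x_f + (K @ (y - H @ x_f)).ravel()
print(x_a)   # salinity moves too; zeroing cov(T,S) reproduces the UOI
```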
Prophylactic Bracing Has No Effect on Lower Extremity Alignment or Functional Performance.
Hueber, Garrett A; Hall, Emily A; Sage, Brad W; Docherty, Carrie L
2017-07-01
Prophylactic ankle bracing is commonly used during physical activity. Understanding how bracing affects body mechanics is critically important when discussing both injury prevention and sport performance. The purpose was to determine whether ankle bracing affects lower extremity mechanics during the Landing Error Scoring System (LESS) test and the Sage Sway Index (SSI). Thirty physically active participants volunteered for this study. Participants completed the LESS and SSI in both braced and unsupported conditions. Total errors were recorded for the LESS; total errors and time (seconds) were recorded for the SSI. The Wilcoxon signed-rank test was used to evaluate differences between the brace conditions for each dependent variable. The a priori alpha level was set at p<0.05. The Wilcoxon signed-rank test yielded no significant difference between the braced and unsupported conditions for the LESS (Z=-0.35, p=0.72), SSI time (Z=-0.36, p=0.72), or SSI errors (Z=-0.37, p=0.71). Ankle braces had no effect on subjective clinical assessments of lower extremity alignment or postural stability. Use of a prophylactic support at the ankle did not substantially alter the proximal components of the lower kinetic chain. © Georg Thieme Verlag KG Stuttgart · New York.
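For reference, the paired test reported above is a one-liner in standard statistics libraries (the scores below are invented, not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

braced      = np.array([5, 6, 4, 7, 5, 6, 5, 4, 6, 5])  # LESS errors, made up
unsupported = np.array([5, 5, 4, 7, 6, 6, 5, 5, 6, 4])
stat, p = wilcoxon(braced, unsupported)   # paired signed-rank test
print(stat, p)   # p > 0.05 would mirror the "no difference" finding
```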
WE-DE-206-02: MRI Hardware - Magnet, Gradient, RF Coils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kocharian, A.
Magnetic resonance imaging (MRI) has become an essential part of clinical imaging due to its ability to render high soft tissue contrast. Instead of ionizing radiation, MRI uses strong magnetic fields, radio frequency waves and field gradients to create diagnostically useful images. It can be used to image the anatomy as well as functional and physiological activities within the human body. Knowledge of the basic physical principles underlying MRI acquisition is vitally important to successful image production and proper image interpretation. This lecture will give an overview of the spin physics, the imaging principle of MRI, the hardware of the MRI scanner, and various pulse sequences and their applications. It aims to provide a conceptual foundation for understanding the image formation process of a clinical MRI scanner. Learning Objectives: Understand the origin of the MR signal and contrast at the spin physics level. Understand the main hardware components of an MRI scanner and their purposes. Understand the steps of MR image formation, including spatial encoding and image reconstruction. Understand the main kinds of MR pulse sequences and their characteristics.
WE-DE-206-04: MRI Pulse Sequences - Spin Echo, Gradient Echo, EPI, Non-Cartesian
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pooley, R.
WE-DE-206-01: MRI Signal in Biological Tissues - Proton, Spin, T1, T2, T2*
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorny, K.
[Surveillance of health care errors. An overview of the published data in Argentina].
Codermatz, Marcela A; Trillo, Carolina; Berenstein, Graciela; Ortiz, Zulma
2006-01-01
In recent decades, public health surveillance has extended its scope to new fields, such as medical errors, in order to improve patient safety. This study reviews the evidence produced in Argentina on the surveillance of medical errors. An exhaustive literature search was performed. A total of 4656 abstracts were assessed (150 MEDLINE, 145 LILACS, and 4361 hand-searched abstracts). Of these, 52 were analysed and 8 were considered relevant to health care error surveillance. Different approaches were used to study medical errors; some focused on patient safety and others on medical malpractice. There is still a need to improve the surveillance of this type of event; in particular, the reporting of study designs and surveillance attributes was of unclear quality. A critical appraisal and synthesis of all relevant studies on health care errors may help not only in understanding the state of the art, but also in defining research priorities.
Sensitivity analysis of non-cohesive sediment transport formulae
NASA Astrophysics Data System (ADS)
Pinto, Lígia; Fortunato, André B.; Freire, Paula
2006-10-01
Sand transport models are often based on semi-empirical equilibrium transport formulae that relate sediment fluxes to physical properties such as velocity, depth and characteristic sediment grain sizes. In engineering applications, errors in these physical properties affect the accuracy of the sediment fluxes. The present analysis quantifies error propagation from the input physical properties to the sediment fluxes, determines which ones control the final errors, and provides insight into the relative strengths, weaknesses and limitations of four total load formulae (Ackers and White, Engelund and Hansen, van Rijn, and Karim and Kennedy) and one bed load formulation (van Rijn). The various sources of uncertainty are first investigated individually, in order to pinpoint the key physical properties that control the errors. Since the strong non-linearity of most sand transport formulae precludes analytical approaches, a Monte Carlo method is validated and used in the analysis. Results show that the accuracy in total sediment transport evaluations is mainly determined by errors in the current velocity and in the sediment median grain size. For the bed load transport using the van Rijn formula, errors in the current velocity alone control the final accuracy. In a final set of tests, all physical properties are allowed to vary simultaneously in order to analyze the combined effect of errors. The combined effect of errors in all the physical properties is then compared to an estimate of the errors due to the intrinsic limitations of the formulae. Results show that errors in the physical properties can be dominant for typical uncertainties associated with these properties, particularly for small depths. A comparison between the various formulae reveals that the van Rijn formula is more sensitive to basic physical properties. Hence, it should only be used when physical properties are known with precision.
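The Monte Carlo step is simple to sketch. The power law below is an illustrative stand-in (not one of the four formulae studied), chosen because total-load fluxes scale with a high power of velocity, which is why velocity errors dominate:

```python
import numpy as np

# Propagate a 10% (1-sigma) velocity error through a flux q ~ u**5:
# the output error is amplified roughly five-fold and the distribution
# is skewed, so the mean flux is biased high.
rng = np.random.default_rng(3)
u = rng.normal(1.0, 0.10, 100_000)   # velocity samples with 10% error
q = u ** 5                           # schematic transport flux
print(q.mean(), q.std())             # mean ~1.10 (biased), spread ~50%
```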
[Understanding mistake-proofing].
de Saint Maurice, G; Giraud, N; Ausset, S; Auroy, Y; Lenoir, B; Amalberti, R
2011-01-01
The mistake-proofing concept often refers to physical devices that prevent actors from performing a wrong action. In anaesthesiology, one immediately thinks of the specific design of outlets for medical gases. More generally, the principle of mistake-proofing is to avoid an error by placing knowledge in the world rather than knowledge in the head. As often happens in risk management, healthcare has imported these ideas from industry. The computer is changing the mistake-proofing concept, which was initially based on physical design, as in the aerospace and automotive industries. The mistake-proofing concept may be applied to the prevention, detection, and mitigation of errors. Forcing functions are a specific form of mistake-proofing: they prevent a wrong action or force a virtuous one. Grout proposes a handy shortcut for identifying mistake-proofing devices: "If it is not possible to picture it in action, it is probably not a mistake-proofing device". Copyright © 2010 Elsevier Masson SAS. All rights reserved.
Pulsars and Acceleration Sites
NASA Technical Reports Server (NTRS)
Harding, Alice
2008-01-01
Rotation-powered pulsars are excellent laboratories for studying particle acceleration as well as the fundamental physics of strong gravity, strong magnetic fields and relativity. But even forty years after their discovery, we still do not understand their pulsed emission at any wavelength. I will review both the basic physics of pulsars and the latest developments in understanding their high-energy emission. Special and general relativistic effects play important roles in pulsar emission, from inertial frame-dragging near the stellar surface to aberration, time-of-flight delays and retardation of the magnetic field near the light cylinder. Understanding how these effects determine what we observe at different wavelengths is critical to unraveling the emission physics. Fortunately the Gamma-Ray Large Area Space Telescope (GLAST), with launch in May 2008, will detect many new gamma-ray pulsars and test the predictions of these models with unprecedented sensitivity and energy resolution for gamma-rays in the range of 30 MeV to 300 GeV.
NASA Astrophysics Data System (ADS)
Ne'Eman, Yuval
2003-08-01
The recently developed Irreversible Quantum Mechanics formalism describes physical reality at both the statistical and the particle levels, and voices have been heard suggesting that it be used in fundamental physics. Two examples are sketched in which similar steps were taken and proved to be terrible errors: Aristotle's rejection of the vacuum because "nature does not tolerate it", replacing it with a law of force linear in velocity; and Chew's rejection of Quantum Field Theory because "it is not unitary off-mass-shell". In particle physics, I suggest using the new representation as an "effective" picture without abandoning the canonical background.
Edgecomb, S J; Norton, K I
2006-05-01
Sports scientists require a thorough understanding of the energy demands of sports and physical activities so that optimal training strategies and game simulations can be constructed. A range of techniques has been used to directly assess or estimate the physiological and biochemical changes during competition. A fundamental approach to understanding the contribution of the energy systems to physical activity has involved time-motion studies. A number of tools have been used, from simple pen-and-paper methods and video recordings to sophisticated electronic tracking devices. Depending on the sport, there may be difficulties in using electronic tracking devices because of player safety concerns. This paper assesses two methods currently used to measure player movement patterns during competition: (1) global positioning system (GPS) technology and (2) a computer-based tracking (CBT) system that relies on a calibrated miniaturised playing field and mechanical movements of a tracker. Several approaches were used to determine the validity and reliability of these methods for measuring the distance covered by Australian footballers during games, and comparisons were made between the methods. The results indicate that distances measured using CBT overestimated the actual values (measured with a calibrated trundle wheel) by an average of about 5.8%, while the GPS system overestimated them by about 4.8%. Distances measured using CBT in experienced hands were as accurate as those from GPS technology. Both systems showed relatively small errors in true distances.
Development and Application of Predictive Tools for MHD Stability Limits in Tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brennan, Dylan; Miller, G. P.
This is a project to develop and apply analytic and computational tools to answer physics questions relevant to the onset of non-ideal magnetohydrodynamic (MHD) instabilities in toroidal magnetic confinement plasmas. The focused goal of the research is to develop predictive tools for these instabilities, including an inner-layer solution algorithm, a resistive wall with control coils, and energetic particle effects. The production phase compares studies of instabilities in such systems using analytic techniques, PEST-III and NIMROD. Two important physics puzzles are targeted as guiding thrusts for the analyses. The first is to form an accurate description of the physics determining whether the resistive wall mode or a tearing mode will appear first as β is increased at low rotation and low error fields in DIII-D. The second is to understand the physical mechanism behind recent NIMROD results indicating strong damping and stabilization from energetic particle effects on linear resistive modes. The work seeks to develop a highly relevant predictive tool for ITER, advance the theoretical description of this physics in general, and analyze these instabilities in experiments such as ASDEX Upgrade, DIII-D, JET, JT-60U and NSTX. The awardee on this grant is the University of Tulsa. The research efforts are supervised principally by Dr. Brennan. Support is included for two graduate students, and a strong collaboration with Dr. John M. Finn of LANL. The work includes several ongoing collaborations with General Atomics, PPPL, and the NIMROD team, among others.
Test of understanding of vectors: A reliable multiple-choice vector concept test
NASA Astrophysics Data System (ADS)
Barniol, Pablo; Zavala, Genaro
2014-06-01
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended problems in which a total of 2067 students participated. Using this taxonomy, we then designed a 20-item multiple-choice test [Test of understanding of vectors (TUV)] and administered it in English to 423 students who were completing the required sequence of introductory physics courses at a large private Mexican university. We evaluated the test's content validity, reliability, and discriminatory power. The results indicate that the TUV is a reliable assessment tool. We also conducted a detailed analysis of the students' understanding of the vector concepts evaluated in the test. The TUV is included in the Supplemental Material as a resource for other researchers studying vector learning, as well as instructors teaching the material.
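The reliability evaluation mentioned above is commonly done with an internal-consistency statistic; the abstract does not name the exact statistic used, so the sketch below shows a generic Kuder-Richardson 20 (KR-20) computation for dichotomously scored items, with a hypothetical response matrix.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 internal-consistency reliability for dichotomous (0/1) items.

    responses: shape (n_students, n_items), 1 = correct, 0 = incorrect.
    """
    k = responses.shape[1]                 # number of items
    p = responses.mean(axis=0)             # proportion correct per item
    q = 1.0 - p
    total = responses.sum(axis=1)          # each student's total score
    var_total = total.var(ddof=1)          # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / var_total)

# Hypothetical responses of 5 students to a 4-item test.
resp = np.array([[1, 1, 0, 1],
                 [1, 0, 0, 1],
                 [0, 1, 1, 1],
                 [1, 1, 1, 1],
                 [0, 0, 0, 1]])
print(f"KR-20 = {kr20(resp):.2f}")
```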
NASA Astrophysics Data System (ADS)
Haule, Kristjan
2018-04-01
The Dynamical Mean Field Theory (DMFT), in combination with band structure methods, has been able to address the rich physics of correlated materials, such as fluctuating local moments, spin and orbital fluctuations, atomic multiplet physics and band formation, on an equal footing. It is increasingly recognized that a more predictive ab-initio theory of correlated systems also needs to address the feedback effect of the correlated electronic structure on the ionic positions, as the metal-insulator transition is almost always accompanied by considerable structural distortions. We will review a recently developed merger of the Density Functional Theory (DFT) and DMFT methods, dubbed DFT + embedded DMFT (DFT+eDMFT), which successfully addresses this challenge. It is based on the stationary Luttinger-Ward functional to minimize the numerical error, it subtracts the exact double counting of DFT and DMFT, and it implements self-consistent forces on all atoms in the unit cell. In a few examples, we will also show how the method elucidated the important feedback effect of correlations on the crystal structure of rare earth nickelates to explain the mechanism of the metal-insulator transition. The method showed that such a feedback effect is also essential to understand the dynamic stability of the high-temperature body-centered cubic phase of elemental iron, and in particular it predicted a strong enhancement of the electron-phonon coupling over DFT values in FeSe, which was very recently verified by a pioneering time-domain experiment.
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-07
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
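For orientation, every function-counterpoise scheme builds on the Boys-Bernardi idea of recomputing fragment energies in the full cluster basis; the site-site and Valiron-Mayer schemes above generalize it to many subunits. A minimal dimer statement (the standard textbook form, not the paper's n-body expressions):

```latex
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
  = E^{AB}_{AB} \; - \; E^{AB}_{A} \; - \; E^{AB}_{B}
```

where the subscript denotes the fragment evaluated and the superscript the basis in which it is computed, so both monomers are evaluated in the full dimer basis AB.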
Simulations in Medicine and Biology: Insights and perspectives
NASA Astrophysics Data System (ADS)
Spyrou, George M.
2015-01-01
Modern medicine and biology have been transformed into quantitative sciences of high complexity, with challenging objectives. The aims of medicine are related to early diagnosis, effective therapy, accurate intervention, real-time monitoring, optimization of procedures, systems and instruments, error reduction, and knowledge extraction. Concurrently, following the explosive production of biological data concerning DNA, RNA, and protein biomolecules, a plethora of questions has been raised in relation to their structure and function, the interactions between them, their relationships and dependencies, their regulation and expression, their location, and their thermodynamic characteristics. Furthermore, the interplay between medicine and biology gives rise to fields like molecular medicine and systems biology, which are further interconnected with physics, mathematics, informatics, and engineering. Modelling and simulation is a powerful tool in the fields of medicine and biology. By simulating the phenomena hidden inside a diagnostic or therapeutic medical procedure, we are able to obtain control over the whole system and perform multilevel optimization. Furthermore, modelling and simulation gives insight into the various scales of biological representation, facilitating the understanding of the huge amounts of derived data and the related mechanisms behind them. Several examples, as well as the insights and perspectives of simulations in biomedicine, will be presented.
Terminator field-aligned current system: A new finding from model-assimilated data set (MADS)
NASA Astrophysics Data System (ADS)
Zhu, L.; Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Gardner, L. C.; Eccles, J. V.; Rice, D.
2013-12-01
Physics-based data assimilation models have been recognized by the space science community as the most accurate approach to specify and forecast the space weather of the solar-terrestrial environment. The model-assimilated data sets (MADS) produced by these models constitute an internally consistent time series of global three-dimensional fields whose accuracy can be estimated. Because of its internal consistency of physics and its complete description of the status of global systems, a MADS is also a powerful tool to identify systematic errors in measurements, reveal missing physics in physical models, and discover important dynamical physical processes that are inadequately observed or missed by measurements due to observational limitations. In past years, we developed a data assimilation model for high-latitude ionospheric plasma dynamics and electrodynamics. With a set of physical models, an ensemble Kalman filter, and the ingestion of data from multiple observations, the data assimilation model can produce a self-consistent time series of complete descriptions of the global high-latitude ionosphere, which includes the convection electric field, horizontal and field-aligned currents, conductivity, as well as 3-D plasma densities and temperatures. In this presentation, we will show a new field-aligned current system discovered from the analysis of the MADS produced by our data assimilation model. This new current system appears and develops near the ionospheric terminator. The dynamical features of this current system will be described, and its connection to the active role of the ionosphere in M-I coupling will be discussed.
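The abstract names an ensemble Kalman filter as the assimilation engine. As a hedged illustration (not the authors' code), the sketch below implements the textbook stochastic-EnKF analysis step, in which the forecast covariance is estimated from an ensemble and each member is updated against perturbed observations; the dimensions and the observation operator H are placeholders.

```python
import numpy as np

def enkf_analysis(X: np.ndarray, y: np.ndarray, H: np.ndarray, R: np.ndarray,
                  rng: np.random.Generator) -> np.ndarray:
    """One stochastic EnKF analysis step.

    X: forecast ensemble, shape (n_state, n_members)
    y: observation vector, shape (n_obs,)
    H: linear observation operator, shape (n_obs, n_state)
    R: observation-error covariance, shape (n_obs, n_obs)
    """
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (n - 1)                         # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Update each member against observations perturbed with R-consistent noise.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 20))   # toy 10-variable state, 20 ensemble members
H = np.eye(3, 10)               # observe the first 3 state variables
Xa = enkf_analysis(X, np.ones(3), H, 0.1 * np.eye(3), rng)
```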
Simbios: an NIH national center for physics-based simulation of biological structures.
Delp, Scott L; Ku, Joy P; Pande, Vijay S; Sherman, Michael A; Altman, Russ B
2012-01-01
Physics-based simulation provides a powerful framework for understanding biological form and function. Simulations can be used by biologists to study macromolecular assemblies and by clinicians to design treatments for diseases. Simulations help biomedical researchers understand the physical constraints on biological systems as they engineer novel drugs, synthetic tissues, medical devices, and surgical interventions. Although individual biomedical investigators make outstanding contributions to physics-based simulation, the field has been fragmented. Applications are typically limited to a single physical scale, and individual investigators usually must create their own software. These conditions created a major barrier to advancing simulation capabilities. In 2004, we established a National Center for Physics-Based Simulation of Biological Structures (Simbios) to help integrate the field and accelerate biomedical research. In 6 years, Simbios has become a vibrant national center, with collaborators in 16 states and eight countries. Simbios focuses on problems at both the molecular scale and the organismal level, with a long-term goal of uniting these in accurate multiscale simulations.
Kirlian Photography as a Teaching Tool of Physics
NASA Astrophysics Data System (ADS)
Terrel, Andy; Thacker, Beth Ann, , Dr.
2002-10-01
There are a number of groups across the country working on redesigning introductory physics courses by incorporating physics education research, modeling, and making the courses appeal to students in broader fields. We spent the summer exploring Kirlian photography, a subject that can be understood by students with a basic comprehension of electrostatics but is still questioned by many people in other fields. Kirlian photography's applications have captivated alternative medicine, but research from both physics and biology is still required to determine whether it has potential as a medical tool. We used a simple setup to reproduce the established physics and to assess whether it could be used in an educational setting. I will demonstrate how Kirlian photography can be explained by physics, and also how the topic still needs research to completely understand its possible biological applications. By incorporating such a topic into a curriculum, one is able to teach students to explore supposed supernatural phenomena scientifically and to promote research among undergraduate students.
Problem-Based Learning in Physics: The Power of Students Teaching Students.
ERIC Educational Resources Information Center
Duch, Barbara J.
1996-01-01
Describes an honors general physics course designed to demonstrate to students that physics is vital to their understanding of physiology, medicine, the human body, rehabilitation, and other health fields. Presents evidence that indicates that active group learning and connections to real-world applications help students learn physics and apply…
Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens
NASA Astrophysics Data System (ADS)
Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl
2016-01-01
As samples of ever decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10⁻¹¹ Am² the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains that statistical errors are negligible, but "single silicate crystal" work on, for example, zircon, plagioclase, and olivine crystals is approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and the paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.
Effect of neoclassical toroidal viscosity on error-field penetration thresholds in tokamak plasmas.
Cole, A J; Hegna, C C; Callen, J D
2007-08-10
A model for error-field penetration is developed that includes nonresonant as well as the usual resonant error-field effects. The nonresonant components cause a neoclassical toroidal viscous torque that keeps the plasma rotating at a rate comparable to the ion diamagnetic frequency. The new theory is used to examine resonant error-field penetration threshold scaling in Ohmic tokamak plasmas. Compared to previous theoretical results, we find the plasma is less susceptible to error-field penetration and locking, by a factor that depends on the nonresonant error-field amplitude.
Barriers and facilitators to recovering from e-prescribing errors in community pharmacies.
Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A
2015-01-01
To explore barriers and facilitators to recovery from e-prescribing errors in community pharmacies and to explore practical solutions for work system redesign to ensure successful recovery from errors. Cross-sectional qualitative design using direct observations, interviews, and focus groups. Five community pharmacies in Wisconsin. 13 pharmacists and 14 pharmacy technicians. Observational field notes and transcribed interviews and focus groups were subjected to thematic analysis guided by the Systems Engineering Initiative for Patient Safety (SEIPS) work system and patient safety model. Barriers and facilitators to recovering from e-prescription errors in community pharmacies. Organizational factors, such as communication, training, teamwork, and staffing levels, play an important role in recovering from e-prescription errors. Other factors that could positively or negatively affect recovery of e-prescription errors include level of experience, knowledge of the pharmacy personnel, availability or usability of tools and technology, interruptions and time pressure when performing tasks, and noise in the physical environment. The SEIPS model sheds light on key factors that may influence recovery from e-prescribing errors in pharmacies, including the environment, teamwork, communication, technology, tasks, and other organizational variables. To be successful in recovering from e-prescribing errors, pharmacies must provide the appropriate working conditions that support recovery from errors.
Types and Characteristics of Data for Geomagnetic Field Modeling
NASA Technical Reports Server (NTRS)
Langel, R. A. (Editor); Baldwin, R. T. (Editor)
1992-01-01
Given here is material submitted at a symposium convened on Friday, August 23, 1991, at the General Assembly of the International Union of Geodesy and Geophysics (IUGG) held in Vienna, Austria. Models of the geomagnetic field are only as good as the data upon which they are based, and depend upon correct understanding of data characteristics such as accuracy, correlations, systematic errors, and general statistical properties. This symposium was intended to expose and illuminate these data characteristics.
Physical fault tolerance of nanoelectronics.
Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N
2011-04-29
The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.
NASA Astrophysics Data System (ADS)
Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw
2005-09-01
Sound field reproduction finds applications in listening to prerecorded music or in synthesizing virtual acoustics. The objective is to recreate a sound field in a listening environment. Wave field synthesis (WFS) is a known open-loop technology which assumes that the reproduction environment is anechoic. Classical WFS therefore does not perform well in a real reproduction space such as a room. Previous work has suggested that it is physically possible to reproduce a progressive wave field in an in-room situation using active control approaches. In this paper, a formulation of adaptive wave field synthesis (AWFS) introduces practical possibilities for adaptive sound field reproduction combining WFS and active control (with WFS departure penalization) with a limited number of error sensors. AWFS includes WFS and closed-loop "Ambisonics" as limiting cases. This leads to the modification of the multichannel filtered-reference least-mean-square (FXLMS) and filtered-error LMS (FELMS) adaptive algorithms for AWFS. Decentralization of AWFS for sound field reproduction is introduced on the basis of sources' and sensors' radiation modes. Such decoupling may lead to decentralized control of source strength distributions and may reduce the computational burden of the FXLMS and FELMS algorithms used for AWFS. [Work funded by NSERC, NATEQ, Université de Sherbrooke and VRQ.]
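The FXLMS update named above has a compact single-channel form that may help fix ideas; the multichannel AWFS version adds reproduction-error weighting and the WFS-departure penalty, both omitted here. A minimal sketch, assuming a known FIR secondary path s and its identified model s_hat (all names hypothetical):

```python
import numpy as np

def fxlms(x, d, s, s_hat, n_taps=32, mu=1e-3):
    """Single-channel filtered-reference LMS (FXLMS) sketch.

    x: reference signal; d: desired signal at the error microphone;
    s: true secondary path (FIR); s_hat: its identified model.
    Returns the error-signal history.
    """
    w = np.zeros(n_taps)                   # adaptive control filter
    y_buf = np.zeros(len(s))               # recent control-source outputs
    xf = np.convolve(x, s_hat)[:len(x)]    # reference filtered through s_hat
    e_hist = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        xb = x[n - n_taps + 1:n + 1][::-1]     # recent reference samples
        y_buf = np.roll(y_buf, 1)
        y_buf[0] = w @ xb                      # control-source output sample
        e = d[n] - s @ y_buf                   # reproduction error at the sensor
        w += mu * e * xf[n - n_taps + 1:n + 1][::-1]  # FXLMS weight update
        e_hist[n] = e
    return e_hist

# Toy run: reproduce a field the secondary path can actually produce.
rng = np.random.default_rng(0)
x = rng.normal(size=4000)
s = np.array([0.0, 0.8, 0.3])            # hypothetical secondary path
d = np.convolve(x, s)[:len(x)]           # target field at the error sensor
e = fxlms(x, d, s, s_hat=s)              # error decays as w converges
```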
Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report
Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo
2013-01-01
Context: Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective: To provide recommendations to improve the accuracy of physical activity derived from self-report. Process: We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings: We identified a conceptual framework for reducing errors using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are behavioral parameters of the type, intensity, frequency, and duration of physical activities performed, activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions: Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451
MO-A-9A-01: Innovation in Medical Physics Practice: 3D Printing Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehler, E; Perks, J; Rasmussen, K
2014-06-15
3D printing, also called additive manufacturing, has great potential to advance the field of medicine. Many medical uses have been exhibited, from facial reconstruction to the repair of pulmonary obstructions. The strength of 3D printing is to quickly convert a 3D computer model into a physical object. Medical use of 3D models is already ubiquitous with technologies such as computed tomography and magnetic resonance imaging. Thus, tailoring 3D printing technology to medical functions has the potential to impact patient care. This session will discuss applications to the field of Medical Physics. Topics discussed will include an introduction to 3D printing methods as well as examples of real-world uses of 3D printing spanning clinical and research practice in diagnostic imaging and radiation therapy. The session will also compare 3D printing to other manufacturing processes and discuss a variety of uses of 3D printing technology outside the field of Medical Physics. Learning Objectives: (1) understand the technologies available for 3D printing; (2) understand methods to generate 3D models; (3) identify the benefits and drawbacks of rapid prototyping/3D printing; (4) understand the potential issues related to clinical use of 3D printing.
A Worksheet to Enhance Students’ Conceptual Understanding in Vector Components
NASA Astrophysics Data System (ADS)
Wutchana, Umporn; Emarat, Narumon
2017-09-01
With and without physical context, we explored 59 undergraduate students' conceptual and procedural understanding of vector components using both open-ended problems and multiple-choice items designed based on research instruments used in physics education research. The results showed that a number of students produced errors and revealed alternative conceptions, especially when asked to draw the graphical form of vector components. This indicated that most of them did not develop a strong foundation of understanding of vector components and could not apply those concepts to problems with physical context. Based on the findings, we designed a worksheet to enhance the students' conceptual understanding of vector components. The worksheet is composed of three parts, which help students construct their own understanding of the definition, graphical form, and magnitude of vector components. To validate the worksheet, focus-group discussions with 3 and 10 graduate students (in-service science teachers) were conducted. The modified worksheet was then distributed to 41 grade 9 students in a science class. The students spent approximately 50 minutes completing the worksheet. They sketched and measured vectors and their components and compared the results with trigonometric ratios to consolidate the concepts of vector components. After they completed the worksheet, their conceptual models were verified: 83% of them constructed the correct model of vector components.
A Missile-Borne Angular Velocity Sensor Based on Triaxial Electromagnetic Induction Coils
Li, Jian; Wu, Dan; Han, Yan
2016-01-01
Aiming to solve the problem of the limited measuring range for angular motion parameters of high-speed rotating projectiles in the field of guidance and control, a self-adaptive measurement method for angular motion parameters based on the electromagnetic induction principle is proposed. First, a bent "I-shaped" framework is used to arrange triaxial coils in a mutually orthogonal way. Under the condition of high rotational speed of a projectile, the induction signal of the projectile moving across the geomagnetic field is acquired using the coils. Second, the frequency of the pulse signal is adjusted self-adaptively. Angular velocity and angular displacement are calculated by periodic pulse counting and pulse accumulation, respectively. Finally, a principle prototype of the sensor was developed, and its performance in measuring angular motion parameters was tested in semi-physical and physical simulation experiments. Experimental results demonstrate that the sensor has a wide measuring range of angular velocity from 1 rps to 100 rps with a measurement error of less than 0.3%, and the angular displacement measurement error is lower than 0.2°. The proposed method satisfies measurement requirements for high-speed rotating projectiles with an extremely high dynamic range of rotational speed and high precision, and has clear value for engineering applications in the fields of attitude determination and geomagnetic navigation. PMID:27706039
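To illustrate the pulse-counting principle described above (not the authors' algorithm): assuming each revolution of the projectile produces one induction-signal cycle, the roll rate follows from counting pulses in a time window and the angular displacement from accumulating them. A toy sketch with hypothetical numbers:

```python
# Hypothetical pulse-counting estimate of roll rate and angular displacement,
# assuming one induction-signal cycle per revolution in the geomagnetic field.
pulses_in_window = 250        # induction cycles counted in the window
window_s = 2.5                # counting-window length (s)

rps = pulses_in_window / window_s             # angular velocity (rev/s)
angle_deg = pulses_in_window * 360.0          # accumulated displacement (deg)

print(f"angular velocity: {rps:.1f} rps")     # 100.0 rps for these numbers
print(f"accumulated displacement: {angle_deg:.0f} deg")
```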
Experimental investigation of observation error in anuran call surveys
McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.
2010-01-01
Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
The Interface between Physics and Biology: An Unexplored Territory.
ERIC Educational Resources Information Center
Marx, George
1980-01-01
Discusses from the physicist's point of view the connection between biology and physics and the usefulness of physical laws for understanding biological processes. Discusses these fields of research in secondary school science: molecular science, regulation, statistics and information, corrosion and evolution, chance and necessity, and…
A review of multimodel superensemble forecasting for weather, seasonal climate, and hurricanes
NASA Astrophysics Data System (ADS)
Krishnamurti, T. N.; Kumar, V.; Simon, A.; Bhardwaj, A.; Ghosh, T.; Ross, R.
2016-06-01
This review provides a summary of work in the area of ensemble forecasts for weather, climate, oceans, and hurricanes. This includes a combination of multiple forecast model results that does not dwell on the ensemble mean but uses a unique collective bias reduction procedure. A theoretical framework for this procedure is provided, utilizing a suite of models constructed from the well-known Lorenz low-order nonlinear system. A tutorial that includes a walk-through table and illustrates the inner workings of the multimodel superensemble's principle is provided. Systematic errors in a single deterministic model arise from a host of features that range from the model's initial state (data assimilation), resolution, representation of physics, dynamics, and ocean processes, to local aspects of orography, water bodies, and details of the land surface. Models, in their diversity of representation of such features, end up leaving unique signatures of systematic errors. The multimodel superensemble utilizes as many as 10 million weights to take into account the bias errors arising from these diverse features of the multimodels. The design of a single deterministic forecast model that utilizes multiple features derived from this large volume of weights is provided here. This has led to a better understanding of error growth and collective bias reduction for several of the physical parameterizations within diverse models, such as cumulus convection, planetary boundary layer physics, and radiative transfer. A number of examples of the weather, seasonal climate, hurricane and subsurface oceanic forecast skills of member models, the ensemble mean, and the superensemble are provided.
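At each grid point, the superensemble's collective bias reduction amounts to a linear regression of observed anomalies onto member-model anomalies over a training period; the forecast is then the observed climatology plus the weighted sum of member anomalies. A minimal sketch of that construction (hypothetical arrays, not the authors' code):

```python
import numpy as np

def train_superensemble(F: np.ndarray, o: np.ndarray):
    """Fit per-model weights over a training period at one grid point.

    F: member forecasts, shape (n_times, n_models); o: observations (n_times,).
    """
    Fbar, obar = F.mean(axis=0), o.mean()
    A = F - Fbar                                   # member anomalies
    w, *_ = np.linalg.lstsq(A, o - obar, rcond=None)
    return w, Fbar, obar

def superensemble_forecast(f_new, w, Fbar, obar):
    """Combine one new set of member forecasts into the superensemble value."""
    return obar + (f_new - Fbar) @ w

rng = np.random.default_rng(1)
truth = rng.normal(size=120)
# Three hypothetical member models with distinct biases and noise.
F = np.stack([1.2 * truth + 0.5 + 0.3 * rng.normal(size=120),
              0.8 * truth - 0.2 + 0.3 * rng.normal(size=120),
              truth + 0.4 * rng.normal(size=120)], axis=1)
w, Fbar, obar = train_superensemble(F[:100], truth[:100])   # training phase
pred = superensemble_forecast(F[100], w, Fbar, obar)        # forecast phase
```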
Using health psychology techniques to manage chronic physical symptoms.
Barley, Elizabeth; Lawson, Victoria
2016-12-08
Chest pain and palpitations, non-malignant pain, breathlessness and fatigue often endure despite the receipt of appropriate nursing and medical care. This is distressing for patients, impacts on their quality of life and ability to function, and is associated with high healthcare usage and costs. The cognitive behavioural approach offers nurses a model to understand how people's perceptions and beliefs and their emotional, behavioural and physiological reactions are linked. Common 'thinking errors' which can exacerbate symptom severity and impact are highlighted. Understanding of this model may help nurses to help patients cope better with their symptoms by helping them to come up with alternative, more helpful beliefs and practices. Many Improving Access to Psychological Therapies services offer support to people with chronic physical symptoms, and nurses are encouraged to signpost patients to them.
Effects of the local structure dependence of evaporation fields on field evaporation behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Lan; Marquis, Emmanuelle A., E-mail: emarq@umich.edu; Withrow, Travis
2015-12-14
Accurate three-dimensional reconstructions of atomic positions and full quantification of the information contained in atom probe microscopy data rely on understanding the physical processes taking place during field evaporation of atoms from needle-shaped specimens. However, the modeling framework for atom probe microscopy has only limited quantitative justification. Building on the continuum field models previously developed, we introduce a more physical approach with the selection of evaporation events based on density functional theory calculations. This model reproduces key features observed experimentally in terms of sequence of evaporation, evaporation maps, and depth resolution, and provides insights into the physical limit for spatial resolution.
ERIC Educational Resources Information Center
Evans, John; Penney, Dawn
2008-01-01
Background: This paper develops an analysis of how "educability" and "physical ability" are socially configured through the practices of physical education (PE) in schools. We pursue this interest as part of a broader project, shared by many in the wider community of social science researchers in PE, to better understand how…
Numerical Study of Plasmonic Efficiency of Gold Nanostripes for Molecule Detection
2015-01-01
In plasmonics, the accurate computation of the electromagnetic field enhancement is necessary for determining the amplitude and the spatial extension of the field around nanostructures. Here, the problem of the interaction between an electromagnetic excitation and gold nanostripes is solved. An optimization scheme, including an adaptive remeshing process with an error estimator, is used to solve the problem through a finite element method. The variations of the electromagnetic field amplitude and the plasmonic active zones around nanostructures for molecule detection are studied, taking into account the physical and geometrical parameters of the nanostripes. The relationship between the sizes and the number of nanostripes is shown. PMID:25734184
NASA Technical Reports Server (NTRS)
Ballabrera-Poy, J.; Busalacchi, A.; Murtugudde, R.
2000-01-01
A reduced order Kalman Filter, based on a simplification of the Singular Evolutive Extended Kalman (SEEK) filter equations, is used to assimilate observed fields of the surface wind stress, sea surface temperature and sea level into the nonlinear coupled ocean-atmosphere model of Zebiak and Cane. The SEEK filter projects the Kalman Filter equations onto a subspace defined by the eigenvalue decomposition of the error forecast matrix, allowing its application to high dimensional systems. The Zebiak and Cane model couples a linear reduced gravity ocean model with a single vertical mode atmospheric model of Zebiak. The compatibility between the simplified physics of the model and each observed variable is studied separately and together. The results show the ability of the model to represent the simultaneous value of the wind stress, SST and sea level when the fields are limited to the latitude band 10 deg S - 10 deg N. In this first application of the Kalman Filter to a coupled ocean-atmosphere prediction model, the sea level fields are assimilated in terms of the Kelvin and Rossby modes of the thermocline depth anomaly. An estimation of the error of these modes is derived from the projection of an estimation of the sea level error onto such modes. This method gives a value of 12 for the error of the Kelvin amplitude, and 6 m of error for the Rossby component of the thermocline depth. The ability of the method to reconstruct the state of the equatorial Pacific and predict its time evolution is demonstrated. The method is shown to be quite robust for predictions up to six months, and able to predict the onset of the 1997 warm event fifteen months before its occurrence.
The graduate research field choice of women in academic physics and astronomy: A pilot study
NASA Astrophysics Data System (ADS)
Barthelemy, Ramón S.; Grunert, Megan L.; Henderson, Charles R.
2013-01-01
The low representation of women in physics is apparent from the undergraduate level through faculty positions. However, when looking at the percentage of women PhD graduates in the closely related field of astronomy (40%) and women PhDs in physics education research (30%), it is found that those areas have higher representations of women compared to women physics PhD graduates (18%). This study seeks to understand the research subfield choice of women in academic physics and astronomy at large US research universities through in-depth interviews and a grounded theory analytical approach. Though preliminary results have not shown why women chose their graduate research field, they have shown that positive pre-college experiences are bringing these women to physics, while supportive advisors and collaboration amongst students are encouraging these women to persist.
ERIC Educational Resources Information Center
Clay, Tansy W.; Fox, Jennifer B.; Grunbaum, Daniel; Jumars, Peter A.
2008-01-01
The authors have developed and field-tested high school-level curricular materials that guide students to use biology, mathematics, and physics to understand plankton and how these tiny organisms move in a world where their intuition does not apply. The authors chose plankton as the focus of their materials primarily because the challenges faced…
Predicting scattering scanning near-field optical microscopy of mass-produced plasmonic devices
NASA Astrophysics Data System (ADS)
Otto, Lauren M.; Burgos, Stanley P.; Staffaroni, Matteo; Ren, Shen; Süzer, Özgün; Stipe, Barry C.; Ashby, Paul D.; Hammack, Aeron T.
2018-05-01
Scattering scanning near-field optical microscopy enables optical imaging and characterization of plasmonic devices with nanometer-scale resolution well below the diffraction limit. This technique enables developers to probe and understand the waveguide-coupled plasmonic antenna in as-fabricated heat-assisted magnetic recording heads. In order to validate and predict results and to extract information from experimental measurements that is physically comparable to simulations, a model was developed to translate the simulated electric field into expected near-field measurements using physical parameters specific to scattering scanning near-field optical microscopy physics. The methods used in this paper prove that scattering scanning near-field optical microscopy can be used to determine critical sub-diffraction-limited dimensions of optical field confinement, which is a crucial metrology requirement for the future of nano-optics, semiconductor photonic devices, and biological sensing where the near-field character of light is fundamental to device operation.
Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H
2009-09-01
Retinal nerve fiber layer (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and the degree to which they are included by the RNFL algorithm.
NASA Astrophysics Data System (ADS)
Escalada, Lawrence T.; Moeller, Julia K.
2006-02-01
With the existing shortage of qualified high school physics teachers and the current mandate of the No Child Left Behind Act requiring teachers to be "highly qualified" in all subjects they teach, university physics departments must offer content courses and programs that allow out-of-field high school physics teachers to meet this requirement. This paper will identify how the University of Northern Iowa Physics Department is attempting to address the needs of the high school physics teacher through its course offerings and professional development programs for teachers. The effectiveness of one such physics professional development program, the UNI Physics Institute (UNI-PI), on secondary science teachers' and their students' conceptual understanding of Newtonian mechanics, and on the teachers' instructional practices, was investigated. Twenty-one Iowa out-of-field high school physics teachers participating in the program were able to complete the physics coursework required to obtain the State of Iowa 7-12 Grade Physics Teaching endorsement. Twelve of the participants completed a two-year program during the 2002 and 2003 summers. Background information, pre- and post-test physics conceptual assessments, and other data were collected from participants throughout the Institute. Participants collected pre- and post-test conceptual assessment data from their students during the 2002-2003 and 2003-2004 academic years. This comprehensive assessment data revealed the Institute's influence on participants' and students' conceptual understanding of Newtonian mechanics. The results of this investigation, the insights we have gained, and possible future directions for professional development will be shared.
Simulation study on combination of GRACE monthly gravity field solutions
NASA Astrophysics Data System (ADS)
Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian
2016-04-01
The GRACE monthly gravity fields from different processing centers are combined in the frame of the project EGSIEM. This combination is first done on the solution level to define weights that will then be used for a combination on the normal-equation level. The applied weights are based on the deviation of the individual gravity fields from the arithmetic mean of all involved gravity fields. This weighting scheme relies on the assumption that the true gravity field is close to the arithmetic mean of the involved individual gravity fields. However, the arithmetic mean can be affected by systematic errors in individual gravity fields, which consequently results in inappropriate weights. For the future operational scientific combination service of GRACE monthly gravity fields, it is necessary to examine the validity of the weighting scheme in possible extreme cases as well. To investigate this, we conduct a simulation study on the combination of gravity fields. First, we show how a deviating gravity field can affect the combined solution in terms of signal and noise in the spatial domain. We also show the impact of systematic errors in individual gravity fields on the resulting combined solution. Then, we investigate whether the weighting scheme still works in the presence of outliers. The results of this simulation study will be useful to understand and validate the weighting scheme applied to the combination of the monthly gravity fields.
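A hedged sketch of the deviation-based weighting described above (the actual EGSIEM scheme may differ in detail): each solution gets a weight inversely proportional to the variance of its deviation from the arithmetic mean of all solutions, and the combination is the weighted average.

```python
import numpy as np

def combine_fields(fields: np.ndarray) -> np.ndarray:
    """Combine individual gravity-field solutions on the solution level.

    fields: shape (n_centers, n_coeffs) -- e.g., spherical-harmonic
    coefficient vectors from different processing centers.
    """
    mean = fields.mean(axis=0)
    # Empirical error measure: variance of each solution about the mean.
    var = ((fields - mean) ** 2).mean(axis=1)
    w = 1.0 / var
    w /= w.sum()                    # normalized inverse-variance weights
    return (w[:, None] * fields).sum(axis=0)

# Toy example: three "centers", one noisier than the others.
rng = np.random.default_rng(2)
truth = rng.normal(size=500)
sols = np.stack([truth + 0.01 * rng.normal(size=500),
                 truth + 0.01 * rng.normal(size=500),
                 truth + 0.05 * rng.normal(size=500)])
combined = combine_fields(sols)     # noisy center is down-weighted
```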
The past, present and future of pulsars
NASA Astrophysics Data System (ADS)
Bell Burnell, Jocelyn
2017-12-01
On the 50th anniversary of the accidental discovery of pulsars (pulsating radio stars, also known as neutron stars) I reflect on the process of their detection and how our understanding of these stars gradually grew. Fifty years on, we have a much better (but still incomplete) understanding of these extreme objects, which I summarize here. The study of pulsars is advancing several areas of fundamental physics, including general relativity, particle physics, condensed-matter physics, and radiation processes in extreme electric and magnetic fields. New observational facilities coming online in the radio regime (such as the Five hundred meter Aperture Spherical Telescope and the Square Kilometre Array precursors) will revolutionize the search for pulsars by accessing thousands more, thus ushering in a new era of discovery for the field.
Model for intensity calculation in electron guns
NASA Astrophysics Data System (ADS)
Doyen, O.; De Conto, J. M.; Garnier, J. P.; Lefort, M.; Richard, N.
2007-04-01
The calculation of the current in an electron gun structure is one of the main investigations involved in understanding electron gun physics. In particular, various simulation codes exist but often show important discrepancies with experiment. Moreover, those differences cannot be reduced because of the lack of physical information in these codes. We present a simple physical three-dimensional model, valid for all kinds of gun geometries. This model is more precise than the other simulation codes and models encountered and allows a real understanding of electron gun physics. It is based only on the calculation of the Laplace electric field at the cathode, the use of the classical Child-Langmuir current density, and a geometrical correction to this law. Finally, the intensity-versus-voltage characteristic curve can be precisely described with only a few physical parameters. Indeed, we have shown that the electron gun current generation is governed mainly by the shape of the electric field at the cathode without beam and by the gap distance of an equivalent infinite planar diode.
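For reference, the classical Child-Langmuir space-charge-limited current density for a planar diode of gap d at voltage V, which the model above corrects geometrically, reads:

```latex
J_{\mathrm{CL}} \;=\; \frac{4\,\varepsilon_0}{9}\,
  \sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}}
```

In the model sketched above, this planar-diode expression is evaluated with the Laplace field at the cathode and an equivalent planar-diode gap distance in place of d.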
Human factors in surgery: from Three Mile Island to the operating room.
D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco
2009-01-01
Human factors is a field that encompasses the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of a lack of skill or ability, and as the result of thoughtless actions. Moreover, the operating theatre has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons will have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.
Quantum Hall physics: Hierarchies and conformal field theory techniques
NASA Astrophysics Data System (ADS)
Hansson, T. H.; Hermanns, M.; Simon, S. H.; Viefers, S. F.
2017-04-01
The fractional quantum Hall effect, being one of the most studied phenomena in condensed matter physics during the past 30 years, has generated many ground-breaking new ideas and concepts. Very early on it was realized that the zoo of emerging states of matter would need to be understood in a systematic manner. The first attempts to do this, by Haldane and Halperin, set an agenda for further work which has continued to this day. Since that time the idea of hierarchies of quasiparticles condensing to form new states has been a pillar of our understanding of fractional quantum Hall physics. In the 30 years that have passed since then, a number of new directions of thought have advanced our understanding of fractional quantum Hall states and have extended it in new and unexpected ways. Among these directions is the extensive use of topological quantum field theories and conformal field theories, the application of the ideas of composite bosons and fermions, and the study of non-Abelian quantum Hall liquids. This article aims to present a comprehensive overview of this field, including the most recent developments.
Using weighted power mean for equivalent square estimation.
Zhou, Sumin; Wu, Qiuwen; Li, Xiaobo; Ma, Rongtao; Zheng, Dandan; Wang, Shuo; Zhang, Mutian; Li, Sicong; Lei, Yu; Fan, Qiyong; Hyun, Megan; Diener, Tyler; Enke, Charles
2017-11-01
Equivalent Square (ES) enables the calculation of many radiation quantities for rectangular treatment fields, based only on measurements from square fields. While it is widely applied in radiotherapy, its accuracy, especially for extremely elongated fields, still leaves room for improvement. In this study, we introduce a novel explicit ES formula based on the Weighted Power Mean (WPM) function and compare its performance with the Sterling formula and Vadash/Bjärngard's formula. The proposed WPM formula is ES_WPM(a,b) = [w·a^α + (1−w)·b^α]^(1/α) for a rectangular photon field with sides a and b. The formula's performance was evaluated by three methods: the standard deviation of the model-fitting residual error, the maximum relative model prediction error, and the model's Akaike Information Criterion (AIC). Testing datasets included the ES table from the British Journal of Radiology (BJR), photon output factors (S_cp) from the Varian TrueBeam Representative Beam Data (Med Phys. 2012;39:6981-7018), and published S_cp data for the Varian TrueBeam Edge (J Appl Clin Med Phys. 2015;16:125-148). For the BJR dataset, the best-fit parameter value α = -1.25 achieved a 20% reduction in the standard deviation of the ES estimation residual error compared with the two established formulae. For the two Varian datasets, employing the WPM formula reduced the maximum relative error from 3.5% (Sterling) or 2% (Vadash/Bjärngard) to 0.7% for open field sizes ranging from 3 cm to 40 cm, and the reduction was even more prominent for 1 cm field sizes on Edge (J Appl Clin Med Phys. 2015;16:125-148). The AIC value of the WPM formula was consistently lower than its counterparts from the traditional formulae on photon output factors, most prominently for very elongated small fields. The WPM formula outperformed the traditional formulae on the three testing datasets. With the increasing utilization of very elongated, small rectangular fields in modern radiotherapy, improved photon output factor estimation is expected by adopting the WPM formula in treatment planning and secondary MU checks. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
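A direct implementation of the WPM formula is straightforward; note that with symmetric weighting (w = 0.5) and α = −1 it reduces to the harmonic mean 2ab/(a+b). The sketch below uses the paper's best-fit α = −1.25 for the BJR dataset; the choice w = 0.5 is our assumption for illustration, not a parameter stated in the abstract.

```python
def es_wpm(a: float, b: float, alpha: float = -1.25, w: float = 0.5) -> float:
    """Weighted-power-mean equivalent square for an a x b rectangular field.

    alpha = -1.25 is the best-fit exponent reported for the BJR dataset;
    w = 0.5 (symmetric weighting) is an assumption for illustration.
    With w = 0.5 and alpha = -1 this reduces to the harmonic mean 2ab/(a+b).
    """
    return (w * a**alpha + (1.0 - w) * b**alpha) ** (1.0 / alpha)

# Example: a very elongated 3 cm x 40 cm field.
print(f"ES = {es_wpm(3.0, 40.0):.2f} cm")
```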
Defining health information technology-related errors: new developments since to err is human.
Sittig, Dean F; Singh, Hardeep
2011-07-25
Despite the promise of health information technology (HIT), recent literature has revealed possible safety hazards associated with its use. The Office of the National Coordinator for HIT recently sponsored an Institute of Medicine committee to synthesize evidence and experience from the field on how HIT affects patient safety. To lay the groundwork for defining, measuring, and analyzing HIT-related safety hazards, we propose that an HIT-related error occurs anytime HIT is unavailable for use, malfunctions during use, is used incorrectly, or interacts with another system component incorrectly, resulting in data being lost or incorrectly entered, displayed, or transmitted. These errors, or the decisions that result from them, significantly increase the risk of adverse events and patient harm. We describe how a sociotechnical approach can be used to understand the complex origins of HIT errors, which may have roots in rapidly evolving technological, professional, organizational, and policy initiatives.
Mutation Testing for Effective Verification of Digital Components of Physical Systems
NASA Astrophysics Data System (ADS)
Kushik, N. G.; Evtushenko, N. V.; Torgaev, S. N.
2015-12-01
Digital components of modern physical systems are often designed applying circuitry solutions based on field-programmable gate array (FPGA) technology. Such (embedded) digital components should be carefully tested. In this paper, an approach for the verification of digital physical system components based on mutation testing is proposed. The reference description of the behavior of a digital component in a hardware description language (HDL) is mutated by introducing into it the most probable errors and, unlike mutants in high-level programming languages, the corresponding test case is effectively derived based on a comparison of special scalable representations of the specification and the constructed mutant using various logic synthesis and verification systems.
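Mutation testing itself is language-agnostic; the sketch below illustrates the idea in Python rather than HDL: seed a likely error (here, an AND gate swapped for an OR in a combinational function), then check whether an existing test vector set "kills" the mutant by distinguishing it from the reference. The function names and the mutation operator are illustrative only, not the paper's method.

```python
def ref_circuit(a: bool, b: bool, c: bool) -> bool:
    """Reference combinational behavior: (a AND b) OR c."""
    return (a and b) or c

def mutant_circuit(a: bool, b: bool, c: bool) -> bool:
    """Mutant: the AND gate replaced by OR -- a plausible seeded error."""
    return (a or b) or c

# A test set kills the mutant if some vector distinguishes it from the
# reference; a surviving mutant signals an inadequate test set.
test_vectors = [(a, b, c) for a in (False, True)
                          for b in (False, True)
                          for c in (False, True)]
killed = any(ref_circuit(*v) != mutant_circuit(*v) for v in test_vectors)
print("mutant killed" if killed else "mutant survived -- tests inadequate")
```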
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majer, E.L.; Brockman, F.J.
1998-06-01
'This research is an integrated physical (geophysical and hydrologic) and microbial study using innovative geophysical imaging and microbial characterization methods to identify key scales of physical heterogeneities that affect the biodynamics of natural subsurface environments. Data from controlled laboratory and in-situ experiments at the INEEL Test Area North (TAN) site are being used to determine the dominant physical characteristics (lithologic, structural, and hydrologic) that can be imaged in-situ and correlated with microbial properties. The overall goal of this research is to contribute to the understanding of the interrelationships between transport properties and spatially varying physical, chemical, and microbiological heterogeneity. The outcome will be an improved understanding of the relationship between physical and microbial heterogeneity, thus facilitating the design of bioremediation strategies in similar environments. This report summarizes work as of May 1998, the second year of the project. This work is an extension of basic research on natural heterogeneity first initiated within the DOE/OHER Subsurface Science Program (SSP) and is intended to be one of the building blocks of an integrated and collaborative approach with an INEEL/PNNL effort aimed at understanding the effect of physical heterogeneity on transport properties and biodynamics in natural systems. The work is closely integrated with other EMSP projects at INEEL (Rick Colwell et al.) and PNNL (Fred Brockman and Jim Fredrickson).'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatia, Harsh
This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields—an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single “correct” reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, thus creating serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of “correctness” of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residing numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis along with the uncertainty visualization of unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research by exploring more-general reference frames and more-sophisticated domain discretizations.
USDA-ARS?s Scientific Manuscript database
Apparent soil electrical conductivity (ECa) is an efficient technique for understanding within-field variability of physical and chemical soil characteristics. Commercial devices are readily available for collecting ECa on whole fields and used broadly for crop management in precision agriculture; h...
Vortices at Microwave Frequencies
NASA Astrophysics Data System (ADS)
Silva, Enrico; Pompeo, Nicola; Dobrovolskiy, Oleksandr V.
2017-11-01
The behavior of vortices at microwave frequencies is an extremely useful source of information on the microscopic parameters that enter the description of vortex dynamics. This feature has acquired particular relevance since the discovery of unusual superconductors, such as the cuprates. Microwave investigation has since extended its field of application to many families of superconductors, including artificially nanostructured materials. It is then important to understand the basics of the physics of vortices moving at high frequency, as well as what information the experiments can yield (and what they cannot). The aim of this brief review is to introduce readers to some basic aspects of the physics of vortices under a microwave electromagnetic field, and to guide them towards an understanding of the experiments, also by means of the illustration of some relevant results.
Sarter, Nadine
2008-06-01
The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
NASA Astrophysics Data System (ADS)
Morace, A.; Santos, J. J.; Bailly-Grandvaux, M.; Ehret, M.; Alpinaniz, J.; Brabetz, C.; Schaumann, G.; Volpe, L.
2017-02-01
Understanding the dynamics of rapidly varying electromagnetic fields in intense short-pulse laser plasma interactions is of key importance for understanding the mechanisms at the basis of a wide variety of physical processes, from high energy density physics and fusion science to the development of ultrafast laser plasma devices to control laser-generated particle beams. Target normal sheath accelerated (TNSA) proton radiography represents an ideal tool for diagnosing ultrafast electromagnetic phenomena, providing 2D spatially and temporally resolved radiographs with temporal resolution varying from 2-3 ps to a few tens of ps. In this work we introduce the proton radiography technique and its application to diagnosing the spatial and temporal evolution of electromagnetic fields in laser-driven capacitor-coil targets.
Measurements of the toroidal torque balance of error field penetration locked modes
Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...
2015-01-05
Here, detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including its interaction with the intrinsic error field. The error field in these low density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields and a viscous torque in the electron diamagnetic drift direction, which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.
Machine learning for many-body physics: The case of the Anderson impurity model
Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; ...
2014-10-31
We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
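A minimal sketch of the parametrization idea, with a toy stand-in for the impurity solver and kernel ridge regression in place of the authors' learning machinery: the regression target is the short vector of Legendre coefficients rather than the full function of imaginary time.

```python
# Learn a map from a model parameter U to Legendre coefficients of G(tau).
import numpy as np
from numpy.polynomial import legendre
from sklearn.kernel_ridge import KernelRidge

beta, n_tau, n_coef = 10.0, 200, 8
tau = np.linspace(0.0, beta, n_tau)
x = 2.0 * tau / beta - 1.0                  # map [0, beta] onto [-1, 1]

def toy_green(U):
    # Hypothetical stand-in for an impurity solver returning G(tau; U).
    return -np.exp(-tau / (1.0 + U)) / (1.0 + np.exp(-beta / (1.0 + U)))

U_train = np.linspace(0.5, 4.0, 40)
# The Legendre fit compresses each curve into a handful of coefficients,
# which is the compact learning target advocated in the abstract.
Y = np.array([legendre.legfit(x, toy_green(U), n_coef - 1) for U in U_train])
model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1e-6).fit(U_train[:, None], Y)

U_test = 2.2
G_pred = legendre.legval(x, model.predict([[U_test]])[0])
print(np.max(np.abs(G_pred - toy_green(U_test))))   # reconstruction error
```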
Assessing primary care data quality.
Lim, Yvonne Mei Fong; Yusof, Maryati; Sivasampu, Sheamini
2018-04-16
Purpose: The purpose of this paper is to assess National Medical Care Survey data quality. Design/methodology/approach: Data completeness and representativeness were computed for all observations, while other data quality measures were assessed using a 10 per cent sample from the National Medical Care Survey database; i.e., 12,569 primary care records from 189 public and private practices were included in the analysis. Findings: Data field completion ranged from 69 to 100 per cent. Error rates for data transfer from paper to web-based application varied between 0.5 and 6.1 per cent. Error rates arising from diagnosis and clinical process coding were higher than medication coding. Data fields that involved free text entry were more prone to errors than those involving selection from menus. The authors found that completeness, accuracy, coding reliability and representativeness were generally good, while data timeliness needs to be improved. Research limitations/implications: Only data entered into a web-based application were examined. Data omissions and errors in the original questionnaires were not covered. Practical implications: Results from this study provided informative and practicable approaches to improve primary health care data completeness and accuracy, especially in developing nations where resources are limited. Originality/value: Primary care data quality studies in developing nations are limited. Understanding errors and missing data enables researchers and health service administrators to prevent quality-related problems in primary care data.
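As a rough illustration of the completeness and error-rate measures described above (field names and records are hypothetical, not drawn from the survey):

```python
# Completeness and transfer-error-rate measures on a toy record set.
import pandas as pd

records = pd.DataFrame({
    "diagnosis_code": ["J45", None, "E11", "J45"],
    "medication_code": ["R03", "A10", None, "R03"],
    "free_text_reason": ["wheeze", "", "review", None],
})

# Per-field completion rate: share of non-missing, non-empty entries.
completion = records.replace("", pd.NA).notna().mean() * 100
print(completion.round(1))

# Transfer error rate against a gold-standard re-entry of the same records.
gold = records.copy()
gold.loc[0, "diagnosis_code"] = "J44"       # simulated transcription error
diff = records.fillna("missing") != gold.fillna("missing")
print(f"{100 * diff.values.mean():.1f}% of cells differ from the gold standard")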
Maltreated children's memory: accuracy, suggestibility, and psychopathology.
Eisen, Mitchell L; Goodman, Gail S; Qin, Jianjian; Davis, Suzanne; Crayton, John
2007-11-01
Memory, suggestibility, stress arousal, and trauma-related psychopathology were examined in 328 3- to 16-year-olds involved in forensic investigations of abuse and neglect. Children's memory and suggestibility were assessed for a medical examination and venipuncture. Being older and scoring higher in cognitive functioning were related to fewer inaccuracies. In addition, cortisol level and trauma symptoms in children who reported more dissociative tendencies were associated with increased memory error, whereas cortisol level and trauma symptoms were not associated with increased error for children who reported fewer dissociative tendencies. Sexual and/or physical abuse predicted greater accuracy. The study contributes important new information to scientific understanding of maltreatment, psychopathology, and eyewitness memory in children. (c) 2007 APA.
Description of a Quality Assurance Process for a Surface Wind Database in Eastern Canada
NASA Astrophysics Data System (ADS)
Lucio-Eceiza, E. E.; Gonzalez-Rouco, F. J.; Navarro, J.; Beltrami, H.; García-Bustamante, E.; Hidalgo; Jiménez, P. A.
2011-12-01
Meteorological data of good quality are important for understanding both global and regional climates. The data are subject to different types of measurement errors that can be roughly classified into three groups: random, systematic and rough errors. Random errors are unavoidable and inherent to the very nature of the measurements as instrumental responses to real physical phenomena, since the measurements are an approximate representation of reality. Systematic errors are produced by instrumental scale shifts and drifts or by more or less persistent factors that are not taken into account (changes in the sensor, recalibrations or location displacements). Rough errors are associated with sensor malfunction or mismanagement arising during data processing, transmission, reception or storage. It is essential to develop procedures that allow one to identify, and if possible correct, the errors in observed series, in order to improve the quality of the data sets and reach solid conclusions in the studies. This work summarizes the evaluation made to date of the quality assurance process for wind speed and direction data acquired over a wide area in Eastern Canada (including the provinces of Quebec, Prince Edward Island, New Brunswick, Nova Scotia, and Newfoundland and Labrador), a region of the adjacent maritime areas and a region of the north-eastern U.S. (Maine, New Hampshire, Massachusetts, New York and Vermont). The data set consists of 527 stations, spans the period 1940-2009 and has been compiled from three different sources: a set of 344 land sites obtained from Environment Canada (1940-2009), a subset of 40 buoys distributed over the East Coast and the Canadian Great Lakes (1988-2008) provided by Fisheries and Oceans, and a subset of 143 land sites covering both eastern Canada and the north-eastern U.S. provided by the National Center for Atmospheric Research (1975-2007). The data have been compiled, and a set of quality assurance techniques has subsequently been applied to detect and then treat errors within the measurements. These techniques involve, among others, detection of manipulation errors, limit checks to avoid unrealistic records, and temporal consistency checks to suppress abnormally low/high variations. There are other issues specifically related to the heterogeneous nature of this data set, such as unit conversion and changes in recording times or direction resolution over time. Ensuring the quality of wind observations is essential for the later analysis, which will focus on exploring the wind field behaviour at the regional scale, with a special interest in the area of Nova Scotia. The wind behaviour will be examined in relation to the specific features of the regional topography and to the influence of changes in the large-scale atmospheric circulation. Subsequent steps will involve a simulation of the wind field with high spatial resolution using a mesoscale model (such as WRF) and its validation with the observational data set presented herein.
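A minimal sketch, with made-up thresholds, of two of the checks named above: a limit check against unrealistic records and a temporal-consistency check against abnormal variations.

```python
# Flag suspect wind-speed records by physical limits and step size.
import numpy as np

def qa_flags(speed, max_speed=75.0, max_step=20.0):
    """Flag wind-speed records (m/s); True marks a suspect value."""
    speed = np.asarray(speed, dtype=float)
    limit_flag = (speed < 0.0) | (speed > max_speed)   # unrealistic records
    step = np.abs(np.diff(speed, prepend=speed[0]))
    step_flag = step > max_step                        # abnormal variation
    return limit_flag | step_flag

obs = [3.2, 4.1, 3.9, 55.0, 4.4, -1.0, 5.0]
# Flags the spike, the step back down from it, and the negative record.
print(qa_flags(obs))
```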
Domestication has not affected the understanding of means-end connections in dogs
Range, Friederike; Möslinger, Helene; Virányi, Zs
2015-01-01
Recent studies have revealed that dogs often perform well in cognitive tasks in the social domain, but rather poorly in the physical domain. This dichotomy has led to the hypothesis that the domestication process might have enhanced the social cognitive skills of dogs (Hare et al. in Science 298:1634–1636, 2002; Miklósi et al. in Curr Biol 13:763–766, 2003) but at the same time had a detrimental effect on their physical cognition (Frank in Z Tierpsychol 5:389–399, 1980). Despite the recent interest in dog cognition and especially the effects of domestication, the latter hypothesis has hardly been tested and we lack detailed knowledge of the physical understanding of wolves in comparison with dogs. Here, we set out to examine whether adult wolves and dogs rely on means-end connections using the string-pulling task, to test the prediction that wolves would perform better than dogs in such a task of physical cognition. We found that at the group level, dogs were more prone to commit the proximity error, while the wolves showed a stronger side bias. Neither wolves nor dogs showed an instantaneous understanding of means-end connection, but made different mistakes. Thus, the performance of the wolves and dogs in this string-pulling task did not confirm that domestication has affected the physical cognition of dogs. PMID:22460629
Pittman, Jeremy Joshua; Arnall, Daryl Brian; Interrante, Sindy M.; Moffet, Corey A.; Butler, Twain J.
2015-01-01
Non-destructive biomass estimation of vegetation has been performed via remote sensing as well as physical measurements. An effective method for estimating biomass must have accuracy comparable to the accepted standard of destructive removal. Estimation or measurement of height is commonly employed to create a relationship between height and mass. This study examined several types of ground-based mobile sensing strategies for forage biomass estimation. Forage production experiments consisting of alfalfa (Medicago sativa L.), bermudagrass [Cynodon dactylon (L.) Pers.], and wheat (Triticum aestivum L.) were employed to compare sensor biomass estimates (laser, ultrasonic, and spectral) with physical measurements (plate meter and meter stick) and the traditional harvest method (clipping). Predictive models were constructed via partial least squares regression, and modeled estimates were compared to the physically measured biomass. Least-significant-difference-separated mean estimates were examined to evaluate differences between the physical measurements and sensor estimates of canopy height and biomass. Differences between methods were minimal (average percent error of 11.2% between predicted values and machine- and quadrat-harvested biomass values of 1.64 and 4.91 t·ha−1, respectively), except at the lowest measured biomass (average percent error of 89% for machine- and quadrat-harvested biomass < 0.79 t·ha−1) and the greatest measured biomass (average percent error of 18% for machine- and quadrat-harvested biomass > 6.4 t·ha−1). These data suggest that using mobile sensor-based biomass estimation models could be an effective alternative to the traditional clipping method for rapid, accurate in-field biomass estimation. PMID:25635415
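A hedged sketch of the modeling step described above, using scikit-learn's partial least squares regression on synthetic sensor columns; the variables and coefficients are invented, whereas the study used field measurements.

```python
# PLS regression from (synthetic) sensor readings to clipped biomass.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 120
X = np.column_stack([
    rng.uniform(5, 60, n),       # ultrasonic canopy height, cm
    rng.uniform(5, 60, n),       # laser canopy height, cm
    rng.uniform(0.2, 0.9, n),    # NDVI-like spectral index
])
# Synthetic "clipped" biomass loosely driven by height and spectral index.
y = 0.05 * X[:, 0] + 0.04 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
# As in the study, percent error is dominated by the low-biomass samples.
print(f"mean percent error: {100 * np.mean(np.abs(pred - y_te) / np.abs(y_te)):.1f}%")
```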
Potential for wind extraction from 4D-Var assimilation of aerosols and moisture
NASA Astrophysics Data System (ADS)
Zaplotnik, Žiga; Žagar, Nedjeljka
2017-04-01
We discuss the potential of four-dimensional variational data assimilation (4D-Var) to retrieve the unobserved wind field from observations of atmospheric tracers and the mass field, through internal model dynamics and the multivariate relationships in the background-error term for 4D-Var. The presence of non-linear moist dynamics makes the wind retrieval from tracers very difficult. On the other hand, it has been shown that moisture observations strongly influence both the tropical and mid-latitude wind fields in 4D-Var. We present an intermediate-complexity model that describes nonlinear interactions between the wind, temperature, aerosols and moisture, including their sinks and sources, in the framework of the so-called first baroclinic mode atmosphere envisaged by A. Gill. The aerosol physical processes included in the model are non-linear advection, diffusion, and sources and sinks in the form of dry and wet deposition. Precipitation is parametrized according to the Betts-Miller scheme. The control vector for 4D-Var includes aerosols, moisture and the three dynamical variables. Aerosols and moisture are analysed univariately, whereas the wind and mass fields are analysed in a multivariate fashion taking into account quasi-geostrophic and unbalanced dynamics. OSSE-type studies are performed for the tropical region to assess the ability of 4D-Var to extract wind-field information from the time series of tracer observations as a function of the flow nonlinearity, the observation density and the length of the assimilation window (12 hours and 24 hours), in dry and moist environments. Results show that the 4D-Var assimilation of aerosol and temperature data is beneficial for the wind analysis, with analysis errors strongly dependent on the moist processes and on reliable background-error covariances.
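A toy illustration of the wind-from-tracer idea, assuming a one-dimensional periodic tracer field and a single scalar advection speed in place of the full model and control vector described above.

```python
# Retrieve an advection speed by fitting tracer evolution to observations.
import numpy as np
from scipy.optimize import minimize_scalar

nx, dt, nsteps = 64, 0.1, 30
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
q0 = np.exp(-4 * (x - np.pi) ** 2)        # initial tracer (aerosol) field

def advect(q, c):
    # Semi-Lagrangian step on a periodic grid: shift by c*dt via interpolation.
    return np.interp(x - c * dt, x, q, period=2 * np.pi)

def forecast(c):
    q = q0.copy()
    for _ in range(nsteps):
        q = advect(q, c)
    return q

c_true = 0.8
obs = forecast(c_true) + np.random.default_rng(1).normal(0, 0.01, nx)

# Observation term of the 4D-Var cost function; the background term is
# omitted to keep the single-parameter retrieval readable.
cost = lambda c: np.sum((forecast(c) - obs) ** 2)
c_hat = minimize_scalar(cost, bounds=(0.0, 2.0), method="bounded").x
print(f"retrieved wind {c_hat:.3f} vs true {c_true}")
```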
Lionakis, M.S.; Hajishengallis, G.
2015-01-01
In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229
The Physical Elements of Onset of the Magnetospheric Substorm
NASA Technical Reports Server (NTRS)
Erickson, Gary M.
1997-01-01
During this reporting period effort continued in the areas: (1) understanding the mechanisms responsible for substorm onset, and (2) application of a fundamental description of field-aligned currents and parallel electric fields to the plasma-sheet boundary layer.
NASA Technical Reports Server (NTRS)
Westphal, Douglas L.; Russell, Philip (Technical Monitor)
1994-01-01
A set of 2,600 6-second National Weather Service soundings from NASA's FIRE-II Cirrus field experiment are used to illustrate previously known errors and new potential errors in the VIZ and SDD brand relative humidity (RH) sensors and the MicroART processing software. The entire spectrum of RH is potentially affected by at least one of these errors. (These errors occur before the data are converted to dew point temperature.) Corrections to the errors are discussed. Examples are given of the effect that these errors and biases may have on numerical weather prediction and radiative transfer. The figure shows the OLR calculated for the corrected and uncorrected soundings using an 18-band radiative transfer code. The OLR differences are sufficiently large to warrant consideration when validating line-by-line radiation calculations that use radiosonde data to specify the atmospheric state, or when validating satellite retrievals. In addition, a comparison of observations of RH during FIRE-II derived from GOES satellite, Raman lidar, MAPS analyses, NCAR CLASS sondes, and the NWS sondes reveals disagreement in the RH distribution and underlines our lack of understanding of the climatology of water vapor.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects. The results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, for quantifying model uncertainties from sources such as physical processes, parameters and structural errors, and for improving model physics parameterizations.
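A minimal one-way sketch of the variance-decomposition idea (the study uses a multi-way ANOVA across four scheme groups; the scheme names and diagnostic values here are placeholders):

```python
# Fraction of ensemble variance in a diagnostic explained by each scheme choice.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 120
ens = pd.DataFrame({
    "pbl": rng.choice(["YSU", "MYJ", "ACM2"], n),
    "microphysics": rng.choice(["Morrison", "Thompson"], n),
})
# Placeholder diagnostic: the PBL scheme shifts the flux; microphysics does not.
ens["latent_heat_flux"] = (
    ens["pbl"].map({"YSU": 110.0, "MYJ": 95.0, "ACM2": 100.0})
    + rng.normal(0, 8, n)
)

total = ens["latent_heat_flux"].var(ddof=0)
grand = ens["latent_heat_flux"].mean()
for factor in ["pbl", "microphysics"]:
    g = ens.groupby(factor)["latent_heat_flux"]
    between = ((g.size() / n) * (g.mean() - grand) ** 2).sum()
    print(f"{factor}: {100 * between / total:.0f}% of ensemble variance")
```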
Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.
Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing
2016-01-01
The accuracy of optical tracking systems is of central importance in surgical navigation. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To enhance the accuracy of these systems further and to reduce the effects of synchronization and visual-field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronization errors, and an error distribution map for the field of view. Synchronization control maximizes the parallel processing capability of the FPGA, and synchronization error measurement can effectively detect the errors caused by asynchronous operation in an optical tracking system. The distribution of positioning errors across the field of view can be detected through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments involving the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and differing fields of view.
NASA Astrophysics Data System (ADS)
Turner, Andrew; Bhat, Gs; Evans, Jonathan; Marsham, John; Martin, Gill; Parker, Douglas; Taylor, Chris; Bhattacharya, Bimal; Madan, Ranju; Mitra, Ashis; Mrudula, Gm; Muddu, Sekhar; Pattnaik, Sandeep; Rajagopal, En; Tripathi, Sachida
2015-04-01
The monsoon supplies the majority of water in South Asia, making understanding and predicting its rainfall vital for the growing population and economy. However, modelling and forecasting the monsoon from days to the season ahead is limited by large model errors that develop quickly, with significant inter-model differences pointing to errors in physical parametrizations such as convection, the boundary layer and the land surface. These errors persist into climate projections, and many of them remain even when resolution is increased. At the same time, a lack of detailed observations is preventing a more thorough understanding of the monsoon circulation and its interaction with the land surface: a process governed by the boundary layer and convective cloud dynamics. The INCOMPASS project will support and develop modelling capability in Indo-UK monsoon research, including test development of a new Met Office Unified Model 100 m-resolution domain over India. The first UK detachment of the FAAM research aircraft to India, in combination with an intensive ground-based observation campaign, will gather new observations of the surface, boundary layer structure and atmospheric profiles, together with detailed information on the timing of monsoon rainfall. Observations will be focused on transects in the northern plains of India (covering a range of surface types from irrigated to rain-fed agriculture, and wet to dry climatic zones) and across the Western Ghats and rain shadow in southern India (including transitions from land to ocean and across orography). A pilot observational campaign is planned for summer 2015, with the main field campaign to take place during spring/summer 2016. This project will advance our ability to forecast the monsoon, through a programme of measurements and modelling that aims to capture the key surface-atmosphere feedback processes in models. The observational analysis will allow a unique and unprecedented characterization of monsoon processes that will feed directly into model development at the UK Met Office and the Indian NCMRWF, through model evaluation at a range of scales, leading to model improvement by working directly with parametrization developers. The project will institute a new long-term series of measurements of land surface fluxes, a particularly unconstrained observation for India, through eddy covariance flux towers. Combined with detailed land surface modelling using the Joint UK Land Environment Simulator (JULES) model, this will allow testing of land surface initialization in monsoon forecasts and improved land-atmosphere coupling.
Change in peripheral refraction and curvature of field of the human eye with accommodation
NASA Astrophysics Data System (ADS)
Ho, Arthur; Zimmermann, Frederik; Whatham, Andrew; Martinez, Aldo; Delgado, Stephanie; Lazon de la Jara, Percy; Sankaridurg, Padmaja
2009-02-01
Recent research showed that the peripheral refractive state is a sufficient stimulus for myopia progression. This finding led to the suggestion that devices that control peripheral refraction may be efficacious in controlling myopia progression. This study aims to understand whether the optical effect of such devices may be affected by near focus. In particular, we seek to understand the influence of accommodation on peripheral refraction and the curvature of field of the eye. Refraction was measured in twenty young subjects using an autorefractor at 0° (i.e. along the visual axis) and at 20°, 30° and 40° field angles both nasal and temporal to the visual axis. All measurements were conducted at 2.5 m, 40 cm and 30 cm viewing distances. Refractive errors were corrected using a soft contact lens during all measurements. As field angle increased, refraction became less hyperopic. Peripheral refraction also became less hyperopic at nearer viewing distances (i.e. with increasing accommodation). Astigmatism (J180) increased with field angle as well as with accommodation. Adopting a third-order aberration theory approach, the position of the Petzval surface relative to the retinal surface was estimated by considering the relative peripheral refractive error (RPRE) and J180 terms of peripheral refraction. Results for the estimated dioptric position of the Petzval surface relative to the retina showed substantial asymmetry. While the temporal field tended to agree with theoretical predictions, the nasal response departed dramatically from the model-eye predictions. With increasing accommodation, peripheral refraction becomes less hyperopic while the Petzval surface shows asymmetry in its change in position. A change in the optical components (i.e. cornea and/or lens, as opposed to retinal shape or position) is implicated as at least one of the contributors to this shift in peripheral refraction during accommodation.
More on Systematic Error in a Boyle's Law Experiment
ERIC Educational Resources Information Center
McCall, Richard P.
2012-01-01
A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.
NASA Astrophysics Data System (ADS)
Lavigne, T.; Liu, C.
2017-12-01
Previous studies comparing the measured electric field to the physical properties of global electrified clouds have been conducted almost exclusively in the Southern Hemisphere. The One-Year Electric Field Study-North Slope of Alaska (OYES-NSA) aims to establish a long-running collection of this valuable electric field data in the Northern Hemisphere. Presented here are six months of preliminary data and results from the OYES-NSA Atmospheric Radiation Measurement (ARM) field campaign. The local electric field measured in Barrow, Alaska using two CS110 reciprocating-shutter field meters has been compared to simultaneous measurements from the ARM Ka-band zenith radar, to better understand the influence and contribution of different types of clouds on the local electric field. The fair-weather electric field measured in Barrow has also been analyzed and compared to the climatology of the electric field at Vostok Station, Antarctica. The combination of the electric field dataset in the Northern Hemisphere with the local Ka-band cloud radar, the global Precipitation Feature (PF) database, and quasi-global lightning activity (55°N-55°S) allows for advances in the physical understanding of the local electric field, as well as the Global Electric Circuit (GEC).
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for the two-dimensional electric field distribution is developed to deal with the nonlinear dependence of the electric field distribution on its boundary condition, and with the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and by the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method in which the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and by the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
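A schematic version of the linear step described above, assuming toy matrices: a few conjugate gradient iterations on the normal equations, with truncation of the iteration count playing the regularizing role noted in the abstract.

```python
# Truncated CG on the normal equations of an ill-posed sensitivity system.
import numpy as np

def cg_normal(S, dz, n_iter=5):
    """Approximately solve S x = dz via CG on S^T S x = S^T dz."""
    A, b = S.T @ S, S.T @ dz
    x = np.zeros(A.shape[1])
    r = b - A @ x
    p = r.copy()
    for _ in range(n_iter):       # truncation limits noise amplification
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

rng = np.random.default_rng(0)
S = rng.normal(size=(30, 100))                # stand-in sensitivity matrix
x_true = np.zeros(100); x_true[40:45] = 1.0   # conductivity perturbation
dz = S @ x_true + rng.normal(0, 0.01, 30)     # noisy mutual-impedance changes
print(np.round(cg_normal(S, dz)[38:47], 2))
```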
NASA Astrophysics Data System (ADS)
Ries, Paul A.
2012-05-01
The Green Bank Telescope is a 100 m, fully steerable, single-dish radio telescope located in Green Bank, West Virginia, capable of making observations from meter wavelengths to 3 mm. However, observations at wavelengths shorter than 2 cm pose significant observational challenges due to pointing and surface errors. The first part of this thesis details efforts to combat wind-induced pointing errors, which reduce by half the amount of time available for high-frequency work on the telescope. The primary tool used for understanding these errors was an optical quadrant detector that monitored the motion of the telescope's feed arm. In this work, a calibration was developed that tied quadrant detector readings directly to telescope pointing error. These readings can be used for single-beam observations in order to determine if the telescope was blown off-source at some point due to wind. For observations with the 3 mm MUSTANG bolometer array, pointing errors due to wind can mostly be removed (> ⅔) during data reduction. Iapetus is a moon known for its stark albedo dichotomy, with the leading hemisphere only a tenth as bright as the trailing. In order to investigate this dichotomy, Iapetus was observed repeatedly with the GBT at wavelengths between 3 and 11 mm, with the original intention of using the data to determine a thermal light curve. Instead, the data showed a remarkable wavelength-dependent deviation from a black-body curve, with an emissivity as low as 0.3 at 9 mm. Numerous techniques were used to demonstrate that this low emissivity is a physical phenomenon rather than an observational one, including some using the quadrant detector to make sure the low emissivities are not due to being blown off source. This emissivity is among the lowest ever detected in the solar system, but it can be achieved using physically realistic ice models that are also used to model microwave emission from snowpacks and glaciers on Earth. These models indicate that the trailing hemisphere contains a scattering layer of depth 100 cm and grain size of 1-2 mm. The leading hemisphere is shown to exhibit a thermal depth effect.
Hessian matrix approach for determining error field sensitivity to coil deviations
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi
2018-05-01
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
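A conceptual sketch of the sensitivity analysis, assuming a toy quadratic stand-in for the cost function rather than the FOCUS field-error integral: the Hessian of the cost with respect to coil parameters is formed by finite differences, and its eigendecomposition ranks deviation directions by how strongly they degrade the field.

```python
# Rank coil-deviation directions by eigenvalues of a cost-function Hessian.
import numpy as np

def field_error_cost(p):
    # Toy stand-in for the surface integral of normalized normal field
    # errors, as a smooth function of a few coil-geometry parameters.
    return 3.0 * p[0] ** 2 + 0.1 * p[1] ** 2 + 0.5 * p[0] * p[2] + p[2] ** 2

def hessian_fd(f, p0, h=1e-4):
    """Central finite-difference Hessian of f at p0."""
    n = len(p0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p0.copy(); pp[i] += h; pp[j] += h
            pm = p0.copy(); pm[i] += h; pm[j] -= h
            mp = p0.copy(); mp[i] -= h; mp[j] += h
            mm = p0.copy(); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h * h)
    return H

H = hessian_fd(field_error_cost, np.zeros(3))
w, V = np.linalg.eigh(H)
print(w)          # the largest eigenvalue marks the most damaging deviation
print(V[:, -1])   # ...and its eigenvector is that coil-displacement mode
```

In the paper the Hessian comes from fast analytic calculations in FOCUS rather than finite differences; the eigenvalue ranking step is the same.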
Focusing cosmic telescopes: systematics of strong lens modeling
NASA Astrophysics Data System (ADS)
Johnson, Traci Lin; Sharon, Keren q.
2018-01-01
The use of strong gravitational lensing by galaxy clusters has become a popular method for studying the high redshift universe. While diverse in computational methods, lens modeling techniques have established means for determining statistical errors on cluster masses and magnifications. However, the systematic errors have yet to be quantified, arising from the number of constraints, the availability of spectroscopic redshifts, and various types of image configurations. I will be presenting my dissertation work on quantifying systematic errors in parametric strong lensing techniques. I have participated in the Hubble Frontier Fields lens model comparison project, using simulated clusters to compare the accuracy of various modeling techniques. I have extended this project to understanding how changing the quantity of constraints affects the mass and magnification. I will also present my recent work extending these studies to clusters in the Outer Rim Simulation. These clusters are typical of the clusters found in wide-field surveys in mass and lensing cross-section. They have fewer constraints than the HFF clusters and are thus more susceptible to systematic errors. With the wealth of strong lensing clusters discovered in surveys such as SDSS, SPT, DES, and, in the future, LSST, this work will be influential in guiding lens modeling efforts and follow-up spectroscopic campaigns.
Prediction of final error level in learning and repetitive control
NASA Astrophysics Data System (ADS)
Levoci, Peter A.
Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, for example to isolate fine-pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC), which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real-world plant and measurement noise, and quantization noise (from analog-to-digital and digital-to-analog converters), are acted on by these control methods as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first-order ILC, of higher-order ILC including current-cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time domain approach that involves large matrices. The time domain approach was previously developed for ILC and handles a certain class of ILC methods. Here, methods are created to include zero-phase filtering, which is very important in creating practical designs. Also, time domain methods are developed for higher-order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if zero error is in fact reached at the sample times. This is independent of the ILC law used, and is purely a property of the physical system. Methods are developed to address this issue.
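A toy numerical check of the claim that noise sets the final error level, assuming a simple lifted-system model and a first-order ILC law; the thesis develops frequency-domain predictions, and this is only the time-domain picture.

```python
# First-order ILC on a lifted plant: converges, but noise sets the floor.
import numpy as np

rng = np.random.default_rng(3)
n = 50                                   # samples per repetition
P = 0.1 * np.tril(np.ones((n, n)))       # toy lifted plant (step response)
y_des = np.sin(np.linspace(0, np.pi, n)) # desired trajectory

u = np.zeros(n)
L = 0.5 * np.linalg.inv(P)               # learning gain: partial plant inverse
for trial in range(60):
    e = y_des - (P @ u + rng.normal(0, 0.01, n))  # noisy measured error
    u = u + L @ e                        # first-order ILC update law
# The deterministic error contracts by 1/2 per trial, but the repeated
# reaction to non-repeating noise leaves a nonzero final error level.
print(f"final RMS error: {np.sqrt(np.mean(e ** 2)):.4f}")
```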
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
Functional Basis for Efficient Physical Layer Classical Control in Quantum Processors
NASA Astrophysics Data System (ADS)
Ball, Harrison; Nguyen, Trung; Leong, Philip H. W.; Biercuk, Michael J.
2016-12-01
The rapid progress seen in the development of quantum-coherent devices for information processing has motivated serious consideration of quantum computer architecture and organization. One topic which remains open for investigation and optimization relates to the design of the classical-quantum interface, where control operations on individual qubits are applied according to higher-level algorithms; accommodating competing demands on performance and scalability remains a major outstanding challenge. In this work, we present a resource-efficient, scalable framework for the implementation of embedded physical layer classical controllers for quantum-information systems. Design drivers and key functionalities are introduced, leading to the selection of Walsh functions as an effective functional basis for both programming and controller hardware implementation. This approach leverages the simplicity of real-time Walsh-function generation in classical digital hardware, and the fact that a wide variety of physical layer controls, such as dynamic error suppression, are known to fall within the Walsh family. We experimentally implement a real-time field-programmable-gate-array-based Walsh controller producing Walsh timing signals and Walsh-synthesized analog waveforms appropriate for critical tasks in error-resistant quantum control and noise characterization. These demonstrations represent the first step towards a unified framework for the realization of physical layer controls compatible with large-scale quantum-information processing.
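A small sketch of why Walsh functions suit digital hardware, assuming SciPy's Hadamard construction (natural ordering) as the generator: the functions take only the values +/-1, so synthesizing a piecewise-constant control waveform reduces to sign flips and additions.

```python
# Generate Walsh-Hadamard functions and synthesize a control waveform.
import numpy as np
from scipy.linalg import hadamard

N = 8
W = hadamard(N)               # rows are +/-1 Walsh-Hadamard functions
print(W[1])                   # e.g. [ 1 -1  1 -1  1 -1  1 -1]

# Walsh synthesis: a short coefficient vector times the matrix yields a
# piecewise-constant waveform, trivial for real-time digital logic.
coeffs = np.array([0.0, 1.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0])
waveform = coeffs @ W
print(waveform)
```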
Reflections on the nature of the concepts of field in physics
NASA Astrophysics Data System (ADS)
Pombo, C.
2012-12-01
This paper is a short introduction to the analysis of the concepts of field in physics, showing their different natures. It comprises a study on the development of observers based on observational realism, a physical epistemology under development on the basis of analytical psychology. This epistemology incorporates and justifies R. Carnap's proposition of separating the observational and theoretical domains of a theory, and gives a criterion for this separation. The bases of three theories in which concepts of field emerge are discussed. We discuss the different origins and meanings of these fields, from an epistemological point of view, in their respective theories. The aim of this paper is to form a basis of discussion to be applied in the analysis of other theories where concepts of field are present, to reach a better understanding of the contemporary programs of unification. We would like to clarify whether these programs are intended for the unification of fields as elements of physical reality, of fields as explanations for observations, the unification of their theories, or other possible cases.
Wright, Ann; Provost, Joseph; Roecklein-Canfield, Jennifer A; Bell, Ellis
2013-01-01
Over the past two years, through an NSF RCN UBE grant, the ASBMB has held regional workshops for faculty members from around the country. The workshops have focused on developing lists of Core Principles or Foundational Concepts in Biochemistry and Molecular Biology, a list of foundational skills, and foundational concepts from Physics, Chemistry, and Mathematics that all Biochemistry or Molecular Biology majors must understand to complete their major coursework. The allied fields working group created a survey to validate foundational concepts from Physics, Chemistry, and Mathematics identified from participant feedback at various workshops. One hundred twenty participants responded to the survey and 68% of the respondents answered yes to the question: "We have identified the following as the core concepts and underlying theories from Physics, Chemistry, and Mathematics that Biochemistry majors or Molecular Biology majors need to understand after they complete their major courses: 1) mechanical concepts from Physics, 2) energy and thermodynamic concepts from Physics, 3) critical concepts of structure from chemistry, 4) critical concepts of reactions from Chemistry, and 5) essential Mathematics. In your opinion, is the above list complete?" Respondents also delineated subcategories they felt should be included in these broad categories. From the results of the survey and this analysis, the allied fields working group constructed a consensus list of allied fields concepts, which will help inform Biochemistry and Molecular Biology educators when considering the ASBMB recommended curriculum for Biochemistry or Molecular Biology majors and in the development of appropriate assessment tools to gauge student understanding of how these concepts relate to biochemistry and molecular biology. © 2013 by The International Union of Biochemistry and Molecular Biology.
ERIC Educational Resources Information Center
Ogunleye, Ayodele O.
2009-01-01
In recent times, science education researchers have identified a number of instruments for evaluating conceptual understanding as well as students' attitudes and beliefs about physics; unfortunately, however, there are no broad-based evaluation instruments in the field of problem-solving in physics. This missing tool is an indication of the complexity…
Conceptual Developments of 20th Century Field Theories
NASA Astrophysics Data System (ADS)
Cao, Tian Yu
1998-06-01
This volume provides a broad synthesis of conceptual developments of twentieth century field theories, from the general theory of relativity to quantum field theory and gauge theory. The book traces the foundations and evolution of these theories within a historio-critical context. Theoretical physicists and students of theoretical physics will find this a valuable account of the foundational problems of their discipline that will help them understand the internal logic and dynamics of theoretical physics. It will also provide professional historians and philosophers of science, particularly philosophers of physics, with a conceptual basis for further historical, cultural and sociological analysis of the theories discussed. Finally, the scientifically qualified general reader will find in this book a deeper analysis of contemporary conceptions of the physical world than can be found in popular accounts of the subject.
Design Considerations for High Energy Electron-Positron Storage Rings
DOE R&D Accomplishments Database
Richter, B.
1966-11-01
High energy electron-positron storage rings give a way of making a new attack on the most important problems of elementary particle physics. All of us who have worked in the storage ring field designing, building, or using storage rings know this. The importance of that part of storage ring work concerning tests of quantum electrodynamics and mu meson physics is also generally appreciated by the larger physics community. However, I do not think that most of the physicists working in the elementary particle physics field realize the importance of the contribution that storage ring experiments can make to our understanding of the strongly interacting particles. I would therefore like to spend the next few minutes discussing the sort of things that one can do with storage rings in the strongly interacting particle field.
The approaches to the didactics of physics in the Czech Republic - Historical development
NASA Astrophysics Data System (ADS)
Žák, Vojtěch
2017-01-01
The aim of this paper is to describe the approaches to the didactics of physics that have appeared in the Czech Republic over the course of its development and to discuss, in particular, their relationships with other fields. This is potentially beneficial for understanding the current situation of Czech didactics of physics and for forecasting its future development. The main part of the article describes the particular approaches of Czech didactics of physics, such as the methodological, application, integration and communication approaches, in chronological order. Special attention is paid to the relationships between the didactics of physics and physics itself, pedagogy and other fields. The methodological approach is closely connected to physics, while the application approach comes essentially from pedagogy. The integration approach seeks to utilize other scientific fields to develop the didactics of physics. The communication approach proved to be the most elaborate; it belongs to the concepts that have most strongly influenced current didactical thinking in the Czech Republic, including within the didactics of socio-humanist fields. In spite of the importance of the communication approach, the other approaches are, to a certain extent, employed as well and co-exist with it.
Analysis of Errors Committed by Physics Students in Secondary Schools in Ilorin Metropolis, Nigeria
ERIC Educational Resources Information Center
Omosewo, Esther Ore; Akanbi, Abdulrasaq Oladimeji
2013-01-01
The study attempts to identify the types of errors committed by senior secondary school physics students in the Ilorin metropolis, and the influence of gender on the types of errors committed. Six (6) schools were purposively chosen for the study. One hundred and fifty-five students' scripts were randomly sampled for the study. Joint Mock physics essay questions…
Data Analysis and Synthesis for the ONR Undersea Sand Dunes in the South China Sea Field Experiments
2015-09-30
understanding of coastal oceanography by means of applying simple dynamical theories to high-quality observations obtained in the field. My primary... area of expertise is physical oceanography, but I also enjoy collaborating with biological, chemical, acoustical, and optical oceanographers to work... oceanography, and impact of the bottom configuration and physical oceanography on acoustic propagation. • The space and time scales of the dune
Health: The No-Man's-Land Between Physics and Biology.
Mansfield, Peter J
2015-10-01
Health as a positive attribute is poorly understood because understanding requires concepts from physics, of which physicians and other life scientists have a very poor grasp. This paper reviews the physics that bears on biology, in particular complex quaternions and scalar fields, relates these to the morphogenetic fields proposed by biologists, and defines health as an attribute of living action within these fields. The distinction of quality, as juxtaposed with quantity, proves essential. Its basic properties are set out, but a science and mathematics of quality are awaited. The implications of this model are discussed, particularly as proper health enhancement could set a natural limit to demand for, and therefore the cost of, medical services.
NASA Astrophysics Data System (ADS)
de Assis, Thiago A.; Dall’Agnol, Fernando F.
2018-05-01
Numerical simulations are important when assessing the many characteristics of field emission related phenomena. In small simulation domains, the electrostatic effect from the boundaries is known to influence the calculated apex field enhancement factor (FEF) of the emitter, but no established dependence has been reported at present. In this work, we report the dependence of the apex-FEF of a single conducting ellipsoidal emitter on the lateral size, L, and the height, H, of the simulation domain. Firstly, we analyze the error, ε, in the calculation of the apex-FEF as a function of H and L. Importantly, our results show that the effects of H and L on ε are scale invariant, allowing one to predict ε for given ratios L/h and H/h, where h is the height of the emitter. Next, we analyze the fractional change of the apex-FEF, δ, between a single emitter and a pair. We show that small relative errors in the apex-FEF, due to the finite domain size, are sufficient to alter the functional dependence δ(c), where c is the distance between the emitters in the pair. We show that δ(c) obeys a recently proposed power law decay (Forbes 2016 J. Appl. Phys. 120 054302) at sufficiently large distances in the limit of infinite domain size, which is not observed when using the long-established exponential decay (Bonard et al 2001 Adv. Mater. 13 184) or a more sophisticated fitting formula proposed recently by Harris et al (2015 AIP Adv. 5 087182). We show that the inverse-third-power-law functional dependence is respected for various systems, such as infinite arrays and small clusters of emitters with different shapes. Thus, δ ∝ c⁻ᵐ, with m = 3, is suggested to be a universal signature of the charge-blunting effect in small clusters or arrays, at sufficiently large distances between emitters with any shape. These results improve the physical understanding of field electron emission theory, allowing emitters in small clusters or arrays to be characterized accurately.
Quantifying measurement uncertainty and spatial variability in the context of model evaluation
NASA Astrophysics Data System (ADS)
Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.
2017-12-01
In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 to March 2017. The array of instrumentation deployed included 915-MHz wind profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors, such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.
A general overview of the history of soil science
NASA Astrophysics Data System (ADS)
Brevik, Eric C.; Cerdà, Artemi
2017-04-01
Human knowledge of soil has come a long way since agriculture began about 9000 BCE, when finding the best soils to grow crops in was largely based on a trial-and-error approach. Many innovations to manage and conserve soil, such as the plow, irrigation techniques, terraces, contour tillage, and even the engineering of artificial soils, were developed between 9000 BCE and 1500 CE. Scientific methods began to be employed in the study of soils during the Renaissance and many famous scientists addressed soil issues, but soil science did not evolve into an independent scientific field of study until the 1880s. In the early days of the study of soil as a science, soil survey activities provided one of the major means of advancing the field. As the 20th century progressed, advances in soil biology, chemistry, genesis, management, and physics allowed the use of soil information to expand beyond agriculture to environmental issues, human health, land use planning, and many other areas. The development of soil history as a subfield of the discipline in the latter part of the 20th century promises to help advance soil science through a better understanding of how we have arrived at the major theories that shape the modern study of soil science.
NASA Astrophysics Data System (ADS)
Altan, O.; Kemper, G.
2012-07-01
The GIS-based analysis of land use change in Istanbul delivers a huge and comprehensive database that can be used for further analysis. Trend analyses and scenarios enable a view of the future that highlights the needs for proper planning. Comparison with other cities also helps Istanbul avoid repeating errors made elsewhere. GIS in combination with ancillary data opens a wide field for managing the future of Istanbul.
New ideas about the physics of earthquakes
NASA Astrophysics Data System (ADS)
Rundle, John B.; Klein, William
1995-07-01
It may be no exaggeration to claim that this most recent quadrennium has seen more controversy, and thus more progress, in understanding the physics of earthquakes than any in recent memory. The most interesting development has clearly been the emergence of a large community of condensed matter physicists around the world who have begun working on the problem of earthquake physics. These scientists bring to the study of earthquakes an entirely new viewpoint, grounded in the physics of nucleation and critical phenomena in thermal, magnetic, and other systems. Moreover, a surprising technology transfer from geophysics to other fields has been made possible by the realization that models originally proposed to explain self-organization in earthquakes can also be used to explain similar processes in problems as disparate as brain dynamics in neurobiology (Hopfield, 1994), and charge density waves in solids (Brown and Gruner, 1994). An entirely new sub-discipline is emerging that is focused around the development and analysis of large scale numerical simulations of the dynamics of faults. At the same time, intriguing new laboratory and field data, together with insightful physical reasoning, have led to significant advances in our understanding of earthquake source physics. As a consequence, we can anticipate substantial improvement in our ability to understand the nature of earthquake occurrence. Moreover, while much research in the area of earthquake physics is fundamental in character, the results have many potential applications (Cornell et al., 1993) in the areas of earthquake risk and hazard analysis, and seismic zonation.
Unusual Applications of Ultrasound in Industry
NASA Astrophysics Data System (ADS)
Keilman, George
The application of physical acoustics in industry has been accelerated by increased understanding of the physics of industrial processes, coupled with rapid advancements in transducers, microelectronics, data acquisition, signal processing, and related software fields. This has led to some unusual applications of ultrasound to improve industrial processes.
NASA Astrophysics Data System (ADS)
Hasegawa, Makoto
A project team "Rika-Kobo" organized by university students has actively performed various science education activities at primary and secondary schools and other educational facilities, as well as at science events in local areas. The activities of this student project team span various fields of physics and science. In order to make the activities more attractive, the student members prepare original experiment tools and easily understandable presentations and explanations. Through such activities, the members have opportunities to gain new knowledge and to refresh their existing understanding of the related fields of physics and science. They also have chances to improve skills and abilities, such as presentation and problem finding and solving, that are useful for their career development. The activities of the student project team have also been welcomed by children, parents, teachers and other people in local areas, because they provide opportunities to learn new knowledge in physics and science.
NASA Technical Reports Server (NTRS)
Salas, Manuel D.
2007-01-01
The research program of the aerodynamics, aerothermodynamics and plasmadynamics discipline of NASA's Hypersonic Project is reviewed. Details are provided for each of its three components: 1) development of physics-based models of non-equilibrium chemistry, surface catalytic effects, turbulence, transition and radiation; 2) development of advanced simulation tools to enable increased spatial and time accuracy, increased geometrical complexity, grid adaptation, increased physical-processes complexity, uncertainty quantification and error control; and 3) establishment of experimental databases from ground and flight experiments to develop better understanding of high-speed flows and to provide data to validate and guide the development of simulation tools.
NASA Astrophysics Data System (ADS)
Dare, Emily Anna
According to the American Physical Society, women accounted for only 20% of bachelor's degrees in the fields of physics and engineering in 2010. This low percentage is likely related to young girls' K-12 education experiences, particularly their experiences prior to high school, during which time young women's perceptions of Science, Technology, Engineering, and Math (STEM) and STEM careers are formed (Catsambis, 1995; Maltese & Tai, 2011; National Research Council, 2012; Sadler, Sonnert, Hazari, & Tai, 2012; Tai, Liu, Maltese, & Fan, 2006; Scantlebury, 2014; Sikora & Pokropek, 2012). There are no significant gender differences in academic achievement in middle school, yet young women have less positive attitudes towards careers in science than their male peers (Catsambis, 1995; Scantlebury, 2014). This suggests that the low female representation in certain STEM fields is a result not of their abilities, but of their perceptions; for fields like physics where negative perceptions persist (Haussler & Hoffman, 2002; Labudde, Herzog, Neuenschander, Violi, & Gerber, 2000), it is clear that middle school is a critical time to intervene. This study examines the perceptions of 6th grade middle school students regarding physics and physics-related careers. A theoretical framework based on the literature of girl-friendly and integrated STEM strategies (Baker & Leary, 1995; Halpern et al., 2007; Haussler & Hoffman, 2000, 2002; Labudde et al., 2000; Moore et al., 2014b; Newbill & Cennamo, 2008; Rosser, 2000; Yanowitz, 2004) guided this work to understand how these instructional strategies may influence students' perceptions of physics for both girls and boys. The overarching goal of this work was to understand similarities and differences between girls' and boys' perceptions about physics and physics-related careers. This convergent parallel mixed-methods study uses a series of student surveys and focus group interviews to identify and understand these similarities and differences. Classroom observations also helped to identify what instructional strategies teachers used that influence student perceptions. Findings from this study indicate very few differences between the perceptions of physics and physics-related careers for 6th grade girls and boys. However, the differences that exist, though subtle, may indicate how K-12 science instruction could more positively influence girls' perceptions. For instance, while girls are just as interested in science class as their male counterparts, they are more motivated when a social context is included; this has implications for how they view physics-related careers. The findings of this study shed light on not only why fewer females pursue careers in physics, but also how K-12 science reform efforts might help to increase these numbers.
Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo
2018-01-01
This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory, called the physically consistent feedback-control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated from this scalar potential field. Hence, the flow field is physically consistent, because it is driven by the calculated inlet and outlet pressures without any artificial body forces. As compared with existing variational approaches, although this PFC-DA method does not guarantee the optimal solution, only one additional Poisson equation for the scalar potential field is required, a remarkable improvement for such a small additional computational cost at every iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data, as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach are shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
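To make the feedback structure of such a scheme concrete, here is a minimal Python sketch of one assimilation iteration on a 2D grid. Using the divergence of the velocity residual as the Poisson source, the fixed gain, and the simple Jacobi solver are all assumptions of this sketch, not the authors' exact formulation or boundary treatment.

```python
import numpy as np

def solve_poisson_2d(src, h, n_iter=2000):
    """Jacobi solve of laplacian(phi) = src on a rectangular grid, phi = 0 on the boundary."""
    phi = np.zeros_like(src)
    for _ in range(n_iter):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2]
                                  - h * h * src[1:-1, 1:-1])
    return phi

def pfc_da_step(u_num, u_ref, h, gain=1.0):
    """One schematic feedback iteration: the velocity residual drives a Poisson
    equation for a scalar potential that would, in the real method, update the
    inlet/outlet pressures. Source term and gain are illustrative assumptions."""
    ex = u_ref[0] - u_num[0]          # residual of the x-velocity
    ey = u_ref[1] - u_num[1]          # residual of the y-velocity
    src = np.zeros_like(ex)
    src[1:-1, 1:-1] = gain * ((ex[1:-1, 2:] - ex[1:-1, :-2]) / (2 * h)
                              + (ey[2:, 1:-1] - ey[:-2, 1:-1]) / (2 * h))
    return solve_poisson_2d(src, h)
```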
Thompson, William R.; Scott, Alexander; Loghmani, M. Terry; Ward, Samuel R.
2016-01-01
Achieving functional restoration of diseased or injured tissues is the ultimate goal of both regenerative medicine approaches and physical therapy interventions. Proper integration and healing of the surrogate cells, tissues, or organs introduced using regenerative medicine techniques are often dependent on the co-introduction of therapeutic physical stimuli. Thus, regenerative rehabilitation represents a collaborative approach whereby rehabilitation specialists, basic scientists, physicians, and surgeons work closely to enhance tissue restoration by creating tailored rehabilitation treatments. One of the primary treatment regimens that physical therapists use to promote tissue healing is the introduction of mechanical forces, or mechanotherapies. These mechanotherapies in regenerative rehabilitation activate specific biological responses in musculoskeletal tissues to enhance the integration, healing, and restorative capacity of implanted cells, tissues, or synthetic scaffolds. To become future leaders in the field of regenerative rehabilitation, physical therapists must understand the principles of mechanobiology and how mechanotherapies augment tissue responses. This perspective article provides an overview of mechanotherapy and discusses how mechanical signals are transmitted at the tissue, cellular, and molecular levels. The synergistic effects of physical interventions and pharmacological agents also are discussed. The goals are to highlight the critical importance of mechanical signals on biological tissue healing and to emphasize the need for collaboration within the field of regenerative rehabilitation. As this field continues to emerge, physical therapists are poised to provide a critical contribution by integrating mechanotherapies with regenerative medicine to restore musculoskeletal function. PMID:26637643
Tutorial: Physics and modeling of Hall thrusters
NASA Astrophysics Data System (ADS)
Boeuf, Jean-Pierre
2017-01-01
Hall thrusters are very efficient and competitive electric propulsion devices for satellites and are currently in use in a number of telecommunications and government spacecraft. Their power spans from 100 W to 20 kW, with thrust between a few mN and 1 N and specific impulse values between 1000 and 3000 s. The basic idea of Hall thrusters consists in generating a large local electric field in a plasma by using a transverse magnetic field to reduce the electron conductivity. This electric field can extract positive ions from the plasma and accelerate them to high velocity without extracting grids, providing the thrust. These principles are simple in appearance but the physics of Hall thrusters is very intricate and non-linear because of the complex electron transport across the magnetic field and its coupling with the electric field and the neutral atom density. This paper describes the basic physics of Hall thrusters and gives a (non-exhaustive) summary of the research efforts that have been devoted to the modelling and understanding of these devices in the last 20 years. Although the predictive capabilities of the models are still not sufficient for a full computer aided design of Hall thrusters, significant progress has been made in the qualitative and quantitative understanding of these devices.
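As a rough check on the performance figures quoted above, the ideal one-dimensional acceleration relations already give the right scales. This back-of-the-envelope sketch assumes singly charged xenon and a 300 V acceleration potential; the numbers are illustrative, not taken from the paper.

```python
import math

E_CHARGE = 1.602e-19             # elementary charge, C
M_XENON = 131.29 * 1.6605e-27    # mass of a singly charged Xe ion, kg
G0 = 9.81                        # standard gravity, m/s^2

def ion_exhaust_velocity(volts):
    """Ideal exhaust velocity of a singly charged Xe ion accelerated through `volts`."""
    return math.sqrt(2.0 * E_CHARGE * volts / M_XENON)

v = ion_exhaust_velocity(300.0)  # ~2.1e4 m/s for a typical 300 V discharge
isp = v / G0                     # ~2100 s, inside the quoted 1000-3000 s range
mdot = 5e-6                      # propellant mass flow, kg/s (illustrative value)
thrust = mdot * v                # ~0.1 N, inside the quoted mN-to-1 N range
print(f"v_e = {v:.0f} m/s, Isp = {isp:.0f} s, T = {1e3 * thrust:.0f} mN")
```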
DOE Office of Scientific and Technical Information (OSTI.GOV)
Punjabi, Alkesh; Ali, Halima
A new approach to integration of magnetic field lines in divertor tokamaks is proposed. In this approach, an analytic equilibrium generating function (EGF) is constructed in natural canonical coordinates (ψ,θ) from experimental data from a Grad-Shafranov equilibrium solver for a tokamak. ψ is the toroidal magnetic flux and θ is the poloidal angle. Natural canonical coordinates (ψ,θ,φ) can be transformed to physical position (R,Z,φ) using a canonical transformation. (R,Z,φ) are cylindrical coordinates. Another canonical transformation is used to construct a symplectic map for integration of magnetic field lines. Trajectories of field lines calculated from this symplectic map in natural canonical coordinates can be transformed to trajectories in real physical space. Unlike in magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)], the symplectic map in natural canonical coordinates can integrate trajectories across the separatrix surface, and at the same time, give trajectories in physical space. Unlike symplectic maps in physical coordinates (x,y) or (R,Z), the continuous analog of a symplectic map in natural canonical coordinates does not distort trajectories in toroidal planes intervening the discrete map. This approach is applied to the DIII-D tokamak [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The EGF for the DIII-D gives quite an accurate representation of equilibrium magnetic surfaces close to the separatrix surface. This new approach is applied to demonstrate the sensitivity of stochastic broadening using a set of perturbations that generically approximate the size of the field errors and statistical topological noise expected in a poloidally diverted tokamak. Plans for future application of this approach are discussed.
NASA Astrophysics Data System (ADS)
Punjabi, Alkesh; Ali, Halima
2008-12-01
A new approach to integration of magnetic field lines in divertor tokamaks is proposed. In this approach, an analytic equilibrium generating function (EGF) is constructed in natural canonical coordinates (ψ,θ) from experimental data from a Grad-Shafranov equilibrium solver for a tokamak. ψ is the toroidal magnetic flux and θ is the poloidal angle. Natural canonical coordinates (ψ,θ,φ) can be transformed to physical position (R,Z,φ) using a canonical transformation. (R,Z,φ) are cylindrical coordinates. Another canonical transformation is used to construct a symplectic map for integration of magnetic field lines. Trajectories of field lines calculated from this symplectic map in natural canonical coordinates can be transformed to trajectories in real physical space. Unlike in magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)], the symplectic map in natural canonical coordinates can integrate trajectories across the separatrix surface, and at the same time, give trajectories in physical space. Unlike symplectic maps in physical coordinates (x,y) or (R,Z), the continuous analog of a symplectic map in natural canonical coordinates does not distort trajectories in toroidal planes intervening the discrete map. This approach is applied to the DIII-D tokamak [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The EGF for the DIII-D gives quite an accurate representation of equilibrium magnetic surfaces close to the separatrix surface. This new approach is applied to demonstrate the sensitivity of stochastic broadening using a set of perturbations that generically approximate the size of the field errors and statistical topological noise expected in a poloidally diverted tokamak. Plans for future application of this approach are discussed.
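For readers unfamiliar with symplectic field-line maps, the Chirikov standard map below is a minimal, generic example of an area-preserving (ψ, θ) iteration of the kind used for field-line integration. It is only a paradigm for illustration, not the EGF-based map in natural canonical coordinates constructed by the authors.

```python
import numpy as np

def standard_map(psi, theta, K, n_steps):
    """Iterate the Chirikov standard map, a generic area-preserving (symplectic)
    map often used as a paradigm for magnetic field-line maps in tokamaks."""
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        psi = psi + K * np.sin(theta)            # kick in the flux-like coordinate
        theta = (theta + psi) % (2.0 * np.pi)    # twist in the angle coordinate
        traj[i] = psi, theta
    return traj

# Small K preserves intact surfaces; larger K produces stochastic broadening,
# loosely analogous to the effect of field errors on flux surfaces.
orbit = standard_map(psi=0.5, theta=1.0, K=0.9, n_steps=5000)
```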
A New Principle in Physics: the Principle "Finiteness", and Some Consequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abraham Sternlieb
2010-06-25
In this paper I propose a new principle in physics: the principle of "finiteness". It stems from the definition of physics as a science that deals (among other things) with measurable dimensional physical quantities. Since measurement results, including their errors, are always finite, the principle of finiteness postulates that the mathematical formulation of "legitimate" laws of physics should prevent exactly zero or infinite solutions. Some consequences of the principle of finiteness are discussed, in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The consequences are derived independently of any other theory or principle in physics. I propose "finiteness" as a postulate (like the constancy of the speed of light in vacuum, "c"), as opposed to a notion whose validity has to be corroborated by, or derived theoretically or experimentally from, other facts, theories, or principles.
Phobos laser ranging: Numerical Geodesy experiments for Martian system science
NASA Astrophysics Data System (ADS)
Dirkx, D.; Vermeersen, L. L. A.; Noomen, R.; Visser, P. N. A. M.
2014-09-01
Laser ranging is emerging as a technology for use over (inter)planetary distances, having the advantage of high (mm-cm) precision and accuracy and low mass and power consumption. We have performed numerical simulations to assess the science return in terms of geodetic observables of a hypothetical Phobos lander performing active two-way laser ranging with Earth-based stations. We focus our analysis on the estimation of Phobos and Mars gravitational, tidal and rotational parameters. We explicitly include systematic error sources in addition to uncorrelated random observation errors. This is achieved through the use of consider covariance parameters, specifically the ground station position and observation biases. Uncertainties for the consider parameters are set at 5 mm and at 1 mm for the Gaussian uncorrelated observation noise (for an observation integration time of 60 s). We perform the analysis for a mission duration up to 5 years. It is shown that a Phobos Laser Ranging (PLR) can contribute to a better understanding of the Martian system, opening the possibility for improved determination of a variety of physical parameters of Mars and Phobos. The simulations show that the mission concept is especially suited for estimating Mars tidal deformation parameters, estimating degree 2 Love numbers with absolute uncertainties at the 10⁻² to 10⁻⁴ level after 1 and 4 years, respectively, and providing separate estimates for the Martian quality factors at Sun and Phobos-forced frequencies. The estimation of Phobos libration amplitudes and gravity field coefficients provides an estimate of Phobos' relative equatorial and polar moments of inertia with an absolute uncertainty of 10⁻⁴ and 10⁻⁷, respectively, after 1 year. The observation of Phobos tidal deformation will be able to differentiate between a rubble pile and monolithic interior within 2 years. For all parameters, systematic errors have a much stronger influence (per unit uncertainty) than the uncorrelated Gaussian observation noise. This indicates the need for the inclusion of systematic errors in simulation studies and special attention to the mitigation of these errors in mission and system design.
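The consider-covariance formalism used here has a compact linear-algebra form: the formal covariance of the estimated parameters is inflated by the sensitivity of the estimate to the unestimated (consider) parameters. Below is a generic sketch with random stand-in partial-derivative matrices; the dimensions, noise values, and partials are illustrative, not those of the PLR simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_est, n_con = 200, 3, 2

A = rng.normal(size=(n_obs, n_est))   # partials w.r.t. estimated parameters
C = rng.normal(size=(n_obs, n_con))   # partials w.r.t. consider parameters
sigma_obs = 1e-3                      # 1 mm uncorrelated range noise
sigma_con = 5e-3                      # 5 mm consider (systematic) uncertainty

N_inv = np.linalg.inv(A.T @ A)
P = sigma_obs**2 * N_inv              # formal covariance (random noise only)
S = -N_inv @ A.T @ C                  # sensitivity of estimate to consider params
P_consider = P + S @ (sigma_con**2 * np.eye(n_con)) @ S.T
print(np.sqrt(np.diag(P)), np.sqrt(np.diag(P_consider)))
```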
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
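The seed dependence described here is easy to reproduce schematically: sweep the rms error level, run several random seeds per level, and report the spread. The toy gain model below (gain decaying with accumulated phase-error variance) is a placeholder assumption standing in for a full FELEX run, not the code's actual physics.

```python
import numpy as np

seeds = range(8)
error_levels = np.linspace(0.0, 0.01, 6)   # rms fractional field error (illustrative)

def relative_gain(error_rms, rng, n_periods=100):
    """Toy stand-in for a 3D FEL run: gain degrades with the variance of the
    accumulated phase error along the undulator. A real budget would come
    from the simulation code, not this placeholder model."""
    kicks = rng.normal(0.0, error_rms, n_periods)
    phase_error = np.cumsum(kicks)
    return np.exp(-np.var(phase_error))

for eps in error_levels:
    gains = [relative_gain(eps, np.random.default_rng(s)) for s in seeds]
    print(f"error={eps:.3f}  gain={np.mean(gains):.3f} +/- {np.std(gains):.3f}")
```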
John Bahcall and the Solar Neutrino Problem
NASA Astrophysics Data System (ADS)
Bahcall, Neta
2016-03-01
``I feel like dancing'', cheered John Bahcall upon hearing the exciting news from the SNO experiment in 2001. The results confirmed, with remarkable accuracy, John's 40-year effort to predict the rate of neutrinos from the Sun based on sophisticated Solar models. What began in 1962 by John Bahcall and Ray Davis as a pioneering project to test and confirm how the Sun shines, quickly turned into a four-decade-long mystery of the `Solar Neutrino Problem': John's models predicted a higher rate of neutrinos than detected by Davis and follow-up experiments. Was the theory of the Sun wrong? Were John's calculations in error? Were the neutrino experiments wrong? John worked tirelessly to understand the physics behind the Solar Neutrino Problem; he led the efforts to greatly increase the accuracy of the solar model, to understand its seismology and neutrino fluxes, to use the neutrino fluxes as a test for new physics, and to advocate for important new experiments. It slowly became clear that none of the then discussed possibilities --- error in the Solar model or neutrino experiments --- was the culprit. The SNO results revealed that John's calculations, and hence the theory of the Solar model, had been correct all along. Comparison of the data with John's theory demanded new physics --- neutrino oscillations. The Solar Neutrino saga is one of the most amazing scientific stories of the century: exploring a simple question of `How does the Sun shine?' led to the discovery of new physics. John's theoretical calculations are an integral part of this journey; they provide the foundation for the Solar Neutrino Problem, for confirming how the Sun shines, and for the need for neutrino oscillations. His tenacious persistence, dedication, enthusiasm and love for the project, and his leadership and advocacy of neutrino physics over many decades are a remarkable story of scientific triumph. I know John is smiling today.
Miller, Chad S
2013-01-01
Nearly half of medical errors can be attributed to an error of clinical reasoning or decision making. It is estimated that the correct diagnosis is missed or delayed in between 5% and 14% of acute hospital admissions. By understanding why and how physicians make these errors, it is hoped that strategies can be developed to decrease their number. In the present case, a patient presented with dyspnea, gastrointestinal symptoms and weight loss; the diagnosis was initially missed because the treating physicians took mental shortcuts and relied on heuristics. Heuristics have an inherent bias that can lead to faulty reasoning or conclusions, especially in complex or difficult cases. Affective bias, which is the overinvolvement of emotion in clinical decision making, limited the available information for diagnosis because of the hesitancy to acquire a full history and perform a complete physical examination in this patient. Zebra retreat, another type of bias, occurs when a rare diagnosis figures prominently in the differential diagnosis but the physician retreats from it for various reasons. Zebra retreat also factored into the delayed diagnosis. Through the description of these clinical reasoning errors in an actual case, it is hoped that future errors can be prevented, or that inspiration for additional research in this area will develop.
NASA Astrophysics Data System (ADS)
Crouch, Catherine H.; Heller, Kenneth
2014-05-01
We describe restructuring the introductory physics for life science students (IPLS) course to better support these students in using physics to understand their chosen fields. Our courses teach physics using biologically rich contexts. Specifically, we use examples in which fundamental physics contributes significantly to understanding a biological system to make explicit the value of physics to the life sciences. This requires selecting the course content to reflect the topics most relevant to biology while maintaining the fundamental disciplinary structure of physics. In addition to stressing the importance of the fundamental principles of physics, an important goal is developing students' quantitative and problem solving skills. Our guiding pedagogical framework is the cognitive apprenticeship model, in which learning occurs most effectively when students can articulate why what they are learning matters to them. In this article, we describe our courses, summarize initial assessment data, and identify needs for future research.
Hessian matrix approach for determining error field sensitivity to coil deviations.
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; ...
2018-03-15
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
Hessian matrix approach for determining error field sensitivity to coil deviations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
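The core numerical step, extracting sensitivity directions from the Hessian, reduces to a symmetric eigendecomposition. A minimal sketch follows, with a random symmetric matrix standing in for the FOCUS-computed Hessian (the matrix and its size are assumptions for illustration only):

```python
import numpy as np

# H: Hessian of the normal-field cost with respect to N coil-displacement
# degrees of freedom. A random symmetric stand-in is used here; in practice
# the FOCUS code would supply the actual matrix.
N = 12
A = np.random.default_rng(0).normal(size=(N, N))
H = A + A.T

eigvals, eigvecs = np.linalg.eigh(H)   # ascending eigenvalues
worst = eigvecs[:, -1]                 # displacement pattern the error field
                                       # is most sensitive to
print("most sensitive eigenvalue:", eigvals[-1])
```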
η and η' mesons from lattice QCD.
Christ, N H; Dawson, C; Izubuchi, T; Jung, C; Liu, Q; Mawhinney, R D; Sachrajda, C T; Soni, A; Zhou, R
2010-12-10
The large mass of the ninth pseudoscalar meson, the η', is believed to arise from the combined effects of the axial anomaly and the gauge field topology present in QCD. We report a realistic, 2+1-flavor, lattice QCD calculation of the η and η' masses and mixing which confirms this picture. The physical eigenstates show small octet-singlet mixing with a mixing angle of θ=-14.1(2.8)°. Extrapolation to the physical light quark mass gives, with statistical errors only, mη=573(6) MeV and mη'=947(142) MeV, consistent with the experimental values of 548 and 958 MeV.
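At its simplest, the mixing analysis reported here amounts to diagonalizing a 2x2 mass-squared matrix in the octet-singlet basis. The sketch below uses illustrative matrix entries chosen to land near the physical masses and a mixing angle around -14 degrees; the entries are assumptions for demonstration, not the paper's lattice data.

```python
import numpy as np

# Illustrative octet-singlet mass-squared matrix in GeV^2; the entries are
# placeholders tuned to give masses near the eta/eta' values, not lattice data.
M2 = np.array([[0.320, 0.150],
               [0.150, 0.875]])

w, v = np.linalg.eigh(M2)                          # ascending eigenvalues
m_eta, m_etap = np.sqrt(w)                         # physical masses in GeV
theta = np.degrees(np.arctan(v[1, 0] / v[0, 0]))   # octet-singlet mixing angle
print(f"m_eta = {1e3*m_eta:.0f} MeV, m_eta' = {1e3*m_etap:.0f} MeV, "
      f"theta = {theta:.1f} deg")   # ~531 MeV, ~955 MeV, ~-14.2 deg
```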
NASA Astrophysics Data System (ADS)
Kim, D.; Shin, S.; Ha, J.; Lee, D.; Lim, Y.; Chung, W.
2017-12-01
Seismic physical modeling is a laboratory-scale experiment that deals with the actual physical phenomena that may occur in the field. In seismic physical modeling, field conditions are downscaled, so even a small error in the laboratory may correspond to a large error at field scale. Accordingly, the positions of the source and the receiver must be precisely controlled in scale modeling. In this study, we have developed a seismic physical modeling system capable of precise 3-axis position control. For automatic and precise position control of an ultrasonic transducer (source and receiver) in the directions of the three axes (x, y, and z), a motor was mounted on each axis. The motors provide positional precision of 2'' for the x and y axes and 0.05 mm for the z axis. Because the system can automatically and precisely control positions along all three axes, simulations can be carried out using the latest exploration techniques, such as OBS and broadband seismic surveys. For the signal generation section, a waveform generator that can produce a maximum of two sources was used, and for the data acquisition section, which receives and stores reflected signals, an A/D converter that can receive a maximum of four signals was used. Because multiple sources and receivers can be used at the same time, the system supports diverse exploration methods, such as single-channel, multichannel, and 3-D exploration. A computer control program based on LabVIEW was created to control the position of the transducer, set the data acquisition parameters, and check the exploration data and progress in real time. A marine environment was simulated using a water tank 1 m wide, 1 m long, and 0.9 m high. To evaluate the performance and applicability of the seismic physical modeling system developed in this study, single-channel and multichannel explorations were carried out in this marine environment, and the accuracy of the modeling system was verified by comparing the acquired exploration data with numerical modeling data.
What do IPAQ questions mean to older adults? Lessons from cognitive interviews
2010-01-01
Background Most questionnaires used for physical activity (PA) surveillance have been developed for adults aged ≤65 years. Given the health benefits of PA for older adults and the aging of the population, it is important to include adults aged 65+ years in PA surveillance. However, few studies have examined how well older adults understand PA surveillance questionnaires. This study aimed to document older adults' understanding of questions from the International PA Questionnaire (IPAQ), which is used worldwide for PA surveillance. Methods Participants were 41 community-dwelling adults aged 65-89 years. They each completed IPAQ in a face-to-face semi-structured interview, using the "think-aloud" method, in which they expressed their thoughts out loud as they answered IPAQ questions. Interviews were transcribed and coded according to a three-stage model: understanding the intent of the question; performing the primary task (conducting the mental operations required to formulate a response); and response formatting (mapping the response into pre-specified response options). Results Most difficulties occurred during the understanding and performing the primary task stages. Errors included recalling PA in an "average" week, not in the previous 7 days; including PA lasting <10 minutes/session; reporting the same PA twice or thrice; and including the total time of an activity for which only a part of that time was at the intensity specified in the question. Participants were unclear what activities fitted within a question's scope and used a variety of strategies for determining the frequency and duration of their activities. Participants experienced more difficulties with the moderate-intensity PA and walking questions than with the vigorous-intensity PA questions. The sitting time question, particularly difficult for many participants, required the use of an answer strategy different from that used to answer questions about PA. Conclusions These findings indicate a need for caution in administering IPAQ to adults aged ≥65 years. Most errors resulted in over-reporting, although errors resulting in under-reporting were also noted. Given the nature of the errors made by participants, it is possible that similar errors occur when IPAQ is used in younger populations and that the errors identified could be minimized with small modifications to IPAQ. PMID:20459758
NASA Astrophysics Data System (ADS)
Dare, Emily A.; Roehrig, Gillian H.
2016-12-01
[This paper is part of the Focused Collection on Gender in Physics.] This study examined the perceptions of 6th grade middle school students regarding physics and physics-related careers. The overarching goal of this work was to understand similarities and differences between girls' and boys' perceptions surrounding physics and physics-related careers as part of a long-term effort to increase female interest and representation in this particular field of science. A theoretical framework based on the literature of girl-friendly and integrated STEM instructional strategies guided this work to understand how instructional strategies may influence and relate to students' perceptions. This convergent parallel mixed-methods study used a survey and focus group interviews to understand similarities and differences between girls' and boys' perceptions. Our findings indicate very few differences between girls and boys, but show that boys are more interested in the physics-related career of engineering. While girls are just as interested in science class as their male counterparts, they highly value the social aspect that often accompanies hands-on group activities. These findings shed light on how K-12 science reform efforts might help to increase the number of women pursuing careers related to physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt
2004-08-01
It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.
Hacking the quantum revolution: 1925-1975
NASA Astrophysics Data System (ADS)
Schweber, Silvan S.
2015-01-01
I argue that the quantum revolution should be seen as an Ian Hacking type of scientific revolution: a profound, longue durée, multidisciplinary process of transforming our understanding of physical nature, with deep-rooted social components from the start. The "revolution" exhibits a characteristic style of reasoning - the hierarchization of physical nature - and developed and uses a specific language - quantum field theory (QFT). It is by virtue of that language that the quantum theory has achieved some of its deepest insights into the description of the dynamics of the physical world. However, the meaning of what a quantum field theory is and what it describes has deeply altered, and one now speaks of "effective" quantum field theories. Interpreting all present day quantum field theories as but "effective" field theories sheds additional light on Philip Anderson's assertion that "More is different". This important element is addressed in the last part of the paper.
NASA Astrophysics Data System (ADS)
Meier, Walter Neil
This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but its effect is reduced by noise in the SSM/I motions; blending is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
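The optimal-interpolation idea at the heart of this comparison is, in its scalar form, a variance-weighted blend of the model and the observation. A minimal sketch, with illustrative numbers rather than the thesis's actual error statistics:

```python
import numpy as np

def optimal_interpolation(x_model, x_obs, var_model, var_obs):
    """Scalar optimal-interpolation analysis: weight model and observation
    inversely by their error variances (a minimal sketch of the OI idea)."""
    gain = var_model / (var_model + var_obs)     # Kalman-like gain in [0, 1]
    x_analysis = x_model + gain * (x_obs - x_model)
    var_analysis = (1.0 - gain) * var_model      # analysis variance < both inputs
    return x_analysis, var_analysis

# e.g. model ice drift 6 cm/s (error variance 4), SSM/I drift 9 cm/s (variance 9)
xa, va = optimal_interpolation(6.0, 9.0, 4.0, 9.0)
print(f"analysis = {xa:.2f} cm/s, variance = {va:.2f}")
```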
Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D
La Haye, R. J.; Paz-Soldan, C.; Strait, E. J.
2015-01-23
DIII-D experiments show that fully penetrated resonant n=1 error field locked modes in Ohmic plasmas with safety factor q95 ≳ 3 grow to a similarly large, disruptive size, independent of resonant error field correction. Relatively small resonant (m/n=2/1) static error fields are shielded in Ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower the rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption.
The Five A's: what do patients want after an adverse event?
Cox, Wendy
2007-01-01
After an adverse event, Five A's: Acknowledgment, Apology, All the Facts, Assurance and Appropriate Compensation, serve to meet the essential needs of patients and their families. This simple mnemonic creates a clear framework of understanding for the actions health professionals need to take to manage errors and adverse events in an empathic and patient-oriented fashion. While not all patients demand or need compensation, most need at least the first four A's. Patient-centered communication using this simple framework following an adverse event will foster a climate of understanding and frank discussion, addressing the emotional and physical needs of the whole patient and family.
Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin
2018-03-01
In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources from the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images from a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line-cut exposures using EUV lithography.
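A commonly used simplification of an EPE budget adds the overlay and half-CD error contributions in quadrature. The sketch below follows that generic form; the choice of terms and the numerical values are illustrative assumptions, not the budget presented in the paper.

```python
import math

def epe_budget(overlay_3s, global_cd_3s, local_cd_3s):
    """Simplified edge-placement-error budget (all inputs as 3-sigma values, nm):
    overlay plus half of the global and local CD errors, added in quadrature.
    Term selection and weighting are illustrative, not the paper's budget."""
    return math.sqrt(overlay_3s**2
                     + (0.5 * global_cd_3s)**2
                     + (0.5 * local_cd_3s)**2)

# Illustrative 3-sigma inputs in nm
print(f"EPE = {epe_budget(overlay_3s=2.5, global_cd_3s=1.5, local_cd_3s=2.0):.2f} nm")
```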
Colgan, Matthew S; Asner, Gregory P; Swemmer, Tony
2013-07-01
Tree biomass is an integrated measure of net growth and is critical for understanding, monitoring, and modeling ecosystem functions. Despite the importance of accurately measuring tree biomass, several fundamental barriers preclude direct measurement at large spatial scales, including the facts that trees must be felled to be weighed and that even modestly sized trees are challenging to maneuver once felled. Allometric methods allow for estimation of tree mass using structural characteristics, such as trunk diameter. Savanna trees present additional challenges, including limited available allometry and a prevalence of multiple stems per individual. Here we collected airborne lidar data over a semiarid savanna adjacent to the Kruger National Park, South Africa, and then harvested and weighed woody plant biomass at the plot scale to provide a standard against which field and airborne estimation methods could be compared. For an existing airborne lidar method, we found that half of the total error was due to averaging canopy height at the plot scale. This error was eliminated by instead measuring maximum height and crown area of individual trees from lidar data using an object-based method to identify individual tree crowns and estimate their biomass. The best object-based model approached the accuracy of field allometry at both the tree and plot levels, and it more than doubled the accuracy compared to existing airborne methods (17% vs. 44% deviation from harvested biomass). Allometric error accounted for less than one-third of the total residual error in airborne biomass estimates at the plot scale when using allometry with low bias. Airborne methods also gave more accurate predictions at the plot level than did field methods based on diameter-only allometry. These results provide a novel comparison of field and airborne biomass estimates using harvested plots and advance the role of lidar remote sensing in savanna ecosystems.
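The object-based estimate described here is, in essence, a power-law allometry applied to lidar-derived crown metrics for each segmented tree and summed per plot. A minimal sketch follows; the coefficients and the crown-area-times-height predictor are placeholder assumptions that would have to be fit against harvest data, not values from the paper.

```python
import numpy as np

def tree_biomass(height_m, crown_area_m2, a=0.8, b=1.1):
    """Illustrative power-law allometry on a lidar-derived tree-level metric
    (crown area x maximum height). Coefficients a, b are placeholders to be
    calibrated against harvested-biomass data."""
    return a * (crown_area_m2 * height_m) ** b

# Plot-level biomass = sum over the crowns segmented within the plot
heights = np.array([6.2, 4.8, 9.1])    # m, per segmented crown (example values)
areas = np.array([14.0, 9.5, 22.3])    # m^2, per segmented crown
plot_kg = tree_biomass(heights, areas).sum()
print(f"plot biomass ~ {plot_kg:.0f} kg")
```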
Advancing the understanding of plasma transport in mid-size stellarators
NASA Astrophysics Data System (ADS)
Hidalgo, Carlos; Talmadge, Joseph; Ramisch, Mirko; the TJ-II, HSX and TJ-K Teams
2017-01-01
The tokamak and the stellarator are the two main candidate concepts for magnetically confining fusion plasmas. The flexibility of the mid-size stellarator devices together with their unique diagnostic capabilities make them ideally suited to study the relation between magnetic topology, electric fields and transport. This paper addresses advances in the understanding of plasma transport in mid-size stellarators with an emphasis on the physics of flows, transport control, impurity and particle transport and fast particles. The results described here emphasize an improved physics understanding of phenomena in stellarators that complements the empirical approach. Experiments in mid-size stellarators support the development of advanced plasma scenarios in Wendelstein 7-X (W7-X) and, in concert with better physics understanding in tokamaks, may ultimately lead to an advance in the prediction of burning plasma behaviour.
NASA Astrophysics Data System (ADS)
Tuminaro, Jonathan
Many introductory, algebra-based physics students perform poorly on mathematical problem solving tasks in physics. There are at least two possible, distinct reasons for this poor performance: (1) students simply lack the mathematical skills needed to solve problems in physics, or (2) students do not know how to apply the mathematical skills they have to particular problem situations in physics. While many students do lack the requisite mathematical skills, a major finding from this work is that the majority of students possess the requisite mathematical skills, yet fail to use or interpret them in the context of physics. In this thesis I propose a theoretical framework to analyze and describe students' mathematical thinking in physics. In particular, I attempt to answer two questions. What are the cognitive tools involved in formal mathematical thinking in physics? And, why do students make the kinds of mistakes they do when using mathematics in physics? According to the proposed theoretical framework there are three major theoretical constructs: mathematical resources, which are the knowledge elements that are activated in mathematical thinking and problem solving; epistemic games, which are patterns of activities that use particular kinds of knowledge to create new knowledge or solve a problem; and frames, which are structures of expectations that determine how individuals interpret situations or events. The empirical basis for this study comes from videotaped sessions of college students solving homework problems. The students are enrolled in an algebra-based introductory physics course. The videotapes were transcribed and analyzed using the aforementioned theoretical framework. Two important results from this work are: (1) the construction of a theoretical framework that offers researchers a vocabulary (ontological classification of cognitive structures) and grammar (relationship between the cognitive structures) for understanding the nature and origin of mathematical use in the context of physics, and (2) a detailed understanding, in terms of the proposed theoretical framework, of the errors that students make when using mathematics in the context of physics.
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
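As an illustration of the pipeline the abstract analyzes (not the authors' code), one can assemble a pressure field from a PIV-style velocity snapshot by discretizing the pressure Poisson equation; perturbing the velocity with noise then shows how the error propagates into the computed pressure. A minimal 2D Jacobi-iteration sketch with assumed homogeneous Dirichlet boundaries:

```python
import numpy as np

def pressure_poisson(u, v, dx, rho=1.0, iters=5000):
    """Solve lap(p) = -rho*(u_x**2 + 2*u_y*v_x + v_y**2) on a square grid,
    with p = 0 Dirichlet boundaries (an assumed, idealized boundary condition)."""
    u_y, u_x = np.gradient(u, dx)   # axis 0 = y, axis 1 = x
    v_y, v_x = np.gradient(v, dx)
    rhs = -rho * (u_x**2 + 2.0 * u_y * v_x + v_y**2)
    p = np.zeros_like(u)
    for _ in range(iters):          # Jacobi iteration on interior points
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2]
                                + p[2:, 1:-1] + p[:-2, 1:-1]
                                - dx**2 * rhs[1:-1, 1:-1])
    return p

# Synthetic "PIV" field (Taylor-Green-like), plus noise to probe error propagation
n, dx = 64, 1.0 / 63
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = np.sin(np.pi * x) * np.cos(np.pi * y)
v = -np.cos(np.pi * x) * np.sin(np.pi * y)
p_clean = pressure_poisson(u, v, dx)
rng = np.random.default_rng(0)
p_noisy = pressure_poisson(u + 0.01 * rng.standard_normal(u.shape),
                           v + 0.01 * rng.standard_normal(v.shape), dx)
print("rms propagated pressure error:", np.sqrt(np.mean((p_noisy - p_clean) ** 2)))
```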
Fixing Stellarator Magnetic Surfaces
NASA Astrophysics Data System (ADS)
Hanson, James D.
1999-11-01
Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
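The SVD step in this approach has a compact linear-algebra core: if a sensitivity matrix A maps correction-coil settings to island widths at the targeted rational surfaces, the minimal-norm correction cancelling the error-driven islands b solves A x = -b via the pseudoinverse. A schematic sketch (A and b are made-up placeholders, not output of an actual island-size calculation):

```python
import numpy as np

# Hypothetical sensitivity matrix: rows = resonant surfaces, cols = correction knobs
A = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.9, 0.4]])   # island size per unit knob setting
b = np.array([1.5, -0.7])        # island sizes driven by the error field

# Minimal-norm knob settings that zero the islands: x = -pinv(A) @ b
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = -(Vt.T @ ((U.T @ b) / s))    # truncate small singular values to regularize if needed
print("corrections:", x, "residual islands:", A @ x + b)
```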
In quest of axionic hairs in quasars
NASA Astrophysics Data System (ADS)
Banerjee, Indrani; Mandal, Bhaswati; SenGupta, Soumitra
2018-03-01
The presence of an axionic field can provide a plausible explanation for several long-standing problems in physics, such as dark matter and dark energy. The pseudo-scalar axion, whose derivative corresponds to the Hodge dual of the Kalb-Ramond field strength in four dimensions, plays a crucial role in explaining several astrophysical and cosmological observations. Therefore, the detection of axionic hairs/the Kalb-Ramond field, which appears as a closed string excitation in the heterotic string spectrum, may provide profound insight into our understanding of the current universe. The level of precision achieved in solar-system tests of general relativity is not sufficient to detect the presence of the axion. However, the near-horizon regime of quasars, where the curvature effects are greatest, is a natural laboratory to probe such additions to the matter sector. The continuum spectrum emitted from the accretion disk around quasars encapsulates the imprints of the background spacetime and hence acts as a storehouse of information regarding the nature of gravitational interaction in extreme situations. The surfeit of data available in the electromagnetic domain provides further motivation to explore such systems. Using the optical data for eighty Palomar Green quasars, we demonstrate that the theoretical estimates of optical luminosity explain the observations best when the axionic field is assumed to be absent. However, an axion that violates the energy condition seems to be favored by the observations, which has several interesting consequences. Error estimators, including reduced χ2, Nash-Sutcliffe efficiency, index of agreement, and modified versions of the last two, are used to solidify our conclusion, and the implications of our result are discussed.
Accreting neutron stars, black holes, and degenerate dwarf stars.
Pines, D
1980-02-08
During the past 8 years, extended temporal and broadband spectroscopic studies carried out by x-ray astronomical satellites have led to the identification of specific compact x-ray sources as accreting neutron stars, black holes, and degenerate dwarf stars in close binary systems. Such sources provide a unique opportunity to study matter under extreme conditions not accessible in the terrestrial laboratory. Quantitative theoretical models have been developed which demonstrate that detailed studies of these sources will lead to a greatly increased understanding of dense and superdense hadron matter, hadron superfluidity, high-temperature plasma in superstrong magnetic fields, and physical processes in strong gravitational fields. Through a combination of theory and observation such studies will make possible the determination of the mass, radius, magnetic field, and structure of neutron stars and degenerate dwarf stars and the identification of further candidate black holes, and will contribute appreciably to our understanding of the physics of accretion by compact astronomical objects.
NASA Astrophysics Data System (ADS)
Lieb, Verena; Schmidt, Michael; Willberg, Martin; Pail, Roland
2017-04-01
Precise height systems require high-resolution and high-quality gravity data. However, such data sets are sparse, especially in developing or newly industrializing countries. We therefore initiated the DFG project "ORG4heights" to formulate a general scientific concept for how to (1) optimally combine all available data sets and (2) estimate realistic errors. The resulting regional gravity field models then deliver the fundamental basis for (3) establishing physical national height systems. The key innovative aspect of the project is the development of a method that links gravity satellite mission data (of low to mid resolution) with terrestrial data (of high down to low quality). We pursue an optimal combination of the data that exploits their full information content, including uncertainty quantification and analysis of systematic omission errors. Regional gravity field modeling via Multi-Resolution Representation (MRR) and Least Squares Collocation (LSC) are studied in detail and compared on the basis of their theoretical foundations. Building on these findings, the MRR shall be further developed towards a pyramid algorithm. Within the project, we investigate comprehensive case studies in Saudi Arabia and South America, i.e., regions with varying topography, by means of simulated data with heterogeneous distribution, resolution, quality, and altitude. GPS and tide gauge records serve as complementary input or validation data. The resulting products include error propagation, and internal and external validation. A generalized concept is then derived for establishing physical height systems in developing countries. The recommendations may serve as guidelines for science and administration. We present the ideas and strategies of the project, which combines methodological development and practical applications with high socio-economic impact.
Student Difficulties Regarding Symbolic and Graphical Representations of Vector Fields
ERIC Educational Resources Information Center
Bollen, Laurens; van Kampen, Paul; Baily, Charles; Kelly, Mossy; De Cock, Mieke
2017-01-01
The ability to switch between various representations is an invaluable problem-solving skill in physics. In addition, research has shown that using multiple representations can greatly enhance a person's understanding of mathematical and physical concepts. This paper describes a study of student difficulties regarding interpreting, constructing,…
Physics Teaching: Mathematics as an Epistemological Tool
ERIC Educational Resources Information Center
Kneubil, Fabiana B.; Robilotta, Manoel R.
2015-01-01
We study the interconnection between Physics and Mathematics in concrete instances, departing from the usual expression for the Coulomb electric field, produced by a point-like charge. It is scrutinized by means of six epistemology-intensive questions and radical answers are proposed, intended to widen one's understanding of the subject. Our…
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events, or becoming deskilled in problem solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; an understanding of the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villarreal, Oscar D.; Yu, Lili
Computing the ligand-protein binding affinity (or the Gibbs free energy) with chemical accuracy has long been a challenge for which many methods/approaches have been developed and refined with various successful applications. False positives and, even more harmful, false negatives have been and still are a common occurrence in practical applications. Inevitable in all approaches are the errors in the force field parameters we obtain from quantum mechanical computation and/or empirical fittings for the intra- and inter-molecular interactions. These errors propagate to the final results of the computed binding affinities even if we were able to perfectly implement the statistical mechanics of all the processes relevant to a given problem. And they are actually amplified to various degrees even in the mature, sophisticated computational approaches. In particular, the free energy perturbation (alchemical) approaches amplify the errors in the force field parameters because they rely on extracting the small differences between similarly large numbers. In this paper, we develop a hybrid steered molecular dynamics (hSMD) approach to the difficult binding problems of a ligand buried deep inside a protein. Sampling the transition along a physical (not alchemical) dissociation path of opening up the binding cavity, pulling out the ligand, and closing the cavity back, we can avoid the problem of error amplification by not relying on small differences between similar numbers. We tested this new form of hSMD on retinol inside cellular retinol-binding protein 1 and three cases of a ligand (a benzylacetate, a 2-nitrothiophene, and a benzene) inside a T4 lysozyme L99A/M102Q(H) double mutant. In all cases, we obtained binding free energies in close agreement with the experimentally measured values. This indicates that the force field parameters we employed are accurate and that hSMD (a brute force, unsophisticated approach) is free from the problem of error amplification suffered by many sophisticated approaches in the literature.
Towards clinically translatable in vivo nanodiagnostics
NASA Astrophysics Data System (ADS)
Park, Seung-Min; Aalipour, Amin; Vermesh, Ophir; Yu, Jung Ho; Gambhir, Sanjiv S.
2017-05-01
Nanodiagnostics as a field makes use of fundamental advances in nanobiotechnology to diagnose, characterize and manage disease at the molecular scale. As these strategies move closer to routine clinical use, a proper understanding of different imaging modalities, relevant biological systems and physical properties governing nanoscale interactions is necessary to rationally engineer next-generation bionanomaterials. In this Review, we analyse the background physics of several clinically relevant imaging modalities and their associated sensitivity and specificity, provide an overview of the materials currently used for in vivo nanodiagnostics, and assess the progress made towards clinical translation. This work provides a framework for understanding both the impressive progress made thus far in the nanodiagnostics field as well as presenting challenges that must be overcome to obtain widespread clinical adoption.
Developing and Implementing the Data Mining Algorithms in RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
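RAVEN's own API is not reproduced here, but the underlying idea, clustering many sampled scenarios so analysts can inspect a few representative groups rather than every run, can be sketched with a generic library; scikit-learn stands in for the data mining layer, and the scenario outcomes are invented:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical scenario outcomes: rows = sampled runs, cols = figures of merit
# (e.g., peak temperature, time to failure) returned by a system/physics code.
scenarios = np.vstack([rng.normal([500.0, 120.0], [15.0, 10.0], (100, 2)),
                       rng.normal([650.0, 60.0], [20.0, 8.0], (40, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scenarios)
for k in range(2):
    group = scenarios[labels == k]
    print(f"cluster {k}: {len(group)} runs, mean outcome {group.mean(axis=0)}")
```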
Learning Kinematic Constraints in Laparoscopic Surgery
Huang, Felix C.; Mussa-Ivaldi, Ferdinando A.; Pugh, Carla M.; Patton, James L.
2012-01-01
To better understand how kinematic variables impact learning in surgical training, we devised an interactive environment for simulated laparoscopic maneuvers, using either 1) mechanical constraints typical of a surgical "box-trainer" or 2) virtual constraints in which free hand movements control virtual tool motion. During training, the virtual tool responded to the absolute position in space (Position-Based) or the orientation (Orientation-Based) of a hand-held sensor. Volunteers were further assigned to different sequences of target distances (Near-Far-Near or Far-Near-Far). Training with the Orientation-Based constraint enabled much lower path error and shorter movement times during training, which suggests that tool motion that simply mirrors joint motion is easier to learn. When evaluated in physically constrained (physical box-trainer) conditions, each group exhibited improved performance from training. However, Position-Based training enabled greater reductions in movement error relative to Orientation-Based (mean difference: 14.0 percent; CI: 0.7, 28.6). Furthermore, the Near-Far-Near schedule allowed a greater decrease in task time relative to the Far-Near-Far sequence (mean difference: −13.5 percent; CI: −19.5, −7.5). Training that focused on shallow tool insertion (near targets) might promote more efficient movement strategies by emphasizing the curvature of tool motion. In addition, our findings suggest that an understanding of absolute tool position is critical to coping with mechanical interactions between the tool and trocar. PMID:23293709
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2017-12-01
Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been considered powerful tools for predicting future hydrological conditions in watershed systems over the past two decades. Streamflow and evapotranspiration are the two important components in watershed water balance estimation, as the former is the most commonly used indicator of the overall water budget, and the latter is the second biggest component of the water budget (the biggest outflow from the system). One of the main concerns in watershed-scale hydrological modeling is the uncertainty associated with model predictions, which could arise from errors in model parameters and input meteorological data, or errors in the model representation of the physics of hydrological processes. Understanding and quantifying these uncertainties is vital for water resources managers to make proper decisions based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically-calibrated, physically-based, semi-distributed hydrological model. The results of this study could provide valuable insights for applying hydrological models in large-scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on hydrological process variables of interest under different climate change scenarios.
Visualization of 3-D tensor fields
NASA Technical Reports Server (NTRS)
Hesselink, L.
1996-01-01
Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
The effectiveness of pretreatment physics plan review for detecting errors in radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, Olga; Zeng, Jing; Novak, Avrey; Nyflot, Matthew; Ford, Eric
2016-09-01
Purpose: The pretreatment physics plan review is a standard tool for ensuring treatment quality. Studies have shown that the majority of errors in radiation oncology originate in treatment planning, which underscores the importance of the pretreatment physics plan review. This quality assurance measure is fundamentally important and central to the safety of patients and the quality of care that they receive. However, little is known about its effectiveness. The purpose of this study was to analyze reported incidents to quantify the effectiveness of the pretreatment physics plan review with the goal of improving it. Methods: This study analyzed 522 potentially severe or critical near-miss events within an institutional incident learning system collected over a three-year period. Of these 522 events, 356 originated at a workflow point that was prior to the pretreatment physics plan review. The remaining 166 events originated after the pretreatment physics plan review and were not considered in the study. The applicable 356 events were classified into one of the three categories: (1) events detected by the pretreatment physics plan review, (2) events not detected but "potentially detectable" by the physics review, and (3) events "not detectable" by the physics review. Potentially detectable events were further classified by which specific checks performed during the pretreatment physics plan review detected or could have detected the event. For these events, the associated specific check was also evaluated as to the possibility of automating that check given current data structures. For comparison, a similar analysis was carried out on 81 events from the international SAFRON radiation oncology incident learning system. Results: Of the 356 applicable events from the institutional database, 180/356 (51%) were detected or could have been detected by the pretreatment physics plan review. Of these events, 125 actually passed through the physics review; however, only 38% (47/125) were actually detected at the review. Of the 81 events from the SAFRON database, 66/81 (81%) were potentially detectable by the pretreatment physics plan review. From the institutional database, three specific physics checks were particularly effective at detecting events (combined effectiveness of 38%): verifying the isocenter (39/180), verifying DRRs (17/180), and verifying that the plan matched the prescription (12/180). The most effective checks from the SAFRON database were verifying that the plan matched the prescription (13/66) and verifying the field parameters in the record and verify system against those in the plan (23/66). Software-based plan checking systems, if available, would have potential effectiveness of 29% and 64% at detecting events from the institutional and SAFRON databases, respectively. Conclusions: Pretreatment physics plan review is a key safety measure and can detect a high percentage of errors. However, the majority of errors that potentially could have been detected were not detected in this study, indicating the need to improve the pretreatment physics plan review performance. Suggestions for improvement include the automation of specific physics checks performed during the pretreatment physics plan review and the standardization of the review process.
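One of the checks the study flags as automatable, verifying that the plan matches the prescription, reduces to field-by-field comparison of structured records. A toy sketch with hypothetical record fields (not an actual oncology information system API):

```python
from dataclasses import dataclass

@dataclass
class Rx:
    dose_per_fraction_gy: float
    fractions: int
    site: str

def check_plan_vs_prescription(plan: Rx, rx: Rx, tol_gy: float = 0.01):
    """Return a list of discrepancy messages; an empty list means the check passes."""
    issues = []
    if abs(plan.dose_per_fraction_gy - rx.dose_per_fraction_gy) > tol_gy:
        issues.append("dose per fraction mismatch")
    if plan.fractions != rx.fractions:
        issues.append("fraction count mismatch")
    if plan.site.lower() != rx.site.lower():
        issues.append("treatment site mismatch")
    return issues

print(check_plan_vs_prescription(Rx(2.0, 30, "prostate"), Rx(2.0, 25, "prostate")))
# -> ['fraction count mismatch']
```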
On the reach of perturbative descriptions for dark matter displacement fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldauf, Tobias; Zaldarriaga, Matias; Schaan, Emmanuel
We study Lagrangian Perturbation Theory (LPT) and its regularization in the Effective Field Theory (EFT) approach. We evaluate the LPT displacement with the same phases as a corresponding N-body simulation, which allows us to compare perturbation theory to the non-linear simulation with significantly reduced cosmic variance, and provides a more stringent test than simply comparing power spectra. We reliably detect a non-vanishing leading order EFT coefficient and a stochastic displacement term, uncorrelated with the LPT terms. This stochastic term is expected in the EFT framework, and, to the best of our understanding, is not an artifact of numerical errors or transients in our simulations. This term constitutes a limit to the accuracy of perturbative descriptions of the displacement field and its phases, corresponding to a 1% error on the non-linear power spectrum at k = 0.2 h Mpc⁻¹ at z = 0. Predicting the displacement power spectrum to higher accuracy or larger wavenumbers thus requires a model for the stochastic displacement.
Domain Wall Evolution in Phase Transforming Oxides
2015-01-14
Domain wall configurations are studied under driving forces (e.g. changes in temperature and electric fields) in an effort to: 1) understand the underlying linkage between ... Related publication: "Extensive domain wall motion and deaging resistance in morphotropic 0.55Bi(Ni1/2Ti1/2)O3–0.45PbTiO3 polycrystalline ferroelectrics," Applied Physics Letters.
Report of the panel on geopotential fields: Magnetic field, section 9
NASA Technical Reports Server (NTRS)
Achache, Jose J.; Backus, George E.; Benton, Edward R.; Harrison, Christopher G. A.; Langel, Robert A.
1991-01-01
The objective of the NASA Geodynamics program for magnetic field measurements is to study the physical state, processes and evolution of the Earth and its environment via interpretation of measurements of the near-Earth magnetic field in conjunction with other geophysical data. The fields measured derive from sources in the core, the lithosphere, the ionosphere, and the magnetosphere. Panel recommendations include initiation of multi-decade-long continuous scalar and vector measurements of the Earth's magnetic field by launching a five-year satellite mission to measure the field to about 1 nT accuracy, improvement of our resolution of the lithospheric component of the field by developing a low-altitude satellite mission, and support of theoretical studies and continuing analysis of data to better understand the source physics and improve the modeling capabilities for different source regions.
NASA Astrophysics Data System (ADS)
Yeoh, S. K.; Li, Z.; Goldstein, D. B.; Varghese, P. L.; Trafton, L. M.; Levin, D. A.
2014-12-01
The Enceladus ice/vapor plume not only accounts for the various features observed in the Saturnian system, such as the E-ring, the narrow neutral H2O torus, and Enceladus' own bright albedo, but also raises exciting new possibilities, including the existence of liquid water on Enceladus. Therefore, understanding the plume and its physics is important. Here we assume that the plume arises from flow expansion within multiple narrow subsurface cracks connected to reservoirs of liquid water underground, and simulate this expanding flow from the underground reservoir out to several Enceladus radii where Cassini data are available for comparison. The direct simulation Monte Carlo (DSMC) method is used to simulate the subsurface and near-field collisional regions and a free-molecular model is used to propagate the plume out into the far-field. We include the following physical processes in our simulations: the flow interaction with the crack walls, grain condensation from the vapor phase, non-equilibrium effects (e.g. freezing of molecular internal energy modes), the interaction between the vapor and the ice grains, the gravitational fields of Enceladus and Saturn, and Coriolis and centrifugal forces (due to motion in non-inertial reference frame). The end result is a plume model that includes the relevant physics of the flow from the underground source out to where Cassini measurements are taken. We have made certain assumptions about the channel geometry and reservoir conditions. The model is constrained using various available Cassini data (particularly those of INMS, CDA and UVIS) to understand the plume physics as well as estimate the vapor and grain production rates and its temporal variability.
Franke, Jörg; Brönnimann, Stefan; Bhend, Jonas; Brugnara, Yuri
2017-01-01
Climatic variations at decadal scales such as phases of accelerated warming or weak monsoons have profound effects on society and economy. Studying these variations requires insights from the past. However, most current reconstructions provide either time series or fields of regional surface climate, which limit our understanding of the underlying dynamics. Here, we present the first monthly paleo-reanalysis covering the period 1600 to 2005. Over land, instrumental temperature and surface pressure observations, temperature indices derived from historical documents and climate sensitive tree-ring measurements were assimilated into an atmospheric general circulation model ensemble using a Kalman filtering technique. This data set combines the advantage of traditional reconstruction methods of being as close as possible to observations with the advantage of climate models of being physically consistent and having 3-dimensional information about the state of the atmosphere for various variables and at all points in time. In contrast to most statistical reconstructions, centennial variability stems from the climate model and its forcings, no stationarity assumptions are made and error estimates are provided. PMID:28585926
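The assimilation step described here follows, in its generic ensemble Kalman filter form, the standard update equations (textbook notation, not the specific implementation of this reanalysis):

$$
\mathbf{x}^{a} = \mathbf{x}^{f} + \mathbf{K}\,(\mathbf{y} - \mathbf{H}\mathbf{x}^{f}),
\qquad
\mathbf{K} = \mathbf{P}^{f}\mathbf{H}^{\mathsf{T}}\,(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{\mathsf{T}} + \mathbf{R})^{-1},
$$

where x^f is the model (forecast) state, y collects the observations (instrumental series, documentary temperature indices, tree-ring measurements), H maps model state to observation space, P^f is the forecast-error covariance estimated from the ensemble, and R is the observation-error covariance.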
Virtual reality simulation for the optimization of endovascular procedures: current perspectives.
Rudarakanchana, Nung; Van Herzeele, Isabelle; Desender, Liesbeth; Cheshire, Nicholas J W
2015-01-01
Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics, dynamic imaging, and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management, and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes.
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2017-04-01
How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skills of the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from the ensemble Kalman filter analysis results obtained by assimilating observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noise. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differ only in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations significantly improve the ensemble-mean prediction skills throughout the 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble mean from a series of zero-mean perturbations, which reduces the forecasting biases and then corrects the forecast through this nonlinear heating mechanism.
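Schematically (hypothetical field shapes, dynamics, and amplitudes, not the study's fitted error model), the perturbation step adds independent zero-mean noise to each ensemble member's physical fields at every forecast step, and a nonlinearity can turn that zero-mean noise into a nonzero ensemble-mean effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n_members, nx, n_steps = 20, 64, 12
sst = np.zeros((n_members, nx))   # stand-in for each member's SST anomaly field
sigma = 0.05                      # assumed model-error amplitude (placeholder)

def step_model(field):
    """Toy 'dynamics': damped persistence plus a weak nonlinearity (illustrative only)."""
    return 0.95 * field + 0.1 * np.tanh(field)

for _ in range(n_steps):          # 12-'month' forecast loop
    for m in range(n_members):
        # zero-mean stochastic model-error perturbation, one draw per member and step
        sst[m] = step_model(sst[m]) + sigma * rng.standard_normal(nx)

print("ensemble-mean field rms:", np.sqrt(np.mean(sst.mean(axis=0) ** 2)))
```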
Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin
2016-01-01
An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system have not been reported extensively, in particular in comparison with manual transcription. In this work, an assessment of the quality of an automatic export process for laboratory data from a HIS is presented. Quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10^-3). For the automatic process, the general error rate was 1.9% to 12.1%, with the lowest error rate obtained when excluding information missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research if the data in the HIS, as well as the physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
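The ETL pattern the study evaluates can be sketched in a few lines; the record fields, unit conversion, and validation rule below are hypothetical, standing in for a real HIS-to-EDC mapping:

```python
def extract(his_rows):
    """Pull raw laboratory rows from the HIS (here: a list of dicts)."""
    return [r for r in his_rows if r.get("type") == "lab"]

def transform(row):
    """Normalize units and field names; flag rows that fail validation."""
    value = float(row["value"])
    if row["unit"] == "mg/dL":                  # assumed conversion rule
        value, unit = value / 18.0, "mmol/L"    # e.g., glucose mg/dL -> mmol/L
    else:
        unit = row["unit"]
    return {"patient_id": row["pid"], "analyte": row["analyte"],
            "value": value, "unit": unit, "valid": value >= 0}

def load(edc, rows):
    """Append validated rows to the EDC store; return the number of rejects."""
    rejects = 0
    for r in rows:
        if r["valid"]:
            edc.append(r)
        else:
            rejects += 1
    return rejects

his = [{"type": "lab", "pid": "p1", "analyte": "glucose", "value": "90", "unit": "mg/dL"}]
edc = []
print("rejected:", load(edc, [transform(r) for r in extract(his)]), "loaded:", edc)
```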
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
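The contrast between the two error types can be made concrete with a small simulation: classical error (the measurement scatters around the true dose) attenuates a fitted dose-response slope, while Berkson error (the true dose scatters around an assigned group dose) leaves the slope approximately unbiased but inflates residual variance. A sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta = 100_000, 2.0

# Classical error: measurement = truth + noise -> regression slope attenuates.
x_true = rng.normal(1.0, 0.5, n)
y = beta * x_true + rng.normal(0.0, 0.2, n)
x_meas = x_true + rng.normal(0.0, 0.5, n)

# Berkson error: truth = assigned dose + noise -> slope on assigned dose unbiased.
x_assigned = rng.normal(1.0, 0.3, n)
y_berkson = beta * (x_assigned + rng.normal(0.0, 0.4, n)) + rng.normal(0.0, 0.2, n)

print("true slope:", beta)
print("classical-error slope (attenuated):", np.polyfit(x_meas, y, 1)[0])
print("Berkson-error slope (~unbiased):", np.polyfit(x_assigned, y_berkson, 1)[0])
```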
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
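The k-means stage described above, clustering field-map intensities to flag candidate error regions for reinitialization, is generic enough to sketch; this is not the FLAWLESS/REFINED code, just the idea on a synthetic map:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
b0 = rng.normal(0.0, 20.0, (64, 64))   # synthetic B0 field map (Hz), roughly smooth
b0[20:30, 20:30] += 600.0              # an implanted "swap" region with offset field

# Cluster pixel intensities; the small, far-offset cluster marks candidate errors.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    b0.reshape(-1, 1)).reshape(b0.shape)
minority = np.argmin(np.bincount(labels.ravel()))
error_mask = labels == minority        # pixels to reinitialize before re-solving
print("flagged pixels:", error_mask.sum())
```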
Paul, David R; McGrath, Ryan; Vella, Chantal A; Kramer, Matthew; Baer, David J; Moshfegh, Alanna J
2018-03-26
The National Health and Nutrition Examination Survey physical activity questionnaire (PAQ) is used to estimate activity energy expenditure (AEE) and moderate to vigorous physical activity (MVPA). Bias and variance in estimates of AEE and MVPA from the PAQ have not been described, nor has the impact of measurement error when utilizing the PAQ to predict biomarkers and categorize individuals. The PAQ was administered to 385 adults to estimate AEE (AEE:PAQ) and MVPA (MVPA:PAQ), while simultaneously measuring AEE with doubly labeled water (DLW; AEE:DLW) and MVPA with an accelerometer (MVPA:A). Although AEE:PAQ [3.4 (2.2) MJ·d⁻¹] was not significantly different from AEE:DLW [3.6 (1.6) MJ·d⁻¹; P > .14], MVPA:PAQ [36.2 (24.4) min·d⁻¹] was significantly higher than MVPA:A [8.0 (10.4) min·d⁻¹; P < .0001]. AEE:PAQ regressed on AEE:DLW and MVPA:PAQ regressed on MVPA:A yielded not only significant positive relationships but also large residual variances. The relationships between AEE and MVPA, and 10 of the 12 biomarkers, were underestimated by the PAQ. When compared with accelerometers, the PAQ overestimated the number of participants who met the Physical Activity Guidelines for Americans. Group-level bias in AEE:PAQ was small, but large for MVPA:PAQ. Poor within-participant estimates of AEE:PAQ and MVPA:PAQ lead to attenuated relationships with biomarkers and misclassification of participants who did or did not meet the Physical Activity Guidelines for Americans.
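The attenuation reported above is the classic regression-dilution effect of classical measurement error: with independent within-person error, the observed regression slope shrinks by the reliability ratio. In standard notation (generic, not estimated from this study's data):

$$
\hat{\beta}_{\mathrm{obs}} \approx \lambda\,\beta_{\mathrm{true}},
\qquad
\lambda = \frac{\sigma_{x}^{2}}{\sigma_{x}^{2} + \sigma_{e}^{2}},
$$

where σ²_x is the between-person variance of true activity and σ²_e is the error variance of the questionnaire estimate; λ < 1 whenever σ²_e > 0, which is one route by which poor within-participant estimates attenuate biomarker relationships.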
Plasma physics and the 2013-2022 decadal survey in solar and space physics
NASA Astrophysics Data System (ADS)
Baker, Daniel N.
2016-11-01
The U.S. National Academies established in 2011 a steering committee to develop a comprehensive strategy for solar and space physics research. This updated and extended the first (2003) solar and space physics decadal survey. The latest decadal study implemented a 2008 Congressional directive to NASA for the fields of solar and space physics, but also addressed research in other federal agencies. The new survey broadly canvassed the fields of research to determine the current state of the discipline, identified the most important open scientific questions, and proposed the measurements and means to obtain them so as to advance the state of knowledge during the years 2013-2022. Research in this field has sought to understand: dynamical behaviour of the Sun and its heliosphere; properties of the space environments of the Earth and other solar system bodies; multiscale interaction between solar system plasmas and the interstellar medium; and energy transport throughout the solar system and its impact on the Earth and other solar system bodies. Research in solar and space plasma processes using observation, theory, laboratory studies, and numerical models has offered the prospect of understanding this interconnected system well enough to develop a predictive capability for operational support of civil and military space systems. We here describe the recommendations and strategic plans laid out in the 2013-2022 decadal survey as they relate to measurement capabilities and plasma physical research. We assess progress to date. We also identify further steps to achieve the Survey goals with an emphasis on plasma physical aspects of the program.
NASA Astrophysics Data System (ADS)
Qin, Xuerong; van Sebille, Erik; Sen Gupta, Alexander
2014-04-01
Lagrangian particle tracking within ocean models is an important tool for the examination of ocean circulation, ventilation timescales and connectivity and is increasingly being used to understand ocean biogeochemistry. Lagrangian trajectories are obtained by advecting particles within velocity fields derived from hydrodynamic ocean models. For studies of ocean flows on scales ranging from mesoscale up to basin scales, the temporal resolution of the velocity fields should ideally not be more than a few days to capture the high frequency variability that is inherent in mesoscale features. However, in reality, the model output is often archived at much lower temporal resolutions. Here, we quantify the differences in the Lagrangian particle trajectories embedded in velocity fields of varying temporal resolution. Particles are advected from 3-day to 30-day averaged fields in a high-resolution global ocean circulation model. We also investigate whether adding lateral diffusion to the particle movement can compensate for the reduced temporal resolution. Trajectory errors reveal the expected degradation of accuracy in the trajectory positions when decreasing the temporal resolution of the velocity field. Divergence timescales associated with averaging velocity fields up to 30 days are faster than the intrinsic dispersion of the velocity fields but slower than the dispersion caused by the interannual variability of the velocity fields. In experiments focusing on the connectivity along major currents, including western boundary currents, the volume transport carried between two strategically placed sections tends to increase with increased temporal averaging. Simultaneously, the average travel times tend to decrease. Based on these two bulk measured diagnostics, Lagrangian experiments that use temporal averaging of up to nine days show no significant degradation in the flow characteristics for a set of six currents investigated in more detail. The addition of random-walk-style diffusion does not mitigate the errors introduced by temporal averaging for large-scale open ocean Lagrangian simulations.
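The random-walk diffusion tested in the abstract is conventionally added to the deterministic advection step as a Gaussian kick scaled by the diffusivity; a minimal 2D sketch with an analytic stand-in velocity field (not the ocean-model output used in the study, and in toy units):

```python
import numpy as np

rng = np.random.default_rng(5)

def velocity(x, y):
    """Analytic stand-in for a model velocity field (a steady gyre, toy units)."""
    return -np.sin(y), np.sin(x)

def advect(x, y, dt, K=1e-3):
    """One Euler advection step plus a random-walk kick of std sqrt(2*K*dt)."""
    u, v = velocity(x, y)
    kick = np.sqrt(2.0 * K * dt)
    return (x + u * dt + kick * rng.standard_normal(x.shape),
            y + v * dt + kick * rng.standard_normal(y.shape))

x = rng.uniform(0, 2 * np.pi, 1000)   # 1000 particles seeded uniformly
y = rng.uniform(0, 2 * np.pi, 1000)
for _ in range(500):                  # integrate trajectories
    x, y = advect(x, y, dt=0.05)
print("final particle spread (std):", float(x.std()), float(y.std()))
```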
Physical experience enhances science learning.
Kontra, Carly; Lyons, Daniel J; Fischer, Susan M; Beilock, Sian L
2015-06-01
Three laboratory experiments involving students' behavior and brain imaging and one randomized field experiment in a college physics class explored the importance of physical experience in science learning. We reasoned that students' understanding of science concepts such as torque and angular momentum is aided by activation of sensorimotor brain systems that add kinetic detail and meaning to students' thinking. We tested whether physical experience with angular momentum increases involvement of sensorimotor brain systems during students' subsequent reasoning and whether this involvement aids their understanding. The physical experience, a brief exposure to forces associated with angular momentum, significantly improved quiz scores. Moreover, improved performance was explained by activation of sensorimotor brain regions when students later reasoned about angular momentum. This finding specifies a mechanism underlying the value of physical experience in science education and leads the way for classroom practices in which experience with the physical world is an integral part of learning. © The Author(s) 2015.
High school students' representations and understandings of electric fields
NASA Astrophysics Data System (ADS)
Cao, Ying; Brizuela, Bárbara M.
2016-12-01
This study investigates the representations and understandings of electric fields expressed by Chinese high school students 15 to 16 years old who have not received high school level physics instruction. The physics education research literature has reported students' conceptions of electric fields postinstruction as indicated by students' performance on textbook-style questions. It has, however, inadequately captured student ideas expressed in other situations yet informative to educational research. In this study, we explore students' ideas of electric fields preinstruction as shown by students' representations produced in open-ended activities. 92 participant students completed a worksheet that involved drawing comic strips about electric charges as characters of a cartoon series. Three students who had spontaneously produced arrow diagrams were interviewed individually after class. We identified nine ideas related to electric fields that these three students spontaneously leveraged in the comic strip activity. In this paper, we describe in detail each idea and its situated context. As most research in the literature has understood students as having relatively fixed conceptions and mostly identified divergences in those conceptions from canonical targets, this study shows students' reasoning to be more variable in particular moments, and that variability includes common sense resources that can be productive for learning about electric fields.
2015-09-30
understanding of coastal oceanography by means of applying simple dynamical theories to high-quality observations obtained in the field. My primary ... area of expertise is physical oceanography, but I also enjoy collaborating with biological, chemical, acoustical, and optical oceanographers to work ...
On the Longitudinal Component of Paraxial Fields
ERIC Educational Resources Information Center
Carnicer, Artur; Juvells, Ignasi; Maluenda, David; Martinez-Herrero, Rosario; Mejias, Pedro M.
2012-01-01
The analysis of paraxial Gaussian beams features in most undergraduate courses in laser physics, advanced optics and photonics. These beams provide a simple model of the field generated in the resonant cavities of lasers, thus constituting a basic element for understanding laser theory. Usually, uniformly polarized beams are considered in the…
Magnetic field errors tolerances of Nuclotron booster
NASA Astrophysics Data System (ADS)
Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet
2018-04-01
Generation of the magnetic field in the units of the booster synchrotron for the NICA project is one of the most important conditions for achieving the required parameters and qualitative accelerator operation. Studies of the linear and nonlinear dynamics of the 197Au31+ ion beam in the booster have been carried out with the MADX program. An analytical estimate of the magnetic field error tolerances and a numerical computation of the dynamic aperture of the booster DFO magnetic lattice are presented. Closed orbit distortion due to random errors of the magnetic fields and errors in the layout of booster units was evaluated.
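The closed-orbit evaluation rests on a standard accelerator-physics estimate: uncorrelated random dipole kicks θ_i produce an rms orbit distortion set by the lattice functions and the tune (generic textbook formula, not the booster's specific numbers):

$$
\sqrt{\langle x_{\mathrm{co}}^{2}(s)\rangle}
= \frac{\sqrt{\beta(s)}}{2\sqrt{2}\,\lvert\sin \pi\nu\rvert}
\sqrt{\sum_{i}\beta_{i}\,\langle\theta_{i}^{2}\rangle},
$$

where β is the betatron function at the observation point and at the error locations, and ν is the betatron tune; the distortion diverges near integer tunes, which is why tolerance budgets depend on the working point.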
Enabling full-field physics-based optical proximity correction via dynamic model generation
NASA Astrophysics Data System (ADS)
Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas
2017-07-01
As extreme ultraviolet lithography becomes closer to reality for high volume production, its peculiar modeling challenges related to both inter and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
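The difference between the piecewise-constant and DMG approaches comes down to how model parameters vary with field position: nearest-neighbor assignment of static models jumps at region boundaries (the source of the edge placement discontinuities noted above), while per-position interpolation yields a near continuum. A toy illustration with a hypothetical calibrated parameter (not an actual OPC model term):

```python
import numpy as np

# Hypothetical calibration: an optical-model parameter sampled at 5 field positions (mm)
x_cal = np.array([0.0, 6.5, 13.0, 19.5, 26.0])
p_cal = np.array([0.010, 0.013, 0.011, 0.016, 0.014])

def piecewise_constant(x):
    """Static model per region: nearest calibrated value (jumps at boundaries)."""
    return p_cal[np.argmin(np.abs(x_cal - x))]

def dynamic(x):
    """Dynamic-model-generation idea: a unique interpolated model per position."""
    return np.interp(x, x_cal, p_cal)

for x in (3.2, 3.3):   # straddle a region boundary
    print(f"x={x} mm  piecewise={piecewise_constant(x):.4f}  dynamic={dynamic(x):.4f}")
```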
ERIC Educational Resources Information Center
Forster, Patricia A.
2004-01-01
Interpretation and construction of graphs are central to the study of physics and to performance in physics. In this paper, I explore the interpretation and construction processes called upon in questions with a graphical component, in Western Australian Physics Tertiary Entrance Examinations. In addition, I list errors made by students as…
Multi-Wavelength Imaging of Solar Plasma - High-Beta Disruption Model of Solar Flares -
NASA Astrophysics Data System (ADS)
Shibasaki, Kiyoto
The solar atmosphere is filled with plasma and magnetic field. Activities in the atmosphere are due to plasma instabilities in the magnetic field. To understand the physical mechanisms of activities/instabilities, it is necessary to know the physical conditions of the magnetized plasma, such as temperature, density, magnetic field, and their spatial structures and temporal development. Multi-wavelength imaging is essential for this purpose. Imaging observations of the Sun in the microwave, X-ray, EUV, and optical ranges are routinely carried out. Thanks to the free exchange of original data among the solar physics and related field communities, we can easily combine images covering a wide range of the spectrum. Even under such circumstances, we still do not understand the cause of activities in the solar atmosphere well. The current standard model of solar activities is based on magnetic reconnection: release of stored magnetic energy by reconnection is the cause of solar activities on the Sun, such as solar flares. However, recent X-ray, EUV and microwave observations with high spatial and temporal resolution show that dense plasma is involved in activities from the beginning. Based on these observations, I propose a high-beta model of solar activities, which is very similar to high-beta disruptions in magnetically confined fusion experiments.
The Magnetic Origins of Solar Activity
NASA Technical Reports Server (NTRS)
Antiochos, S. K.
2012-01-01
The defining physical property of the Sun's corona is that the magnetic field dominates the plasma. This property is the genesis for all solar activity ranging from quasi-steady coronal loops to the giant magnetic explosions observed as coronal mass ejections/eruptive flares. The coronal magnetic field is also the fundamental driver of all space weather; consequently, understanding the structure and dynamics of the field, especially its free energy, has long been a central objective in Heliophysics. The main obstacle to achieving this understanding has been the lack of accurate direct measurements of the coronal field. Most attempts to determine the magnetic free energy have relied on extrapolation of photospheric measurements, a notoriously unreliable procedure. In this presentation I will discuss what measurements of the coronal field would be most effective for understanding solar activity. Not surprisingly, the key process for driving solar activity is magnetic reconnection. I will discuss, therefore, how next-generation measurements of the coronal field will allow us to understand not only the origins of space weather, but also one of the most important fundamental processes in cosmic and laboratory plasmas.
Lee, Kuo Hao; Chen, Jianhan
2017-06-15
Accurate treatment of solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as using the generalized Born (GB) class of models arguably provides an optimal balance between computational efficiency and physical accuracy. Yet, GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of cancellation of errors. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized-Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias, and can generate structural ensembles of several intrinsically disordered proteins of various lengths that seem highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.
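For readers unfamiliar with the GB class of models, here is a minimal sketch of the generalized Born solvation energy in the classic Still et al. (1990) form. The charges, radii, and dielectric values are illustrative, and this is the textbook model, not the GBMV2 implementation optimized in the paper.

```python
import numpy as np

EPS_IN, EPS_OUT = 1.0, 78.5          # interior / solvent dielectric constants
COULOMB = 332.06                     # kcal*angstrom/(mol*e^2)

def gb_energy(q, r, a):
    """Generalized Born solvation energy, Still et al. form.

    q: partial charges (e), r: NxN distance matrix (angstrom),
    a: effective Born radii (angstrom). Diagonal terms (r=0) reduce
    to the Born self-energies since f_GB -> a_i."""
    pref = -0.5 * COULOMB * (1.0 / EPS_IN - 1.0 / EPS_OUT)
    e = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            aij = a[i] * a[j]
            f_gb = np.sqrt(r[i, j] ** 2 + aij * np.exp(-r[i, j] ** 2 / (4.0 * aij)))
            e += pref * q[i] * q[j] / f_gb
    return e

# Two opposite partial charges 3 angstroms apart, Born radii 1.5 angstrom (illustrative).
q = np.array([0.5, -0.5])
r = np.array([[0.0, 3.0], [3.0, 0.0]])
a = np.array([1.5, 1.5])
print(gb_energy(q, r, a))  # negative: favorable solvation
```

The "input radii" re-optimized in the paper enter precisely through the effective Born radii a, which is why their calibration so strongly shapes the balance of the force field.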
Halftoning and Image Processing Algorithms
1999-02-01
screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ...image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to...image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our
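The error diffusion referred to here is the classic technique in which each pixel's quantization error is pushed onto not-yet-processed neighbors. The sketch below is the standard Floyd-Steinberg variant, shown for grayscale as a generic illustration; it is not the report's algorithm.

```python
import numpy as np

def floyd_steinberg(img):
    """Classic error-diffusion halftoning: quantize each pixel to 0/1 and
    distribute the quantization error to unprocessed neighbors with the
    standard 7/16, 3/16, 5/16, 1/16 weights."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                out[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                out[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

# A smooth gray ramp dithers to a binary pattern whose local density tracks the input.
ramp = np.tile(np.linspace(0, 1, 64), (16, 1))
print(floyd_steinberg(ramp))
```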
Chae, Yoojin; Goodman, Gail S; Eisen, Mitchell L; Qin, Jianjian
2011-12-01
This study examined event memory and suggestibility in 3- to 16-year-olds involved in forensic investigations of child maltreatment. A total of 322 children were interviewed about a play activity with an unfamiliar adult. Comprehensive measures of individual differences in trauma-related psychopathology and cognitive functioning were administered. Sexually and/or physically abused children obtained higher dissociation scores than neglected children, and sexually abused children were more likely to obtain a diagnosis of posttraumatic stress disorder than physically abused children, neglected children, and children with no substantiated abuse histories. Overall, older children and children with better cognitive functioning produced more correct information and fewer memory errors. Abuse status per se did not significantly predict children's memory or suggestibility whether considered alone or in interaction with age. However, among highly dissociative children, more trauma symptoms were associated with greater inaccuracy, whereas trauma symptoms were not associated with increased error for children who were lower in dissociative tendencies. Implications of the findings for understanding eyewitness memory in maltreated children are discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wetterling, F.; Liehr, M.; Schimpf, P.; Liu, H.; Haueisen, J.
2009-09-01
The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.
NASA Astrophysics Data System (ADS)
Loepp, Susan; Wootters, William K.
2006-09-01
For many everyday transmissions, it is essential to protect digital information from noise or eavesdropping. This undergraduate introduction to error correction and cryptography is unique in devoting several chapters to quantum cryptography and quantum computing, thus providing a context in which ideas from mathematics and physics meet. By covering such topics as Shor's quantum factoring algorithm, this text informs the reader about current thinking in quantum information theory and encourages an appreciation of the connections between mathematics and science. Of particular interest are the potential impacts of quantum physics: (i) a quantum computer, if built, could crack our currently used public-key cryptosystems; and (ii) quantum cryptography promises to provide an alternative to these cryptosystems, basing its security on the laws of nature rather than on computational complexity. No prior knowledge of quantum mechanics is assumed, but students should have a basic knowledge of complex numbers, vectors, and matrices. Accessible to readers familiar with matrix algebra, vector spaces, and complex numbers; the first undergraduate text to cover cryptography, error correction, and quantum computation together; features exercises designed to enhance understanding, including a number of computational problems, available from www.cambridge.org/9780521534765.
Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes
NASA Astrophysics Data System (ADS)
Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.
2015-12-01
In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One implication is that uncertainties in future projections of time-mean changes to AAM rainfall may not have been reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, progress in monsoon modeling is rather slow. This leads us to wonder: has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is to better understand the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We will argue that a systematic and coordinated approach to identifying the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.
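The moist static energy whose budget is diagnosed here is the standard quantity (textbook definition, added for context):

$$h = c_p T + g z + L_v q$$

where $c_p$ is the specific heat of air at constant pressure, $T$ temperature, $g$ gravity, $z$ height, $L_v$ the latent heat of vaporization, and $q$ specific humidity.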
Emotion recognition in fathers and mothers at high-risk for child physical abuse.
Asla, Nagore; de Paúl, Joaquín; Pérez-Albéniz, Alicia
2011-09-01
The present study was designed to determine whether parents at high risk for physical child abuse, in comparison with parents at low risk, show deficits in emotion recognition, as well as to examine the moderator effect of gender and stress on the relationship between risk for physical child abuse and emotion recognition. Based on their scores on the Abuse Scale of the CAP Inventory (Milner, 1986), 64 parents at high risk (24 fathers and 40 mothers) and 80 parents at low risk (40 fathers and 40 mothers) for physical child abuse were selected. The Subtle Expression Training Tool/Micro Expression Training Tool (Ekman, 2004a, 2004b) and the Diagnostic Analysis of Nonverbal Accuracy II (Nowicki & Carton, 1993) were used to assess emotion recognition. As expected, parents at high risk, in contrast to parents at low risk, showed deficits in emotion recognition. However, differences between high- and low-risk participants were observed only for fathers, but not for mothers. Whereas fathers at high risk for physical child abuse made more errors than mothers at high risk, no differences between mothers at low risk and fathers at low risk were found. No interaction between stress, gender, and risk status was observed for errors in emotion recognition. The present findings, if confirmed with physical abusers, could be helpful to further our understanding of deficits in processing information of physically abusive parents and to develop treatment strategies specifically focused on emotion recognition. Moreover, if gender differences can be confirmed, the findings could be helpful to develop specific treatment programs for abusive fathers. Copyright © 2011 Elsevier Ltd. All rights reserved.
WE-D-204-02: Errors and Process Improvements in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fontenla, D.
2016-06-15
Speakers in this session will present overview and details of a specific rotation or feature of their Medical Physics Residency Program that is particularly exceptional and noteworthy. The featured rotations include foundational topics executed with exceptional acumen and innovative educational rotations perhaps not commonly found in Medical Physics Residency Programs. A site-specific clinical rotation will be described, where the medical physics resident follows the physician and medical resident for two weeks into patient consultations, simulation sessions, target contouring sessions, planning meetings with dosimetry, patient follow up visits, and tumor boards, to gain insight into the thought processes of the radiation oncologist. An incident learning rotation will be described where the resident learns about and practices evaluating clinical errors and investigates process improvements for the clinic. The residency environment at a Canadian medical physics residency program will be described, where the training and interactions with radiation oncology residents are integrated. And the first month rotation will be described, where the medical physics resident rotates through the clinical areas including simulation, dosimetry, and treatment units, gaining an overview of the clinical flow and meeting all the clinical staff to begin the residency program. This session will be of particular interest to residency programs who are interested in adopting or adapting these curricular ideas into their programs and to residency candidates who want to learn about programs already employing innovative practices. Learning Objectives: To learn about exceptional and innovative clinical rotations or program features within existing Medical Physics Residency Programs. To understand how to adopt/adapt innovative curricular designs into your own Medical Physics Residency Program, if appropriate.
Understanding Student Use of Differentials in Physics Integration Problems
ERIC Educational Resources Information Center
Hu, Dehui; Rebello, N. Sanjay
2013-01-01
This study focuses on students' use of the mathematical concept of differentials in physics problem solving. For instance, in electrostatics, students need to set up an integral to find the electric field due to a charged bar, an activity that involves the application of mathematical differentials (e.g., "dr," "dq"). In this…
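As a standard instance of the setup described (our illustration, consistent with the charged-bar example mentioned in the abstract): for a uniformly charged bar of length $L$ and total charge $Q$, the field at an axial point a distance $a$ from the near end is assembled from differential charge elements,

$$dq = \lambda\,dr = \frac{Q}{L}\,dr, \qquad dE = \frac{k\,dq}{r^2}, \qquad E = \int_a^{a+L} \frac{kQ/L}{r^2}\,dr = \frac{kQ}{a(a+L)},$$

and the study asks how students interpret the differentials $dr$ and $dq$ in precisely this kind of construction.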
ERIC Educational Resources Information Center
Berei, Catherine P.; Pratt, Erica; Parker, Melissa; Shephard, Kevin; Liang, Tanjian; Nampai, Udon; Neamphoka, Guntima
2017-01-01
Purpose: Scholarship is essential for the growth and development of the physical education field. Over time, scholarship expectations have changed, forcing faculty members to alter time spent for research, teaching, and service. Social-cognitive career theory (SCCT) presents a model for understanding performance and persistence in an occupational…
ERIC Educational Resources Information Center
Sirna, K.; Tinning, R.; Rossi, T.
2010-01-01
This paper examines Initial Teacher Education students' experiences of participation in health and physical education (HPE) subject department offices and the impact on their understandings and identity formation. Pierre Bourdieu's concepts of habitus, field, and practice along with Wenger's communities of practice form the theoretical frame used…
High School Girls' Negotiation of Perceived Self-Efficacy and Science Course Trajectories
ERIC Educational Resources Information Center
Patterson, Jill Voorhees; Johnson, Ane Turner
2017-01-01
Sustainability issues have led to increased demands for a STEM-literate society and workforce. Potential contributors need to be competent, have an understanding of earth and physical sciences, and be willing to pursue such fields. High school girls, however, remain underrepresented in physical science course enrollments (College Board, 2014).…
Quantum channels and memory effects
NASA Astrophysics Data System (ADS)
Caruso, Filippo; Giovannetti, Vittorio; Lupo, Cosmo; Mancini, Stefano
2014-10-01
Any physical process can be represented as a quantum channel mapping an initial state to a final state. Hence it can be characterized from the point of view of communication theory, i.e., in terms of its ability to transfer information. Quantum information provides a theoretical framework and the proper mathematical tools to accomplish this. In this context the notions of codes and communication capacities have been introduced by generalizing them from the classical Shannon theory of information transmission and error correction. The underlying assumption of this approach is to consider the channel as acting not on a single system but on sequences of systems, which, when properly initialized, allow one to overcome the noisy effects induced by the physical process under consideration. While most of the work produced so far has focused on the case in which a given channel transformation acts identically and independently on the various elements of the sequence (the memoryless configuration, in jargon), correlated error models appear to be a more realistic way to approach the problem. A slightly different, yet conceptually related, notion of correlated errors applies to a single quantum system which evolves continuously in time under the influence of an external disturbance that acts on it in a non-Markovian fashion. This leads to the study of memory effects in quantum channels: a fertile ground where interesting novel phenomena emerge at the intersection of quantum information theory and other branches of physics. This article surveys the field of quantum channel theory while also embracing these specific and complex settings.
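As a concrete instance of a memoryless channel acting identically and independently on each transmitted system, here is a minimal sketch of a depolarizing qubit channel in the Kraus representation; applying it independently to each qubit in a sequence is exactly the memoryless configuration described above. The noise strength is an arbitrary illustrative value.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing(rho, p):
    """Memoryless depolarizing channel: with probability p the qubit is hit
    by a random Pauli error. rho' = sum_k K_k rho K_k^dagger."""
    kraus = [np.sqrt(1 - 3 * p / 4) * I,
             np.sqrt(p / 4) * X,
             np.sqrt(p / 4) * Y,
             np.sqrt(p / 4) * Z]
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
print(depolarizing(rho, 0.2))  # mixed toward the maximally mixed state
```

A correlated (memory) channel would instead use Kraus operators acting jointly on consecutive qubits, so the noise on one transmission depends on its neighbors, which is the setting the survey emphasizes.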
Zavgorodni, S
2004-12-07
Inter-fraction dose fluctuations, which appear as a result of setup errors, organ motion and treatment machine output variations, may influence the radiobiological effect of the treatment even when the total delivered physical dose remains constant. The effect of these inter-fraction dose fluctuations on the biological effective dose (BED) has been investigated. Analytical expressions for the BED accounting for the dose fluctuations have been derived. The concept of biological effective constant dose (BECD) has been introduced. The equivalent constant dose (ECD), representing the constant physical dose that provides the same cell survival fraction as the fluctuating dose, has also been introduced. The dose fluctuations with Gaussian as well as exponential probability density functions were investigated. The values of BECD and ECD calculated analytically were compared with those derived from Monte Carlo modelling. The agreement between Monte Carlo modelled and analytical values was excellent (within 1%) for a range of dose standard deviations (0-100% of the dose) and the number of fractions (2 to 37) used in the comparison. The ECDs have also been calculated for conventional radiotherapy fields. The analytical expression for the BECD shows that BECD increases linearly with the variance of the dose. The effect is relatively small, and in the flat regions of the field it results in less than 1% increase of ECD. In the penumbra region of the 6 MV single radiotherapy beam the ECD exceeded the physical dose by up to 35%, when the standard deviation of combined patient setup/organ motion uncertainty was 5 mm. Equivalently, the ECD field was approximately 2 mm wider than the physical dose field. The difference between ECD and the physical dose is greater for normal tissues than for tumours.
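The linear-quadratic bookkeeping behind these results can be reproduced in a few lines. The analytic step below, E[αd + βd²] = αμ + β(μ² + σ²) per fraction, follows directly from the LQ model, which is why the biological effect grows linearly with the dose variance; the specific parameter values are illustrative, not taken from the paper.

```python
import numpy as np

alpha_beta = 3.0   # alpha/beta ratio, Gy (illustrative normal-tissue value)
n, mu = 30, 2.0    # number of fractions, mean dose per fraction (Gy)
sigma = 0.5        # std of the inter-fraction dose fluctuations (Gy)

def bed_constant(n, d):
    """Standard BED for n equal fractions of dose d."""
    return n * d * (1 + d / alpha_beta)

# Analytic: E[d^2] = mu^2 + sigma^2, so fluctuations add n*sigma^2/(alpha/beta).
bed_fluct_analytic = n * mu + n * (mu**2 + sigma**2) / alpha_beta

# Monte Carlo check with Gaussian per-fraction doses.
rng = np.random.default_rng(1)
d = rng.normal(mu, sigma, size=(200000, n))
bed_mc = np.mean(np.sum(d + d**2 / alpha_beta, axis=1))

print(bed_constant(n, mu), bed_fluct_analytic, bed_mc)
# The fluctuating-dose BED exceeds the constant-dose BED by n*sigma^2/(alpha/beta),
# i.e., it increases linearly with the variance, consistent with the abstract.
```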
Fault-tolerant quantum error detection.
Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher
2017-10-01
Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.
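The detection logic can be illustrated with the binary-symplectic bookkeeping used for stabilizer codes: an error is flagged when it anticommutes with a stabilizer. The sketch below uses the stabilizers XXXX and ZZZZ of the four-qubit [[4,2,2]] error-detecting code, a standard construction consistent with, though not copied from, the experiment.

```python
import numpy as np

# Pauli operators on 4 qubits in binary symplectic form (x-part | z-part).
def pauli(xs, zs):
    return np.array(xs + zs, dtype=int)

XXXX = pauli([1, 1, 1, 1], [0, 0, 0, 0])
ZZZZ = pauli([0, 0, 0, 0], [1, 1, 1, 1])
stabilizers = [XXXX, ZZZZ]

def syndrome(error):
    """Each syndrome bit is 1 iff the error anticommutes with that
    stabilizer: symplectic product x1.z2 + z1.x2 (mod 2)."""
    n = len(error) // 2
    ex, ez = error[:n], error[n:]
    return [(s[:n] @ ez + s[n:] @ ex) % 2 for s in stabilizers]

X1 = pauli([1, 0, 0, 0], [0, 0, 0, 0])  # bit flip on qubit 1
Z3 = pauli([0, 0, 0, 0], [0, 0, 1, 0])  # phase flip on qubit 3
Y2 = pauli([0, 1, 0, 0], [0, 1, 0, 0])  # Y error on qubit 2
for name, e in [("X1", X1), ("Z3", Z3), ("Y2", Y2)]:
    print(name, syndrome(e))  # every single-qubit error trips at least one bit
```

Any nontrivial syndrome signals that an error occurred, which is detection; correction would additionally require identifying which error, and the [[4,2,2]] code deliberately stops at detection.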
Metabolic disorders causing childhood ataxia.
Parker, Colette C; Evans, Owen B
2003-09-01
Ataxia is a common neurologic finding in many disease processes of the nervous system and has classically been associated with numerous metabolic disorders. An inborn error of metabolism should be considered when the ataxia is either intermittent or progressive. Acute exacerbation or worsening after high protein ingestion, concurrent febrile illness, or other physical stress is also suggestive. A positive family history can be an important diagnostic clue. Progress in molecular and biochemical techniques is revolutionizing this area of medicine, and there has been rapid advancement in the understanding of these disease processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
Magnetic fields are observed to exist on all scales in many astrophysical sources such as stars, galaxies, and accretion discs. Understanding the origin of large-scale magnetic fields, whereby the field emerges on spatial scales large compared to the fluctuations, has been a particularly long-standing challenge. Our physics objectives are: (1) What are the minimum ingredients for large-scale dynamo growth? (2) Could a large-scale magnetic field grow out of turbulence and be sustained despite the presence of dissipation? These questions are fundamental for understanding the large-scale dynamo in both laboratory and astrophysical plasmas. Here, we report major new findings in the area of large-scale dynamo (magnetic field generation).
Surface emission from neutron stars and implications for the physics of their interiors.
Ozel, Feryal
2013-01-01
Neutron stars are associated with diverse physical phenomena that take place in conditions characterized by ultrahigh densities as well as intense gravitational, magnetic and radiation fields. Understanding the properties and interactions of matter in these regimes remains one of the challenges in compact object astrophysics. Photons emitted from the surfaces of neutron stars provide direct probes of their structure, composition and magnetic fields. In this review, I discuss in detail the physics that governs the properties of emission from the surfaces of neutron stars and their various observational manifestations. I present the constraints on neutron star radii, core and crust composition, and magnetic field strength and topology obtained from studies of their broadband spectra, evolution of thermal luminosity, and the profiles of pulsations that originate on their surfaces.
An undulator based soft x-ray source for microscopy on the Duke electron storage ring
NASA Astrophysics Data System (ADS)
Johnson, Lewis Elgin
1998-09-01
This dissertation describes the design, development, and installation of an undulator-based soft x-ray source on the Duke Free Electron Laser laboratory electron storage ring. Insertion device and soft x-ray beamline physics and technology are all discussed in detail. The Duke/NIST undulator is a 3.64-m long hybrid design constructed by the Brobeck Division of Maxwell Laboratories. Originally built for an FEL project at the National Institute of Standards and Technology, the undulator was acquired by Duke in 1992 for use as a soft x-ray source for the FEL laboratory. Initial Hall probe measurements of the magnetic field distribution of the undulator revealed field errors of more than 0.80%. Initial phase errors for the device were more than 11 degrees. Through a series of in situ and off-line measurements and modifications we have re-tuned the magnetic field structure of the device to produce strong spectral characteristics through the 5th harmonic. A low operating K has served to reduce the effects of magnetic field errors on the harmonic spectral content. Although rms field errors remained at 0.75%, we succeeded in reducing phase errors to less than 5 degrees. Using trajectory simulations from magnetic field data, we have computed the spectral output given the interaction of the Duke storage ring electron beam and the NIST undulator. Driven by a series of concerns and constraints over maximum utility, personnel safety, and funding, we have also constructed a unique front-end beamline for the undulator. The front end has been designed for maximum throughput of the 1st harmonic around 40 Å in its standard mode of operation. The front end has an alternative mode of operation which transmits the 3rd and 5th harmonics. This compact system also allows for the extraction of some of the bend-magnet-produced synchrotron and transition radiation from the storage ring. As with any well designed front-end system, it also provides excellent protection to personnel and to the storage ring. A diagnostic beamline consisting of a transmission grating spectrometer and a scanning wire beam profile monitor was constructed to measure the spatial and spectral characteristics of the undulator radiation. Tests of the system with a circulating electron beam have confirmed the magnetic and focusing properties of the undulator and verified that it can be used without perturbing the orbit of the beam.
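The harmonic wavelengths being tuned here follow from the standard on-axis undulator equation (a textbook relation, not specific to this dissertation):

$$\lambda_n = \frac{\lambda_u}{2 n \gamma^2}\left(1 + \frac{K^2}{2} + \gamma^2\theta^2\right), \qquad n = 1, 3, 5, \ldots$$

where $\lambda_u$ is the undulator period, $\gamma$ the electron Lorentz factor, $K$ the deflection parameter, and $\theta$ the observation angle. A low operating $K$ keeps the $K^2/2$ term small, which is why it reduces the sensitivity of the harmonic content to residual field errors.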
Understanding human management of automation errors
McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.
2013-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042
Stochastic estimation of plant-available soil water under fluctuating water table depths
NASA Astrophysics Data System (ADS)
Or, Dani; Groeneveld, David P.
1994-12-01
Preservation of native valley-floor phreatophytes while pumping groundwater for export from Owens Valley, California, requires reliable predictions of plant water use. These predictions are compared with stored soil water within well field regions and serve as a basis for managing groundwater resources. Soil water measurement errors, variable recharge, unpredictable climatic conditions affecting plant water use, and modeling errors make soil water predictions uncertain and error-prone. We developed and tested a scheme based on soil water balance coupled with implementation of Kalman filtering (KF) for (1) providing physically based soil water storage predictions with prediction errors projected from the statistics of the various inputs, and (2) reducing the overall uncertainty in both estimates and predictions. The proposed KF-based scheme was tested using experimental data collected at a location on the Owens Valley floor where the water table was artificially lowered by groundwater pumping and later allowed to recover. Vegetation composition and per cent cover, climatic data, and soil water information were collected and used for developing a soil water balance. Predictions and updates of soil water storage under different types of vegetation were obtained for a period of 5 years. The main results show that: (1) the proposed predictive model provides reliable and resilient soil water estimates under a wide range of external conditions; (2) the predicted soil water storage and the error bounds provided by the model offer a realistic and rational basis for decisions such as when to curtail well field operation to ensure plant survival. The predictive model offers a practical means for accommodating simple aspects of spatial variability by considering the additional source of uncertainty as part of modeling or measurement uncertainty.
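A scalar version of the predict/update cycle at the heart of such a scheme looks like the following; the process model, noise magnitudes, and measurements are illustrative stand-ins, not the Owens Valley values.

```python
import numpy as np

def kalman_soil_water(s0, P0, et, recharge, z, Q=4.0, R=9.0):
    """Scalar Kalman filter for soil water storage S (mm).

    Predict: S_k = S_{k-1} - ET_k + recharge_k  (water balance)
    Update:  blend the prediction with a noisy soil water measurement z_k.
    Q, R: process and measurement error variances (mm^2, assumed)."""
    s, P, history = s0, P0, []
    for k in range(len(et)):
        # --- predict (water-balance model) ---
        s = s - et[k] + recharge[k]
        P = P + Q                      # model error grows the uncertainty
        # --- update (only when a soil water reading is available) ---
        if not np.isnan(z[k]):
            K = P / (P + R)            # Kalman gain
            s = s + K * (z[k] - s)     # correct toward the measurement
            P = (1 - K) * P            # measurement shrinks the uncertainty
        history.append((s, P))
    return history

et = np.full(12, 20.0)                       # monthly plant water use, mm
recharge = np.array([0, 0, 40, 60, 10, 0, 0, 0, 0, 0, 20, 30.0])
z = np.array([300, np.nan, np.nan, 330, np.nan, np.nan,
              250, np.nan, np.nan, np.nan, 210, np.nan])
for s, P in kalman_soil_water(300.0, 25.0, et, recharge, z):
    print(f"S = {s:6.1f} mm  +/- {np.sqrt(P):4.1f}")
```

The error bounds (the running variance P) are exactly the quantity the abstract describes as the rational basis for decisions such as curtailing well field operation.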
ERIC Educational Resources Information Center
Saleh, Salmiza
2012-01-01
Teachers of science-based education in Malaysian secondary schools, especially those in the field of physics, often find their students facing huge difficulties in dealing with conceptual ideas in physics, resulting thus in a lack of interest towards the subject. The aim of this study was to assess the effectiveness of the Brain-Based Teaching…
NASA Astrophysics Data System (ADS)
Bilardello, D.
2014-12-01
Understanding depositional remanent magnetizations (DRMs) bears implications for interpreting paleomagnetic and paleointensity records extracted from sedimentary rocks. Laboratory deposition experiments have yielded DRMs with shallow remanent inclinations and revealed a field dependence of the magnetization (M), which is orders of magnitude lower than the saturation remanence. To investigate these observations further, experiments involving differently shaped particles were performed. Spherical particles confirmed the field dependence of both the inclination error and M, and the fact that the DRM acquired experimentally is lower than saturation. A sediment concentration dependence of the inclination error was observed, indicating a dependence of the inclination error on the sediment load/burial depth or the sedimentation rate. Another outcome was the finding that spherical particles alone can lead to substantial inclination shallowing. Numerical simulations of settling spherical particles indicated that the DRM should be ~10 times lower than the saturation remanence and predicted that rolling of the grains on the sediment surface and particle interactions during settling can produce a substantial shallowing of the inclination and lowering of the remanence, bringing the simulations into close agreement with the experimental results. Experiments involving platy particles instead allowed interesting comparisons and gave insight into the behavior of differently shaped particles, for instance yielding smaller amounts of shallowing than spheres, in contrast to general belief. Viewing DRM as an anisotropic process allows fitting the experimental results with tensors (k_DRM). The ratios of k_vertical over k_horizontal are in good agreement with the ratios of M obtained in vertical over horizontal experimental fields, which should be equivalent to the widely used inclination shallowing factor f. Experimental results were highly repeatable, though not always equally repeatable for both M and inclination (direction) for both particle shapes, highlighting that while a sediment might carry a stable remanent direction, it may not always be a particularly good paleointensity recorder.
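The shallowing factor f mentioned at the end is the standard King (1955) relation between the applied-field inclination and the recorded DRM inclination (textbook form, added for context):

$$\tan I_{\mathrm{DRM}} = f \, \tan I_{\mathrm{field}}, \qquad 0 < f \le 1,$$

so f = 1 means no shallowing and smaller f means stronger flattening of the recorded inclination.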
NASA Astrophysics Data System (ADS)
Ebel, B. A.; Koch, J. C.; Walvoord, M. A.
2017-12-01
Boreal forest regions in interior Alaska, USA are subject to recurring wildfire disturbance and climate shifts. These "press" and "pulse" disturbances impact water, solute, carbon, and energy fluxes, with feedbacks and consequences that are not adequately characterized. The NASA Arctic Boreal Vulnerability Experiment (ABoVE) seeks to understand susceptibility to disturbance in boreal regions. Subsurface physical and hydraulic properties are among the largest uncertainties in cryohydrogeologic modeling aiming to predict impacts of disturbance in Arctic and boreal regions. We address this research gap by characterizing physical and hydraulic properties of soil across a gradient of sites covering disparate soil textures and wildfire disturbance in interior Alaska. Samples were collected in the field within the domain of the NASA ABoVE project and analyzed in the laboratory. Physical properties measured include soil organic matter fraction, soil-particle size distribution, dry bulk density, and saturated soil-water content. Hydraulic properties measured include soil-water retention and field-saturated hydraulic conductivity using tension infiltrometers (-1 cm applied pressure head). The physical and hydraulic properties provide the foundation for site conceptual model development, cryohydrogeologic model parameterization, and integration with geophysical data. This foundation contributes to the NASA ABoVE objectives of understanding the underlying physical processes that control vulnerability in Arctic and Boreal landscapes.
The electric field of a uniformly charged cubic shell
NASA Astrophysics Data System (ADS)
McCreery, Kaitlin; Greenside, Henry
2018-01-01
As an integrative and insightful example for undergraduates learning about electrostatics, we discuss how to use symmetry, Coulomb's law, superposition, Gauss's law, and visualization to understand the electric field E (x ,y ,z ) produced by a uniformly charged cubic shell. We first discuss how to deduce qualitatively, using freshman-level physics, the perhaps surprising fact that the interior electric field is nonzero and has a complex structure, pointing inwards from the middle of each face of the shell and pointing outwards towards each edge and corner. We then discuss how to understand the quantitative features of the electric field by plotting an analytical expression for E along symmetry lines and on symmetry surfaces of the shell.
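A brute-force superposition check of the interior field is easy to run. This sketch is our illustration, not the authors' code: it tiles the six faces with point charges and sums Coulomb contributions, in units where the Coulomb constant and total charge are both 1 and the cube has unit side.

```python
import numpy as np

def cube_shell_field(r_obs, n=40):
    """E field of a uniformly charged unit-cube shell (centered at the
    origin, k*Q = 1) by summing over n*n point charges per face."""
    u = (np.arange(n) + 0.5) / n - 0.5          # face coordinates in (-1/2, 1/2)
    a, b = np.meshgrid(u, u, indexing="ij")
    a, b = a.ravel(), b.ravel()
    half = np.full_like(a, 0.5)
    faces = [np.column_stack(c) for c in [
        ( half, a, b), (-half, a, b),           # faces x = +/- 1/2
        (a,  half, b), (a, -half, b),           # faces y = +/- 1/2
        (a, b,  half), (a, b, -half)]]          # faces z = +/- 1/2
    pts = np.vstack(faces)
    dq = 1.0 / len(pts)                         # equal charge per tile
    d = r_obs - pts                             # vectors from charges to r_obs
    r3 = np.sum(d**2, axis=1) ** 1.5
    return np.sum(dq * d / r3[:, None], axis=0)

print(cube_shell_field(np.array([0.0, 0.0, 0.0])))  # ~0 at the center, by symmetry
print(cube_shell_field(np.array([0.0, 0.0, 0.3])))  # nonzero: inward near a face center
print(cube_shell_field(np.array([0.3, 0.3, 0.0])))  # nonzero: outward toward an edge
```

Evaluating along the symmetry lines reproduces the qualitative picture in the abstract: the interior field vanishes only at the center, unlike the spherical shell.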
Structure and structure-preserving algorithms for plasma physics
NASA Astrophysics Data System (ADS)
Morrison, P. J.
2016-10-01
Conventional simulation studies of plasma physics are based on numerically solving the underpinning differential (or integro-differential) equations. Usual algorithms in general do not preserve known geometric structure of the physical systems, such as the local energy-momentum conservation law, Casimir invariants, and the symplectic structure (Poincaré invariants). As a consequence, numerical errors may accumulate coherently with time and long-term simulation results may be unreliable. Recently, a series of geometric algorithms that preserve the geometric structures resulting from the Hamiltonian and action principle (HAP) form of theoretical models in plasma physics have been developed by several authors. The superiority of these geometric algorithms has been demonstrated with many test cases. For example, symplectic integrators for guiding-center dynamics have been constructed to preserve the noncanonical symplectic structures and bound the energy-momentum errors for all simulation time-steps; variational and symplectic algorithms have been discovered and successfully applied to the Vlasov-Maxwell system, MHD, and other magnetofluid equations as well. Hamiltonian truncations of the full Vlasov-Maxwell system have opened the field of discrete gyrokinetics and led to the GEMPIC algorithm. The vision that future numerical capabilities in plasma physics should be based on structure-preserving geometric algorithms will be presented. It will be argued that the geometric consequences of HAP form and resulting geometric algorithms suitable for plasma physics studies cannot be adapted from existing mathematical literature but, rather, need to be discovered and worked out by theoretical plasma physicists. The talk will review existing HAP structures of plasma physics for a variety of models, and how they have been adapted for numerical implementation. Supported by DOE DE-FG02-04ER-54742.
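The coherent accumulation of energy error described above is easy to demonstrate on the simplest Hamiltonian system. In this sketch (ours, for illustration), explicit Euler's energy error grows without bound, while the symplectic variant keeps it bounded for arbitrarily long runs.

```python
import numpy as np

def explicit_euler(q, p, dt):          # generic, non-geometric integrator
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):        # preserves the symplectic structure
    p_new = p - dt * q                 # update momentum first...
    return q + dt * p_new, p_new       # ...then position with the new momentum

for step, name in [(explicit_euler, "explicit  "), (symplectic_euler, "symplectic")]:
    q, p = 1.0, 0.0                    # harmonic oscillator, H = (p^2 + q^2)/2
    for _ in range(100000):
        q, p = step(q, p, dt=0.01)
    print(name, "final energy:", 0.5 * (p**2 + q**2))  # exact value is 0.5
```

The explicit scheme's energy grows by a factor (1 + dt^2) each step, the kind of coherent drift the abstract warns about, whereas the symplectic scheme oscillates near the true value, which is the motivation for the structure-preserving algorithms surveyed in the talk.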
NASA Astrophysics Data System (ADS)
Louedec, Karim
2015-01-01
Astroparticle physics and cosmology allow us to scan the universe through multiple messengers. It is the combination of these probes that improves our understanding of the universe, both in its composition and in its dynamics. Unlike other areas of science, research in astroparticle physics is genuinely original in its detection techniques, in the locations of its infrastructure, and in the fact that the physical phenomena observed are not created directly by humans. It is these features that make the minimisation of statistical and systematic errors a perpetual challenge. In all these projects, the environment is turned into a detector medium or a target. The atmosphere is probably the environment component most common in astroparticle physics, and it requires continuous monitoring of its properties to minimise the associated systematic uncertainties as much as possible. This paper introduces the different atmospheric effects to take into account in astroparticle physics measurements and provides a non-exhaustive list of techniques and instruments to monitor the different elements composing the atmosphere. A discussion of the close link between astroparticle physics and Earth sciences ends this paper.
Error field detection in DIII-D by magnetic steering of locked modes
Shiraki, Daisuke; La Haye, Robert J.; Logan, Nikolas C.; ...
2014-02-20
Optimal correction coil currents for the n = 1 intrinsic error field of the DIII-D tokamak are inferred by applying a rotating external magnetic perturbation to steer the phase of a saturated locked mode with poloidal/toroidal mode number m/n = 2/1. The error field is detected non-disruptively in a single discharge, based on the toroidal torque balance of the resonant surface, which is assumed to be dominated by the balance of resonant electromagnetic torques. This is equivalent to the island being locked at all times to the resonant 2/1 component of the total of the applied and intrinsic error fields, such that the deviation of the locked mode phase from the applied field phase depends on the existing error field. The optimal set of correction coil currents is determined to be those currents which best cancel the torque from the error field, based on fitting of the torque balance model. The toroidal electromagnetic torques are calculated from experimental data using a simplified approach incorporating realistic DIII-D geometry, and including the effect of the plasma response on island torque balance based on the ideal plasma response to external fields. This method of error field detection is demonstrated in DIII-D discharges, and the results are compared with those based on the onset of low-density locked modes in ohmic plasmas. Furthermore, this magnetic steering technique presents an efficient approach to error field detection and is a promising method for ITER, particularly during initial operation when the lack of auxiliary heating systems makes established techniques based on rotation or plasma amplification unsuitable.
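At its core the inference is a complex vector sum: the island locks to the phase of the total (applied plus intrinsic) resonant field, so scanning the applied phase and recording the locked phase lets one fit the intrinsic component. The sketch below is our schematic reconstruction under this simplified picture, with made-up amplitudes and noise; it is not the DIII-D analysis code.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "experiment": an unknown intrinsic 2/1 error field to be recovered.
b_err_true = 3.0 * np.exp(1j * np.deg2rad(110.0))      # amplitude, phase (assumed)

phi_applied = np.deg2rad(np.arange(0, 360, 15))        # applied-field phase scan
b_applied = 10.0                                       # applied amplitude (assumed)
total = b_applied * np.exp(1j * phi_applied) + b_err_true
phi_locked = np.angle(total)                           # mode locks to the total field
phi_locked += np.deg2rad(2.0) * np.random.default_rng(0).standard_normal(len(phi_locked))

def residual(x):
    """Wrapped phase mismatch between model and 'measured' locked phases."""
    b_err = x[0] * np.exp(1j * x[1])
    model = np.angle(b_applied * np.exp(1j * phi_applied) + b_err)
    return np.angle(np.exp(1j * (model - phi_locked)))

fit = least_squares(residual, x0=[1.0, 0.0])
print("fitted error field: %.2f at %.1f deg" % (fit.x[0], np.rad2deg(fit.x[1]) % 360))
# Optimal correction coil currents would then cancel this fitted component.
```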
Kinetic Alfven wave explanation of the Hall signals in magnetic reconnection
NASA Astrophysics Data System (ADS)
Dai, L.; Wang, C.; Zhang, Y.; Duan, S.; Lavraud, B.; Burch, J. L.; Pollock, C.; Torbert, R. B.
2017-12-01
Magnetic reconnection is initiated in a small diffusion region but can drive global-scale dynamics in Earth's magnetosphere, solar flares, and astrophysical systems. Understanding the processes at work in the diffusion region remains a main challenge in space plasma physics. Recent in-situ observations from MMS and THEMIS reveal that the electric field normal to the reconnection current layer, often called the Hall electric field (En), is mainly balanced by the ion pressure gradient. Here we present theoretical explanations indicating that this observed fact is a manifestation of kinetic Alfven wave (KAW) physics. The ion pressure gradient represents the finite gyroradius effect of KAW, leading to ion intrusion across the magnetic field lines. Electrons stream along the magnetic field lines to track the ions, resulting in field-aligned currents and the associated pattern of the out-of-plane Hall magnetic field (Bm). The ratio En/Bm is on the order of the Alfven speed, as predicted by KAW theory. KAW physics further provides new perspectives on how ion intrusion may trigger electric fields suitable for reconnection to occur.
Incoherent averaging of phase singularities in speckle-shearing interferometry.
Mantel, Klaus; Nercissian, Vanusch; Lindlein, Norbert
2014-08-01
Interferometric speckle techniques are plagued by the omnipresence of phase singularities, impairing the phase unwrapping process. To reduce the number of phase singularities by physical means, an incoherent averaging of multiple speckle fields may be applied. It turns out, however, that the results may strongly deviate from the expected √N behavior. Using speckle-shearing interferometry as an example, we investigate the mechanism behind the reduction of phase singularities, both by calculations and by computer simulations. Key to an understanding of the reduction mechanism during incoherent averaging is the representation of the physical averaging process in terms of certain vector fields associated with each speckle field.
NASA Astrophysics Data System (ADS)
Lemon, C.; Chen, M.; O'Brien, T. P.; Toffoletto, F.; Sazykin, S.; Wolf, R.; Kumar, V.
2006-12-01
We present simulation results of the Rice Convection Model-Equilibrium (RCM-E) that test and compare the effect on the storm time ring current of varying the plasma sheet source population characteristics at 6.6 Re during magnetic storms. Previous work has shown that direct injection of ionospheric plasma into the ring current is not a significant source of ring current plasma, suggesting that the plasma sheet is the only source. However, storm time processes in the plasma sheet and inner magnetosphere are very complex, due in large part to the feedback interactions between the plasma distribution, magnetic field, and electric field. We are particularly interested in understanding the role of the plasma sheet entropy parameter (PV^{5/3}, where V = ∫ ds/B) in determining the strength and distribution of the ring current in both the main and recovery phases of a storm. Plasma temperature and density can be measured from geosynchronous orbiting satellites, and these are often used to provide boundary conditions for ring current simulations. However, magnetic field measurements in this region are less commonly available, and there is a relatively poor understanding of the interplay between the plasma and the magnetic field during magnetic storms. The entropy parameter is a quantity that incorporates both the plasma and the magnetic field, and understanding its role in ring current injection and recovery is essential to describing the processes that are occurring during magnetic storms. The RCM-E includes the physics of feedback between the plasma and both the electric and magnetic fields, and is therefore a valuable tool for understanding these complex storm-time processes. By contrasting the effects of different plasma boundary conditions at geosynchronous orbit, we shed light on the physical processes involved in ring current injection and recovery.
Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D
2015-10-08
Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.
NASA Astrophysics Data System (ADS)
Chegwidden, O.; Nijssen, B.; Pytlak, E.
2017-12-01
Any model simulation has errors, including errors in meteorological data, process understanding, model structure, and model parameters. These errors may express themselves as bias, timing lags, and differences in sensitivity between the model and the physical world. The evaluation and handling of these errors can greatly affect the legitimacy, validity and usefulness of the resulting scientific product. In this presentation we will discuss a case study of handling and communicating model errors during the development of a hydrologic climate change dataset for the Pacific Northwestern United States. The dataset was the result of a four-year collaboration between the University of Washington, Oregon State University, the Bonneville Power Administration, the United States Army Corps of Engineers and the Bureau of Reclamation. Along the way, the partnership facilitated the discovery of multiple systematic errors in the streamflow dataset. Through an iterative review process, some of those errors could be resolved. For the errors that remained, honest communication of the shortcomings promoted the dataset's legitimacy. Thoroughly explaining errors also improved ways in which the dataset would be used in follow-on impact studies. Finally, we will discuss the development of the "streamflow bias-correction" step often applied to climate change datasets that will be used in impact modeling contexts. We will describe the development of a series of bias-correction techniques through close collaboration among universities and stakeholders. Through that process, both universities and stakeholders learned about the others' expectations and workflows. This mutual learning process allowed for the development of methods that accommodated the stakeholders' specific engineering requirements. The iterative revision process also produced a functional and actionable dataset while preserving its scientific merit. We will describe how encountering earlier techniques' pitfalls allowed us to develop improved methods for scientists and practitioners alike.
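One widely used family of streamflow bias-correction methods is empirical quantile mapping, which replaces each simulated value with the observed value at the same quantile of the historical record. The sketch below is a generic illustration of that idea with synthetic data; it is not the specific technique developed by this partnership.

```python
import numpy as np

def quantile_map(sim_hist, obs_hist, sim_future):
    """Empirical quantile mapping: find each value's quantile in the
    simulated historical record, then read off the observed value at
    that same quantile."""
    quantiles = np.linspace(0, 1, 101)
    sim_q = np.quantile(sim_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # position of each future value within the simulated distribution...
    q = np.interp(sim_future, sim_q, quantiles)
    # ...mapped onto the observed distribution
    return np.interp(q, quantiles, obs_q)

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 500.0, 1000)          # "observed" flows, illustrative
sim = rng.gamma(2.0, 400.0, 1000) + 150.0  # biased model flows
future = rng.gamma(2.2, 400.0, 5) + 150.0
print(quantile_map(sim, obs, future))      # flows rescaled to the observed climatology
```

A known pitfall of naive quantile mapping, of the kind the iterative review process described above can surface, is that correcting each site and month independently can break mass balance and the timing relationships between upstream and downstream points.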
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis: it correctly partitions error to include model error as a component of variance, and it correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with the better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
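Fisher's method, named above, combines the p-values of independent single-phenomenology tests into one statistic; under H0, X = -2 Σ ln p_i is chi-squared with 2k degrees of freedom. A minimal sketch with made-up p-values:

```python
import math
from scipy import stats

def fisher_combined(p_values):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi^2 with 2k dof under H0."""
    x = -2.0 * sum(math.log(p) for p in p_values)
    dof = 2 * len(p_values)
    return x, stats.chi2.sf(x, dof)       # statistic and combined p-value

# e.g., an Ms:mb screening p-value and a depth screening p-value (illustrative)
x, p = fisher_combined([0.20, 0.04])
print(f"chi2 = {x:.2f}, combined p = {p:.3f}")
```

Tippett's test, also cited, instead takes the minimum of the p-values as the combined statistic; both are standard ways to fuse the Ms|mb and event-depth screens into a joint decision.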
Non-Markovian quantum Brownian motion in one dimension in electric fields
NASA Astrophysics Data System (ADS)
Shen, H. Z.; Su, S. L.; Zhou, Y. H.; Yi, X. X.
2018-04-01
Quantum Brownian motion is the random motion of quantum particles suspended in a field (or an effective field) resulting from their collisions with fast-moving modes of the field. It provides a fundamental model for understanding various physical features of open systems in chemistry, condensed-matter physics, biophysics, and optomechanics. In this paper, without either the Born-Markovian or rotating-wave approximation, we derive a master equation for a charged Brownian particle in one dimension coupled to a thermal reservoir in electric fields. The effect of the reservoir and of the electric fields is manifested as time-dependent coefficients and coherent terms, respectively, in the master equation. The two-photon correlation between the Brownian particle and the reservoir can induce nontrivial squeezing dynamics in the particle. We derive a current equation including the source from the driving fields, the transient current from the system flowing into the environment, and the two-photon current caused by the non-rotating-wave term. These results are then compared with those given by the rotating-wave approximation in the weak-coupling limit and are extended to a more general quantum network involving an arbitrary number of coupled Brownian particles. The formalism presented might open a way to an exact understanding of non-Markovian quantum networks.
Mathematics in chemistry: indeterminate forms and their meaning
NASA Astrophysics Data System (ADS)
Segurado, Manuel A. P.; Silva, Margarida F. B.; Castro, Rita
2011-07-01
The mathematical language and its tools are complementary to the formalism in chemistry, in particular at an advanced level. It is thus crucial, for its understanding, that students acquire a solid knowledge in Calculus and that they know how to apply it. The frequent occurrence of indeterminate forms in multiple areas, particularly in Physical Chemistry, justifies the need to properly understand the limiting process in such cases. This article emphasizes the importance of the L'Hôpital's rule as a practical tool, although often neglected, to obtain the more common indeterminate limits, through the use of some specific examples as the radioactive decay, spectrophotometric error, Planck's radiation law, second-order kinetics, or consecutive reactions.
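The consecutive-reactions case is a good illustration of the point: for A → B → C with rate constants $k_1$ and $k_2$, the intermediate concentration is indeterminate (0/0) as $k_2 \to k_1$, and L'Hôpital's rule (differentiating numerator and denominator with respect to $k_2$) resolves it. This is the standard textbook example, stated here for concreteness:

$$[B](t) = \frac{k_1 [A]_0}{k_2 - k_1}\left(e^{-k_1 t} - e^{-k_2 t}\right) \;\xrightarrow[\;k_2 \to k_1\;]{}\; k_1 [A]_0\, t\, e^{-k_1 t}$$

The limiting form is not merely formal: it is the correct kinetics when the two rate constants are equal, a case the general expression cannot be evaluated at directly.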
The Aharonov-Bohm effect and Tonomura et al. experiments: Rigorous results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballesteros, Miguel; Weder, Ricardo
The Aharonov-Bohm effect is a fundamental issue in physics. It describes the physically important electromagnetic quantities in quantum mechanics. Its experimental verification constitutes a test of the theory of quantum mechanics itself. The remarkable experiments of Tonomura et al. ['Observation of Aharonov-Bohm effect by electron holography', Phys. Rev. Lett. 48, 1443 (1982) and 'Evidence for Aharonov-Bohm effect with magnetic field completely shielded from electron wave', Phys. Rev. Lett. 56, 792 (1986)] are widely considered as the only experimental evidence of the physical existence of the Aharonov-Bohm effect. Here we give the first rigorous proof that the classical ansatz of Aharonov and Bohm of 1959 ['Significance of electromagnetic potentials in the quantum theory', Phys. Rev. 115, 485 (1959)], that was tested by Tonomura et al., is a good approximation to the exact solution to the Schroedinger equation. This also proves that the electron, that is, represented by the exact solution, is not accelerated, in agreement with the recent experiment of Caprez et al. in 2007 ['Macroscopic test of the Aharonov-Bohm effect', Phys. Rev. Lett. 99, 210401 (2007)], that shows that the results of the Tonomura et al. experiments can not be explained by the action of a force. Under the assumption that the incoming free electron is a Gaussian wave packet, we estimate the exact solution to the Schroedinger equation for all times. We provide a rigorous, quantitative error bound for the difference in norm between the exact solution and the Aharonov-Bohm ansatz. Our bound is uniform in time. We also prove that on the Gaussian asymptotic state the scattering operator is given by a constant phase shift, up to a quantitative error bound that we provide. Our results show that for intermediate size electron wave packets, smaller than the ones used in the Tonomura et al. experiments, quantum mechanics predicts the results observed by Tonomura et al. with an error bound smaller than 10^{-99}. It would be quite interesting to perform experiments with electron wave packets of intermediate size. Furthermore, we provide a physical interpretation of our error bound.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Elizabeth S.; Prosnitz, Robert G.; Yu Xiaoli
2006-11-15
Purpose: The aim of this study was to assess the impact of patient-specific factors, left ventricle (LV) volume, and treatment set-up errors on the rate of perfusion defects 6 to 60 months post-radiation therapy (RT) in patients receiving tangential RT for left-sided breast cancer. Methods and Materials: Between 1998 and 2005, a total of 153 patients were enrolled onto an institutional review board-approved prospective study and had pre- and serial post-RT (6-60 months) cardiac perfusion scans to assess for perfusion defects. Of the patients, 108 had normal pre-RT perfusion scans and available follow-up data. The impact of patient-specific factors on the rate of perfusion defects was assessed at various time points using univariate and multivariate analysis. The impact of set-up errors on the rate of perfusion defects was also analyzed using a one-tailed Fisher's Exact test. Results: Consistent with our prior results, the volume of LV in the RT field was the most significant predictor of perfusion defects on both univariate (p = 0.0005 to 0.0058) and multivariate analysis (p = 0.0026 to 0.0029). Body mass index (BMI) was the only significant patient-specific factor on both univariate (p = 0.0005 to 0.022) and multivariate analysis (p = 0.0091 to 0.05). In patients with very small volumes of LV in the planned RT fields, the rate of perfusion defects was significantly higher when the fields were set up 'too deep' (83% vs. 30%, p = 0.059). The frequency of deep set-up errors was significantly higher among patients with BMI ≥25 kg/m² compared with patients of normal weight (47% vs. 28%, p = 0.068). Conclusions: BMI ≥25 kg/m² may be a significant risk factor for cardiac toxicity after RT for left-sided breast cancer, possibly because of more frequent deep set-up errors resulting in the inclusion of additional heart in the RT fields. Further study is necessary to better understand the impact of patient-specific factors and set-up errors on the development of RT-induced perfusion defects.
Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leake, James E.; Linton, Mark G.; Schuck, Peter W., E-mail: james.e.leake@nasa.gov
Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the development of coronal models which are “data-driven” at the photosphere. We present an investigation to determine the feasibility and accuracy of such methods. Our validation framework uses a simulation of active region (AR) formation, modeling the emergence of magnetic flux from the convection zone to the corona, as a ground-truth data set, to supply both the photospheric information and to perform the validation of the data-driven method. We focus our investigation on how the accuracy of the data-driven model depends on the temporal frequency of the driving data. The Helioseismic and Magnetic Imager on NASA’s Solar Dynamics Observatory produces full-disk vector magnetic field measurements at a 12-minute cadence. Using our framework we show that ARs that emerge over 25 hr can be modeled by the data-driving method with only ∼1% error in the free magnetic energy, assuming the photospheric information is specified every 12 minutes. However, for rapidly evolving features, under-sampling of the dynamics at this cadence leads to a strobe effect, generating large electric currents and incorrect coronal morphology and energies. We derive a sampling condition for the driving cadence based on the evolution of these small-scale features, and show that higher-cadence driving can lead to acceptable errors. Future work will investigate the source of errors associated with deriving plasma variables from the photospheric magnetograms as well as other sources of errors, such as reduced resolution, instrument bias, and noise.
NASA Astrophysics Data System (ADS)
Avino, Fabio; Bovet, Alexandre; Fasoli, Ambrogio; Furno, Ivo; Gustafson, Kyle; Loizu, Joaquim; Ricci, Paolo; Theiler, Christian
2012-10-01
TORPEX is a basic plasma physics toroidal device located at the CRPP-EPFL in Lausanne. In TORPEX, a vertical magnetic field superposed on a toroidal field creates helicoidal field lines with both ends terminating on the torus vessel. We review recent advances in the understanding and control of electrostatic interchange turbulence, associated structures and their effect on suprathermal ions. These advances are obtained using high-resolution diagnostics of plasma parameters and wave fields throughout the whole device cross-section, fluid models and numerical simulations. Furthermore, we discuss future developments including the possibility of generating closed field line configurations with rotational transform using an internal toroidal wire carrying a current. This system will also allow the study of innovative fusion-relevant configurations, such as the snowflake divertor.
Promoting Physical Understanding through Peer Mentoring
NASA Astrophysics Data System (ADS)
Nossal, S. M.; Huesmann, A.; Hooper, E.; Moore, C.; Watson, L.; Trestrail, A.; Weber, J.; Timbie, P.; Jacob, A.
2015-12-01
The Physics Learning Center at the University of Wisconsin-Madison provides a supportive learning community for students studying introductory physics, as well as teaching and leadership experience for undergraduate Peer Mentor Tutors who receive extensive training and supervision. Many of our Peer Tutors were former Physics Learning Center participants. A central goal of the Physics Learning Center is to address achievement/equity gaps (e.g. race, gender, socio-economic status, disability, age, transfer status, etc.) for undergraduate students pursuing majors and coursework in STEM fields. Students meet twice a week in small learning teams of 3-8 students, facilitated by a trained Peer Mentor Tutor or staff member. These active learning teams focus on discussing core physical concepts and practicing problem-solving. The weekly training of the tutors addresses both teaching and mentoring issues in science education such as helping students to build confidence, strategies for assessing student understanding, and fostering a growth mindset. A second weekly training meeting addresses common misconceptions and strategies for teaching specific physics topics. For non-science majors we have a small Peer Mentor Tutor program for Physics in the Arts. We will discuss the Physics Learning Center's approaches to promoting inclusion, understanding, and confidence for both our participants and Peer Mentor Tutors, as well as examples from the geosciences that can be used to illustrate introductory physics concepts.
NASA Astrophysics Data System (ADS)
Busurin, V. I.; Brazhnikova, T. Yu; Korobkov, V. V.; Prokhorov, N. I.
1995-10-01
An analysis is made of a general basic configuration and of the transfer function of a fibre-optic transducer based on controlled coupling in a multilayer two-channel coaxial optical fibre. The influence of the structure parameters and of external factors on the errors of a sensitive element in such a transducer is considered. The results are given of an investigation of the characteristics of a number of transducers constructed in accordance with the basic configuration.
Constitutive parameter measurements of lossy materials
NASA Technical Reports Server (NTRS)
Dominek, A.; Park, A.
1989-01-01
The electrical constitutive parameters of lossy materials are considered. A discussion of the NRL arch for lossy coatings is presented, involving analysis of the reflected field using the geometrical theory of diffraction (GTD) and physical optics (PO). The actual values for these parameters can be obtained through a traditional transmission technique, which is examined from an error-analysis standpoint. Alternate sample geometries are suggested for this technique to reduce sample tolerance requirements for accurate parameter determination. The performance of one alternate geometry is given.
Comments on the Voigt function implementation in the Astropy and SpectraPlot.com packages
NASA Astrophysics Data System (ADS)
Schreier, Franz
2018-07-01
The Voigt profile is important for spectroscopy, astrophysics, and many other fields of physics, but is notoriously difficult to compute. McLean et al. (J. Electron Spectrosc. Relat. Phenom., 1994) have proposed an approximation using a sum of Lorentzians. Our assessment indicates that this algorithm has significant errors for small arguments. After a brief survey of the requirements for spectroscopy, we give a short list of both efficient and accurate codes and recommend implementations based on rational approximations.
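One accurate route in the rational-approximation family the authors recommend is the Faddeeva function. A minimal sketch using SciPy's wofz follows; this is a commonly used implementation, not necessarily one of the specific codes listed in the paper.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def voigt_K(x, y):
    """Voigt function K(x, y) = Re[w(x + iy)], with x the scaled frequency
    offset and y the ratio of Lorentzian to Gaussian (Doppler) widths."""
    return wofz(x + 1j * y).real

# Small-argument regime where sum-of-Lorentzians approximations degrade:
x = np.linspace(0.0, 0.5, 6)
print(voigt_K(x, 1e-4))
```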
Computational Physics' Greatest Hits
NASA Astrophysics Data System (ADS)
Bug, Amy
2011-03-01
The digital computer has worked its way so effectively into our profession that now, roughly 65 years after its invention, it is virtually impossible to find a field of experimental or theoretical physics unaided by computational innovation. It is tough to think of another device about which one can make that claim. In the session "What is computational physics?" speakers will distinguish computation within the field of computational physics from this ubiquitous importance across all subfields of physics. This talk will recap the invited session "Great Advances...Past, Present and Future" in which five dramatic areas of discovery (five of our "greatest hits") are chronicled: the physics of many-boson systems via Path Integral Monte Carlo, the thermodynamic behavior of a huge number of diverse systems via Monte Carlo Methods, the discovery of new pharmaceutical agents via molecular dynamics, predictive simulations of global climate change via detailed, cross-disciplinary earth system models, and an understanding of the formation of the first structures in our universe via galaxy formation simulations. The talk will also identify "greatest hits" in our field from the teaching and research perspectives of other members of DCOMP, including its Executive Committee.
Field errors in hybrid insertion devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlueter, R.D.
1995-02-01
Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.
How Robotics Programs Influence Young Women's Career Choices: A Grounded Theory Model
ERIC Educational Resources Information Center
Craig, Cecilia Dosh-Bluhm
2014-01-01
The fields of engineering, computer science, and physics have a paucity of women despite decades of intervention by universities and organizations. Women's graduation rates in these fields continue to stagnate, posing a critical problem for society. This qualitative grounded theory (GT) study sought to understand how robotics programs influenced…
Coral: A Hawaiian Resource. An Instructional Guidebook for Teachers.
ERIC Educational Resources Information Center
Fielding, Ann; Moniz, Barbara
Described are eight field trips to various sites on the Hawaiian island of Oahu. These experiences are designed to help teachers develop middle school students' awareness and understanding of Hawaii's natural resources, with particular emphasis upon coral. Each field trip unit contains a physical and biological description of the area and two to…
Understanding EFL Students' Errors in Writing
ERIC Educational Resources Information Center
Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti
2015-01-01
Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting the learners to successfully acquire writing skill, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors occurred in the writing of EFL students. It…
An error reduction algorithm to improve lidar turbulence estimates for wind energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew
Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine-learning methods in L-TERRA was highly dependent on the input variables and training dataset used, suggesting that machine learning may not be the best technique for reducing lidar turbulence intensity (TI) error. Future work will include the use of a lidar simulator to better understand how different factors affect lidar turbulence error and to determine how these errors can be reduced using information from a stand-alone lidar.
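To make the correction pipeline concrete, here is a schematic sketch of the kind of physics-based variance corrections described. The noise variance and volume-averaging factor are placeholder assumptions; L-TERRA estimates these terms from physical models and couples them with a trained machine-learning correction.

```python
import numpy as np

def corrected_ti(ws_samples, noise_var=0.05, volume_avg_factor=1.1):
    """Illustrative turbulence-intensity (TI) correction for a profiling lidar.

    noise_var         -- assumed instrument-noise contribution to variance (m^2/s^2)
    volume_avg_factor -- assumed variance lost to probe-volume averaging

    Both values are placeholders, not L-TERRA's actual corrections.
    """
    mean_ws = np.mean(ws_samples)
    var = np.var(ws_samples)
    var = max(var - noise_var, 0.0) * volume_avg_factor  # de-noise, then re-inflate
    return np.sqrt(var) / mean_ws  # TI = sigma_u / U

ws = np.random.default_rng(0).normal(8.0, 0.8, 600)  # synthetic 10-min record
print(f"corrected TI = {corrected_ti(ws):.3f}")
```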
Physical Processes and Real-Time Chemical Measurement of the Insect Olfactory Environment
Abrell, Leif; Hildebrand, John G.
2009-01-01
Odor-mediated insect navigation in airborne chemical plumes is vital to many ecological interactions, including mate finding, flower nectaring, and host locating (where disease transmission or herbivory may begin). After emission, volatile chemicals become rapidly mixed and diluted through physical processes that create a dynamic olfactory environment. This review examines those physical processes and some of the analytical technologies available to characterize those behavior-inducing chemical signals at temporal scales equivalent to the olfactory processing in insects. In particular, we focus on two areas of research that together may further our understanding of olfactory signal dynamics and its processing and perception by insects. First, measurement of physical atmospheric processes in the field can provide insight into the spatiotemporal dynamics of the odor signal available to insects. Field measurements in turn permit aspects of the physical environment to be simulated in the laboratory, thereby allowing careful investigation into the links between odor signal dynamics and insect behavior. Second, emerging analytical technologies with high recording frequencies and field-friendly inlet systems may offer new opportunities to characterize natural odors at spatiotemporal scales relevant to insect perception and behavior. Characterization of the chemical signal environment allows the determination of when and where olfactory-mediated behaviors may control ecological interactions. Finally, we argue that coupling of these two research areas will foster increased understanding of the physicochemical environment and enable researchers to determine how olfactory environments shape insect behaviors and sensory systems. PMID:18548311
NASA Astrophysics Data System (ADS)
Marco, F. J.; Martínez, M. J.; López, J. A.
2015-04-01
The high quality of Hipparcos data in position, proper motion, and parallax has allowed for studies of stellar kinematics with the aim of achieving a better physical understanding of our galaxy, based on accurate calculation of the Ogorodnikov-Milne model (OMM) parameters. The use of discrete least squares is the most common adjustment method, but it may lead to errors mainly because of the inhomogeneous spatial distribution of the data. We present an example of the instability of this method using the case of a function given by a linear combination of Legendre polynomials. These polynomials are basic in the use of vector spherical harmonics, which have been used to compute the OMM parameters by several authors, such as Makarov & Murphy, Mignard & Klioner, and Vityazev & Tsvetkov. To overcome the former problem, we propose the use of a mixed method (see Marco et al.) that includes the extension of the functions of residuals to any point on the celestial sphere. The goal is to be able to work with continuous variables in the calculation of the coefficients of the vector spherical harmonic developments with stability and efficiency. We apply this mixed procedure to the study of the kinematics of the stars in our Galaxy, employing the Hipparcos velocity field data to obtain the OMM parameters. Previously, we tested the method by perturbing the vector spherical harmonics model as well as the velocity vector field.
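The instability of discrete least squares under inhomogeneous sampling can be reproduced in miniature by fitting Legendre polynomials on clustered sample points. This toy demonstration is ours, for illustration; it is not the authors' mixed method.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
# Clustered (inhomogeneous) sampling of [-1, 1], mimicking uneven sky coverage.
x = np.concatenate([rng.uniform(-1.0, -0.6, 200), rng.uniform(0.8, 1.0, 20)])
y = legendre.legval(x, [0.0, 1.0, 0.5])           # true function: P1 + 0.5*P2
y_noisy = y + 0.01 * rng.normal(size=x.size)      # small observational noise

V = legendre.legvander(x, deg=8)                  # discrete least-squares design matrix
print("condition number:", np.linalg.cond(V))     # large => unstable fit
coef, *_ = np.linalg.lstsq(V, y_noisy, rcond=None)
print("recovered low-order coefficients:", coef[:3])  # can drift far from (0, 1, 0.5)
```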
The Storage Ring Proton EDM Experiment
NASA Astrophysics Data System (ADS)
Semertzidis, Yannis; Storage Ring Proton EDM Collaboration
2014-09-01
The storage ring pEDM experiment utilizes an all-electric storage ring to store ~10¹¹ longitudinally polarized protons simultaneously in clockwise and counterclockwise directions for 10³ seconds. The radial E-field acts on the proton EDM for the duration of the storage time to precess its spin in the vertical plane. The ring lattice is optimized to reduce intra-beam scattering, increase the statistical sensitivity and reduce the systematic errors of the method. The main systematic error is a net radial B-field integrated around the ring causing an EDM-like vertical spin precession. The counter-rotating beams sense this integrated field and are vertically shifted by an amount which depends on the strength of the vertical focusing in the ring, thus creating a radial B-field. Modulating the vertical focusing at 10 kHz makes possible the detection of this radial B-field by a SQUID magnetometer (SQUID-based BPM). For a total number of n SQUID-based BPMs distributed around the ring, the effectiveness of the method is limited to the N = n/2 harmonic of the background radial B-field due to the Nyquist sampling theorem limit. This limitation establishes the requirement to reduce the maximum radial B-field to 0.1-1 nT everywhere around the ring by layers of mu-metal and an aluminum vacuum tube. The method's sensitivity is 10⁻²⁹ e·cm, more than three orders of magnitude better than the present neutron EDM experimental limit, making it sensitive to a SUSY-like new physics mass scale of up to 300 TeV.
Fault-tolerant quantum error detection
Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher
2017-01-01
Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889
Wong, Aaron L; Shelhamer, Mark
2014-05-01
Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.
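As a flavor of the fractal time-series tools mentioned, the sketch below fits a power law to the autocorrelation of a synthetic trial series; the paper's analyses are more careful, and the data here are fabricated purely for illustration.

```python
import numpy as np

def autocorr(series, max_lag):
    """Sample autocorrelation of a trial-by-trial series up to max_lag."""
    s = series - series.mean()
    denom = np.dot(s, s)
    return np.array([np.dot(s[:-k], s[k:]) / denom for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
# Synthetic saccade-amplitude series; real data would come from the task.
trials = 0.05 * np.cumsum(rng.normal(size=500)) + rng.normal(size=500)
ac = autocorr(trials, 50)
# Long-range (power-law) correlations appear as a straight line in log-log space.
lags = np.arange(1, 51)
slope = np.polyfit(np.log(lags), np.log(np.abs(ac) + 1e-12), 1)[0]
print("log-log decay slope:", round(slope, 2))
```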
2013-01-01
Objectives Health information technology (HIT) research findings suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, the models and frameworks used to understand these new types of errors, the monitoring of such errors, and methods that can be used to prevent them. More research will be needed to better understand and mitigate these types of errors. PMID:23882411
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
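The flavor of the precision question can be reproduced with an ill-conditioned least-squares solve via QR at two floating-point precisions. The sketch below uses float32 versus float64 as a stand-in for double versus quadruple precision (NumPy's LAPACK bindings do not expose quad), and a Vandermonde matrix as a stand-in for the gravity mapping matrix.

```python
import numpy as np

def lstsq_qr(A, b):
    """Least-squares solution via QR factorization, mirroring the mapping step."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)

# Ill-conditioned toy design matrix standing in for the least-squares
# mapping matrix of a gravity solution.
A = np.vander(np.linspace(0.0, 1.0, 60), 12)
x_true = np.linspace(1.0, 2.0, 12)
b = A @ x_true

for dtype in (np.float32, np.float64):  # stand-ins for double vs. quad precision
    x = lstsq_qr(A.astype(dtype), b.astype(dtype))
    print(dtype.__name__, "max coefficient error:", np.max(np.abs(x - x_true)))
```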
Qualitative investigation into students' use of divergence and curl in electromagnetism
NASA Astrophysics Data System (ADS)
Bollen, Laurens; van Kampen, Paul; Baily, Charles; De Cock, Mieke
2016-12-01
Many students struggle with the use of mathematics in physics courses. Although typically well trained in rote mathematical calculation, they often lack the ability to apply their acquired skills to physical contexts. Such student difficulties are particularly apparent in undergraduate electrodynamics, which relies heavily on the use of vector calculus. To gain insight into student reasoning when solving problems involving divergence and curl, we conducted eight semistructured individual student interviews. During these interviews, students discussed the divergence and curl of electromagnetic fields using graphical representations, mathematical calculations, and the differential form of Maxwell's equations. We observed that while many students attempt to clarify the problem by making a sketch of the electromagnetic field, they struggle to interpret graphical representations of vector fields in terms of divergence and curl. In addition, some students confuse the characteristics of field line diagrams and field vector plots. By interpreting our results within the conceptual blending framework, we show how a lack of conceptual understanding of the vector operators and difficulties with graphical representations can account for an improper understanding of Maxwell's equations in differential form. Consequently, specific learning materials based on a multiple representation approach are required to clarify Maxwell's equations.
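The vector operators at issue can be checked symbolically. A small SymPy example of our own follows, pairing a source-like electric field (nonzero divergence) with a circulating magnetic field (nonzero curl).

```python
from sympy.vector import CoordSys3D, divergence, curl

R = CoordSys3D('R')
# A Coulomb-like radial field has nonzero divergence; a field circling
# the z-axis has nonzero curl.
E = R.x * R.i + R.y * R.j + R.z * R.k
B = -R.y * R.i + R.x * R.j

print(divergence(E))  # 3      -> sources present (Gauss's law picture)
print(curl(B))        # 2*R.k  -> circulation (Ampere's law picture)
```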
NASA Astrophysics Data System (ADS)
Mercer, Jason J.; Westbrook, Cherie J.
2016-11-01
Microform is important in understanding wetland functions and processes. But collecting imagery of and mapping the physical structure of peatlands is often expensive and requires specialized equipment. We assessed the utility of coupling computer vision-based structure from motion with multiview stereo photogrammetry (SfM-MVS) and ground-based photos to map peatland topography. The SfM-MVS technique was tested on an alpine peatland in Banff National Park, Canada, and guidance was provided on minimizing errors. We found that coupling SfM-MVS with ground-based photos taken with a point and shoot camera is a viable and competitive technique for generating ultrahigh-resolution elevations (i.e., <0.01 m, mean absolute error of 0.083 m). In evaluating 100+ viable SfM-MVS data collection and processing scenarios, vegetation was found to considerably influence accuracy. Vegetation class, when accounted for, reduced absolute error by as much as 50%. The logistical flexibility of ground-based SfM-MVS paired with its high resolution, low error, and low cost makes it a research area worth developing as well as a useful addition to the wetland scientists' toolkit.
How Do They Get Here?: Paths into Physics Education Research
ERIC Educational Resources Information Center
Barthelemy, Ramon S.; Henderson, Charles; Grunert, Megan L.
2013-01-01
Physics education research (PER) is a relatively new and rapidly growing area of Ph.D. specialization. To sustain the field of PER, a steady pipeline of talented scholars needs to be developed and supported. One aspect of building this pipeline is understanding how students come to graduate and postdoctoral work in PER and what their career goals…
An image-based approach to understanding the physics of MR artifacts.
Morelli, John N; Runge, Val M; Ai, Fei; Attenberger, Ulrike; Vu, Lan; Schmeets, Stuart H; Nitz, Wolfgang R; Kirsch, John E
2011-01-01
As clinical magnetic resonance (MR) imaging becomes more versatile and more complex, it is increasingly difficult to develop and maintain a thorough understanding of the physical principles that govern the changing technology. This is particularly true for practicing radiologists, whose primary obligation is to interpret clinical images and not necessarily to understand complex equations describing the underlying physics. Nevertheless, the physics of MR imaging plays an important role in clinical practice because it determines image quality, and suboptimal image quality may hinder accurate diagnosis. This article provides an image-based explanation of the physics underlying common MR imaging artifacts, offering simple solutions for remedying each type of artifact. Solutions that have emerged from recent technologic advances with which radiologists may not yet be familiar are described in detail. Types of artifacts discussed include those resulting from voluntary and involuntary patient motion, magnetic susceptibility, magnetic field inhomogeneities, gradient nonlinearity, standing waves, aliasing, chemical shift, and signal truncation. With an improved awareness and understanding of these artifacts, radiologists will be better able to modify MR imaging protocols so as to optimize clinical image quality, allowing greater confidence in diagnosis. Copyright © RSNA, 2011.
NASA Astrophysics Data System (ADS)
Penn, C. A.; Clow, D. W.; Sexstone, G. A.
2017-12-01
Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to misallocated water and productivity loss, costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of the factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990 and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a spatially distributed physics-based snow model was used to assess possible effects of land cover change on snowpack properties. Trends in forecast error are variable, while baseline model results show a consistent under-prediction in the recent decade, highlighting possible compounding effects of climate and land cover changes.
Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2005-01-01
The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.
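The Lagrange-multiplier machinery can be sketched as a generic equality-constrained least-squares solve in KKT form; the study's actual side constraints on the fair-weather field are more elaborate, so the constraint below is only a stand-in.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d via the KKT system
    [[A^T A, C^T], [C, 0]] [x; lam] = [A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # calibration coefficients; sol[n:] are the multipliers

# Toy example: fit 3 coefficients to mill readings while constraining their
# sum (a stand-in for a side constraint on the average fair-weather field).
rng = np.random.default_rng(3)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -0.5, 0.2])
b = A @ x_true + 0.01 * rng.normal(size=50)
x = constrained_lstsq(A, b, C=np.ones((1, 3)), d=np.array([x_true.sum()]))
print(x)
```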
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.
2002-01-01
The tropics and extratropics are two dynamically distinct regimes. The coupling between these two regimes often defies simple analytical treatment. Progress in understanding of the dynamical interaction between the tropics and extratropics relies on better observational descriptions to guide theoretical development. However, global analyses currently contain significant errors in primary hydrological variables such as precipitation, evaporation, moisture, and clouds, especially in the tropics. Tropical analyses have been shown to be sensitive to parameterized precipitation processes, which are less than perfect, leading to order-one discrepancies between estimates produced by different data assimilation systems. One strategy for improvement is to assimilate rainfall observations to constrain the analysis and reduce uncertainties in variables physically linked to precipitation. At the Data Assimilation Office at the NASA Goddard Space Flight Center, we have been exploring the use of tropical rain rates derived from the TRMM Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments in global data assimilation. Results show that assimilating these data improves not only rainfall and moisture fields but also related climate parameters such as clouds and radiation, as well as the large-scale circulation and short-range forecasts. These studies suggest that assimilation of microwave rainfall observations from space has the potential to significantly improve the quality of 4-D assimilated datasets for climate investigations (Hou et al. 2001). In the next few years, there will be a gradual increase in microwave rain products available from operational and research satellites, culminating in a target constellation of 9 satellites to provide global rain measurements every 3 hours with the proposed Global Precipitation Measurement (GPM) mission in 2007. Continued improvements in assimilation methodology, rainfall error estimates, and model parameterizations are needed to ensure that we derive maximum benefits from these observations.
1984-06-01
An adaptive method of lines with error control (report AD-A142 253; I. Babuška et al., Institute for Physical Science and Technology, University of Maryland). Reaction-diffusion processes occur in many branches of biology and physical chemistry. The report presents an adaptive method of lines for modeling reaction-diffusion phenomena; the primary goal of the adaptive method is to keep a particular norm of the space discretization error less than a prescribed tolerance.
NASA Astrophysics Data System (ADS)
Said, Asma
Despite the advances made in various fields, women remain underrepresented in science and mathematics. There is a gender gap regarding women's participation and achievement in physics. Self-efficacy and attitudes and beliefs toward physics have been identified as predictors of students' performance on conceptual surveys in physics courses. The present study, which used two-way analysis of variance and multiple linear regression analyses at a community college in California, revealed no gender gap in achievement between male and female students in physics courses. It did, however, reveal an achievement gap between students enrolled in algebra-based and calculus-based physics courses. The findings indicate that attitude and belief scores, but not self-efficacy scores, can be used as predictors of students' performance on conceptual surveys in physics courses.
Bound States and Field-Polarized Haldane Modes in a Quantum Spin Ladder.
Ward, S; Mena, M; Bouillot, P; Kollath, C; Giamarchi, T; Schmidt, K P; Normand, B; Krämer, K W; Biner, D; Bewley, R; Guidi, T; Boehm, M; McMorrow, D F; Rüegg, Ch
2017-04-28
The challenge of one-dimensional systems is to understand their physics beyond the level of known elementary excitations. By high-resolution neutron spectroscopy in a quantum spin-ladder material, we probe the leading multiparticle excitation by characterizing the two-magnon bound state at zero field. By applying high magnetic fields, we create and select the singlet (longitudinal) and triplet (transverse) excitations of the fully spin-polarized ladder, which have not been observed previously and are close analogs of the modes anticipated in a polarized Haldane chain. Theoretical modeling of the dynamical response demonstrates our complete quantitative understanding of these states.
Alignment of a vector magnetometer to an optical prism
NASA Astrophysics Data System (ADS)
Dietrich, M. R.; Bailey, K. G.; O'Connor, T. P.
2017-05-01
A method for alignment of a vector magnetometer to a rigidly attached prism is presented. This enables optical comparison of the magnetometer axes to physical surfaces in the apparatus, and thus an absolute determination of the magnetic field direction in space. This is in contrast with more common techniques, which focus on precise determination of the relative angles between magnetometer axes, and so are more suited to measuring differences in the direction of magnetic fields. Here we demonstrate precision better than 500 μrad on a fluxgate magnetometer, which also gives the coil orthogonality errors to a similar precision. The relative sensitivity of the three axes is also determined, with a precision of about 5 × 10⁻⁴.
Child maltreatment: a review of key literature in 2015.
Newton, Alice W
2016-06-01
This review addresses some of the more salient articles in the field of child maltreatment published in 2015, with a goal of helping the general practitioner understand the evolution of research in the field of child abuse pediatrics (a board-certified specialty since 2009). Researchers continue to refine the database for child abuse pediatrics. Several articles focus on the inconsistencies in approach to the evaluation of possible physical child abuse between hospitals and practitioners. Multiple researchers aim to develop a protocol that standardizes the response to findings of a sentinel injury, such as a rib fracture, abdominal trauma, or unexplained bruising in a nonambulatory infant. Professionals are also working to improve our understanding about the impact of trauma on children and how best to ameliorate its effects. With solid, evidence-based literature published on various topics in the field of child abuse pediatrics, experts work to refine and unify the clinician's approach to the evaluation of possible physical abuse.
Wells, Gary L
2008-02-01
The Illinois pilot program on lineup procedures has helped sharpen the focus on the types of controls that are needed in eyewitness field experiments and the limits that exist for interpreting outcome measures (rates of suspect and filler identifications). A widely-known limitation of field experiments is that, unlike simulated crime experiments, the guilt or innocence of the suspects is not easily known independently of the behavior of the eyewitnesses. Less well appreciated is that the rate of identification of lineup fillers, although clearly errors, can be a misleading measure if the filler identification rate is used to assess which of two or more lineup procedures is the better procedure. Several examples are used to illustrate that there are clearly improper procedures that would yield fewer identifications of fillers than would their proper counterparts. For example, biased lineup structure (e.g., using poorly matched fillers) as well as suggestive lineup procedures (that can result from non-blind administration of lineups) would reduce filler identification errors compared to unbiased and non-suggestive procedures. Hence, under many circumstances filler identification rates can be misleading indicators of preferred methods. Comparisons of lineup procedures in future field experiments will not be easily accepted in the absence of double-blind administration methods in all conditions plus true random assignment to conditions.
NASA Astrophysics Data System (ADS)
Utami, D. N.; Wulandari, H. R. T.
2016-11-01
The aim of this research is to detect misconceptions in physics at the high school level by using astronomy questions as a testing instrument. A misconception is defined as a thought or an idea that differs from what has been agreed upon by experts who are reliable in the field, and it is believed to interfere with the acquisition of new understanding and the integration of new knowledge or skills. While a lack of concepts or knowledge can be corrected by subsequent instruction and learning, students who hold misconceptions have to “unlearn” their misconception before learning a correct one. Therefore, the ability to differentiate between these two things becomes crucial. CRI (Certainty of Response Index) is one of the methods that can efficiently distinguish between misconceptions and lack of knowledge in students. This research used a quantitative-descriptive method with an ex-post-facto research approach. The test instrument consists of astronomy questions that require an understanding of physics concepts to solve. Astronomy questions were chosen in the expectation that they probe whether a concept can be applied across different fields of science. Based on the test results, misconceptions were found on several topics of physics. The test also revealed that students' ability to analyse a problem is still quite low.
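The CRI decision rule itself is compact enough to write down. The sketch below uses the 2.5 cutoff common in the CRI literature, which may differ from the threshold the authors adopted.

```python
def diagnose(answer_correct: bool, cri: float, threshold: float = 2.5):
    """Certainty of Response Index rule: a confident wrong answer signals a
    misconception; an unconfident wrong answer signals lack of knowledge.
    The 2.5 threshold follows common CRI practice and is an assumption here."""
    if answer_correct:
        return "sound understanding" if cri >= threshold else "lucky guess"
    return "misconception" if cri >= threshold else "lack of knowledge"

print(diagnose(False, 4.0))  # misconception
print(diagnose(False, 1.0))  # lack of knowledge
```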
NASA Astrophysics Data System (ADS)
Venema, Liesbeth; Verberck, Bart; Georgescu, Iulia; Prando, Giacomo; Couderc, Elsa; Milana, Silvia; Maragkou, Maria; Persechini, Lina; Pacchioni, Giulia; Fleet, Luke
2016-12-01
Quasiparticles are an extremely useful concept that provides a more intuitive understanding of complex phenomena in many-body physics. As such, they appear in various contexts, linking ideas across different fields and supplying a common language.
The Human Mind As General Problem Solver
NASA Astrophysics Data System (ADS)
Gurr, Henry
2011-10-01
Since leaving U Cal Irvine Neutrino Research, I have been a University Physics Teacher and an Informal Researcher of Human Functionality. My talk will share what I discovered about the best ways to learn, many of which are regularities that are to be expected from the Neuronal Network Properties announced in the publications of physicist John Joseph Hopfield. Hopfield's model of the mammalian brain-body provides a solid, instructive understanding of how best to Learn, Solve Problems, Live! With it we understand many otherwise puzzling features of our intellect! Examples: Why 1) analogies and metaphors are powerful in class instruction, ditto poems. 2) The best learning is done in physical (hands-on) situations with tight, immediate dynamical feedback, as seen in learning to ride a bike, drive a car, speak a language, etc. 3) Some of the best learning happens in seemingly random exploration: bumping around, trial and error. 4) Scientific discoveries happen, with no apparent effort, at odd moments. 5) Important discoveries DEPEND on considerable frustrating effort, then a Flash of Insight: AHA! EUREKA!
Hallbeck, M Susan; Koneczny, Sonja; Smith, Justine
2009-01-01
Controls for most technologies, including medical devices, are becoming increasingly complex, difficult to understand intuitively, and do not necessarily follow population stereotypes. The resulting delays and errors are unacceptable when seconds can mean the difference between life and death. In this study participants were asked to "control" a system using a paper prototype (color photographs of controls) and then a higher-fidelity prototype of the same physical controls, to determine performance differences among ethnicities and genders. No ethnic or gender differences were found, and the comparison of paper versus higher-fidelity prototypes also showed no significant differences. Thus, paper prototypes can be employed as an early device-design usability tool to illustrate stereotype violations long before the first physical prototype. This will not only save money in the development and design processes, but also ensures that even the most complex devices are intuitively understandable and operable for their basic functions.
Learning to Predict and Control the Physics of Our Movements
2017-01-01
When we hold an object in our hand, the mass of the object alters the physics of our arm, changing the relationship between motor commands that our brain sends to our arm muscles and the resulting motion of our hand. If the object is unfamiliar to us, our first movement will exhibit an error, producing a trajectory that is different from the one we had intended. This experience of error initiates learning in our brain, making it so that on the very next attempt our motor commands partially compensate for the unfamiliar physics, resulting in smaller errors. With further practice, the compensation becomes more complete, and our brain forms a model that predicts the physics of the object. This model is a motor memory that frees us from having to relearn the physics the next time that we encounter the object. The mechanism by which the brain transforms sensory prediction errors into corrective motor commands is the basis for how we learn the physics of objects with which we interact. The cerebellum and the motor cortex appear to be critical for our ability to learn physics, allowing us to use tools that extend our capabilities, making us masters of our environment. PMID:28202784
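A standard trial-by-trial (state-space) formalization of this error-driven learning is sketched below; it is offered as illustration, not as the article's own model, and the learning rate is an arbitrary assumption.

```python
def simulate_adaptation(n_trials=30, target=1.0, perturbation=0.3,
                        learning_rate=0.2):
    """Each trial, the motor output misses by an error e; the internal
    estimate of the object's physics is nudged by a fraction of that error,
    so errors shrink trial by trial."""
    estimate, errors = 0.0, []
    for _ in range(n_trials):
        error = (target + perturbation) - (target + estimate)  # movement error
        errors.append(error)
        estimate += learning_rate * error  # partial compensation each trial
    return errors

errs = simulate_adaptation()
print([round(e, 3) for e in errs[:5]])  # errors decay geometrically
```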
PREFACE: Focus section on Hadronic Physics
NASA Astrophysics Data System (ADS)
Roberts, Craig; Swanson, Eric
2007-07-01
Hadronic physics is the study of strongly interacting matter and its underlying theory, Quantum Chromodynamics (QCD). The field had its beginnings after World War Two, when hadrons were discovered in ever increasing numbers. Today, it encompasses topics like the quark-gluon structure of hadrons at varying scales, the quark-gluon plasma and hadronic matter at extreme temperature and density; it also underpins nuclear physics and has significant impact on particle physics, astrophysics, and cosmology. Among the goals of hadronic physics are to determine the parameters of QCD, understand the origin and characteristics of confinement, understand the dynamics and consequences of dynamical chiral symmetry breaking, explore the role of quarks and gluons in nuclei and in matter under extreme conditions, and understand the quark and gluon structure of hadrons. In general, the process is one of discerning the relevant degrees of freedom and relating these to the fundamental fields of QCD. The emphasis is on understanding QCD, rather than testing it. The papers gathered in this special focus section of Journal of Physics G: Nuclear and Particle Physics attempt to cover this broad range of subjects. Alkofer and Greensite examine the issue of quark and gluon confinement with the focus on models of the QCD vacuum, lattice gauge theory investigations, and the relationship to the AdS/CFT correspondence postulate. Arrington et al. review nucleon form factors and their role in determining quark orbital momentum, the strangeness content of the nucleon, meson cloud effects, and the transition from nonperturbative to perturbative QCD dynamics. The physics associated with hadronic matter at high temperature and density and at low Bjorken-x at the Relativistic Heavy Ion Collider (RHIC), the SPS at CERN, and at the future LHC is summarized by d'Enterria. The article by Lee and Smith examines experiment and theory associated with electromagnetic meson production from nucleons and illustrates how the structure of the nucleon is revealed. Reimer reviews how the Drell-Yan process can be used to explore the sea quark structure of nucleons, thereby probing such phenomena as flavour asymmetry in the nucleon and nuclear medium modification of nucleon properties. The exploitation of the B factories has led to a resurgence of interest in heavy quark spectroscopy. Concurrently, interest in light quark spectroscopy and gluonic excitations remains high, with several new experimental efforts in the planning or building stages. The current status of all of this is reviewed by Rosner. Finally, Vogelsang summarizes the status of polarized deep inelastic lepton-nucleon scattering experiments at RHIC and their impact on the theoretical understanding of nucleon helicity structure, gluon polarization in the nucleus, and transverse spin asymmetries. Of course, hadronic physics is a much broader subject than can be conveyed in this special focus section; advances in effective field theory, lattice gauge theory, generalised parton distributions and many other subfields are not covered here. Nevertheless, we hope that this focus section will help the reader appreciate the vitality, breadth of endeavour, and the phenomenological richness of hadronic physics.
Ideograms for Physics and Chemistry
NASA Astrophysics Data System (ADS)
García Risueño, Pablo; Syropoulos, Apostolos; Vergés, Natàlia
2016-12-01
Ideograms (symbols that represent a word or idea) have great communicative value. They refer to concepts in a simple manner, easing the understanding of related ideas. Moreover, ideograms can simplify the often cumbersome notation used in the fields of Physics and physical Chemistry. Nonetheless, only a few ideograms have been defined to date. In this work we propose that the scientific community follow the example of Mathematics—as well as that of oriental languages—and bestow a more important role upon ideograms. To support this thesis we propose ideograms for essential concepts in Physics and Chemistry. They are designed to be intuitive, and their goal is to make equations easier to read and understand. Our symbols are included in a publicly available LaTeX package (svrsymbols).
ERIC Educational Resources Information Center
Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley
2016-01-01
Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…
Mathematical and field analysis of longitudinal reservoir infill
NASA Astrophysics Data System (ADS)
Ke, W. T.; Capart, H.
2016-12-01
In reservoirs, severe problems are caused by infilled sediment deposits. Over the long term, sediment accumulation reduces reservoir storage capacity and flood-control benefits. In the short term, the sediment deposits influence the intakes for water supply and hydroelectricity generation. For reservoir management, it is important to understand the deposition process and to predict sedimentation in the reservoir. To investigate the behavior of sediment deposits, we propose a one-dimensional simplified theory, derived from the Exner equation, to predict the longitudinal sedimentation distribution in idealized reservoirs. The theory models the reservoir infill geomorphic actions for three scenarios: delta progradation, near-dam bottom deposition, and final infill. These yield three kinds of self-similar analytical solutions for the reservoir bed profiles, under different boundary conditions. The three analytical solutions are composed of the error function, the complementary error function, and the imaginary error function, respectively. The theory is also computed by the finite volume method to test the analytical solutions. The theoretical and numerical predictions are in good agreement with a one-dimensional small-scale laboratory experiment. As the theory is simple to apply, with analytical solutions and numerical computation, we propose some applications to simulate the long-profile evolution of field reservoirs, focusing on the infill sediment deposit volume that results in the uplift of near-dam bottom elevation. The field reservoirs introduced here are Wushe Reservoir, Tsengwen Reservoir, and Mudan Reservoir in Taiwan, Lago Dos Bocas in Puerto Rico, and Sakuma Dam in Japan.
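The role of the error-function family can be seen in the self-similar solution of the linear diffusion limit of the Exner equation. The sketch below is a generic diffusion solution under an assumed constant sediment diffusivity; the paper's boundary conditions differ by scenario.

```python
import numpy as np
from scipy.special import erfc

def delta_profile(x, t, nu=1.0e-3, eta0=1.0):
    """Self-similar bed elevation eta(x, t) = eta0 * erfc(x / sqrt(4 nu t))
    for sediment supplied at the reservoir inlet (x = 0).
    nu is an assumed constant sediment diffusivity (m^2/s)."""
    return eta0 * erfc(x / np.sqrt(4.0 * nu * t))

x = np.linspace(0.0, 500.0, 6)  # distance downstream of the inlet (m)
print(delta_profile(x, t=3.15e7))  # bed profile after ~1 year
```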
Live cell refractometry using Hilbert phase microscopy and confocal reflectance microscopy.
Lue, Niyom; Choi, Wonshik; Popescu, Gabriel; Yaqoob, Zahid; Badizadegan, Kamran; Dasari, Ramachandra R; Feld, Michael S
2009-11-26
Quantitative chemical analysis has served as a useful tool for understanding cellular metabolisms in biology. Among many physical properties used in chemical analysis, refractive index in particular has provided molecular concentration that is an important indicator for biological activities. In this report, we present a method of extracting full-field refractive index maps of live cells in their native states. We first record full-field optical thickness maps of living cells by Hilbert phase microscopy and then acquire physical thickness maps of the same cells using a custom-built confocal reflectance microscope. Full-field and axially averaged refractive index maps are acquired from the ratio of optical thickness to physical thickness. The accuracy of the axially averaged index measurement is 0.002. This approach can provide novel biological assays of label-free living cells in situ.
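The index extraction reduces to a per-pixel ratio. A sketch follows, assuming the standard phase-to-optical-thickness relation; the wavelength, medium index, and array values are hypothetical.

```python
import numpy as np

def refractive_index_map(phase_map, thickness_map, wavelength=633e-9,
                         n_medium=1.337):
    """Axially averaged cell index from a Hilbert-phase image and a confocal
    physical-thickness map: phi = (2*pi/lambda) * (n_cell - n_medium) * L,
    so n_cell = n_medium + phi * lambda / (2*pi*L). The medium index is an
    assumed value for culture medium."""
    optical_thickness = phase_map * wavelength / (2.0 * np.pi)
    return n_medium + optical_thickness / thickness_map

phi = np.full((4, 4), 6.0)    # measured phase in radians (illustrative)
L = np.full((4, 4), 8.0e-6)   # 8 um physical thickness (illustrative)
print(refractive_index_map(phi, L)[0, 0])  # ~1.41, plausible for a cell
```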
NASA Astrophysics Data System (ADS)
Park, Kiwan
2017-12-01
In our conventional understanding, large-scale magnetic fields are thought to originate from an inverse cascade in the presence of magnetic helicity, differential rotation or a magneto-rotational instability. However, as recent simulations have given strong indications that an inverse cascade (transfer) may occur even in the absence of magnetic helicity, the physical origin of this inverse cascade is still not fully understood. We here present two simulations of freely decaying helical and non-helical magnetohydrodynamic (MHD) turbulence. We verified the inverse transfer of helical and non-helical magnetic fields in both cases, but we found the underlying physical principles to be fundamentally different. In the former case, the helical magnetic component leads to an inverse cascade of magnetic energy. We derived a semi-analytic formula for the evolution of the large-scale magnetic field using the α coefficient and compared it with the simulation data. In the latter case, however, the α effect, like other conventional dynamo theories, is not suitable to describe the inverse transfer of non-helical magnetic energy. To obtain a better understanding of the physics at work here, we introduced a 'field structure model' based on the magnetic induction equation in the presence of inhomogeneities. This model illustrates how the curl of the electromotive force leads to the build-up of a large-scale magnetic field without the requirement of magnetic helicity. We then applied a quasi-normal approximation to the inverse transfer of magnetic energy.
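For the helical case, the familiar α description can be caricatured with the textbook kinematic α²-dynamo growth rate; this is a generic form for orientation, not the paper's derived semi-analytic formula.

```python
import numpy as np

def large_scale_growth(alpha=0.1, eta=0.01, k=1.0, B0=1e-6,
                       t=np.linspace(0.0, 50.0, 6)):
    """Kinematic alpha^2-dynamo caricature: dB/dt = (alpha*k - eta*k^2) B,
    so B(t) = B0 * exp((alpha*k - eta*k^2) t). Parameter values are
    illustrative; growth requires alpha*k > eta*k^2, which favors large
    scales (small k) in the helical case."""
    return B0 * np.exp((alpha * k - eta * k**2) * t)

print(large_scale_growth())
```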
Logan, Nikolas C.; Park, Jong-Kyu; Paz-Soldan, Carlos; ...
2016-02-05
This paper presents a single mode model that accurately predicts the coupling of applied nonaxisymmetric fields to the plasma response that induces neoclassical toroidal viscosity (NTV) torque in DIII-D H-mode plasmas. The torque is measured and modeled to have a sinusoidal dependence on the relative phase of multiple nonaxisymmetric field sources, including a minimum in which large amounts of nonaxisymmetric drive are decoupled from the NTV torque. This corresponds to the coupling and decoupling of the applied field to a NTV-driving mode spectrum. Modeling using the perturbed equilibrium nonambipolar transport (PENT) code confirms an effective single mode coupling between the applied field and the resultant torque, despite its inherent nonlinearity. Lastly, the coupling to the NTV mode is shown to have a similar dependence on the relative phasing as that of the IPEC dominant mode, providing a physical basis for the efficacy of this linear metric in predicting error field correction optima in NTV dominated regimes.
NASA Astrophysics Data System (ADS)
Logan, N. C.; Park, J.-K.; Paz-Soldan, C.; Lanctot, M. J.; Smith, S. P.; Burrell, K. H.
2016-03-01
This paper presents a single mode model that accurately predicts the coupling of applied nonaxisymmetric fields to the plasma response that induces neoclassical toroidal viscosity (NTV) torque in DIII-D H-mode plasmas. The torque is measured and modeled to have a sinusoidal dependence on the relative phase of multiple nonaxisymmetric field sources, including a minimum in which large amounts of nonaxisymmetric drive are decoupled from the NTV torque. This corresponds to the coupling and decoupling of the applied field to a NTV-driving mode spectrum. Modeling using the perturbed equilibrium nonambipolar transport (PENT) code confirms an effective single mode coupling between the applied field and the resultant torque, despite its inherent nonlinearity. The coupling to the NTV mode is shown to have a similar dependence on the relative phasing as that of the IPEC dominant mode, providing a physical basis for the efficacy of this linear metric in predicting error field correction optima in NTV dominated regimes.
Heterodyne range imaging as an alternative to photogrammetry
NASA Astrophysics Data System (ADS)
Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard
2007-01-01
Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.
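The core phase-to-range relation behind this kind of full-field ranging is the standard AMCW/heterodyne formula d = cφ/(4πf_mod); the sketch below (a generic textbook relation, not the authors' processing chain, with an assumed 40 MHz modulation frequency) shows why small phase errors translate into sub-millimetre range errors.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(phi, f_mod):
    """Distance (m) from a measured modulation phase shift phi (rad) at f_mod (Hz)."""
    return C * phi / (4.0 * np.pi * f_mod)

# Example: at 40 MHz modulation, a 1 mrad phase error maps to ~0.6 mm of range,
# illustrating why sub-millimetre precision demands very stable phase measurement.
print(phase_to_range(1e-3, 40e6) * 1e3, "mm")
```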
2016-01-01
Molecular mechanics force fields that explicitly account for induced polarization represent the next generation of physical models for molecular dynamics simulations. Several methods exist for modeling induced polarization, and here we review the classical Drude oscillator model, in which electronic degrees of freedom are modeled by charged particles attached to the nuclei of their core atoms by harmonic springs. We describe the latest developments in Drude force field parametrization and application, primarily in the last 15 years. Emphasis is placed on the Drude-2013 polarizable force field for proteins, DNA, lipids, and carbohydrates. We discuss its parametrization protocol, development history, and recent simulations of biologically interesting systems, highlighting specific studies in which induced polarization plays a critical role in reproducing experimental observables and understanding physical behavior. As the Drude oscillator model is computationally tractable and available in a wide range of simulation packages, it is anticipated that use of these more complex physical models will lead to new and important discoveries of the physical forces driving a range of chemical and biological phenomena. PMID:26815602
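A minimal sketch of the classical Drude picture described above may help (illustrative physics only, not the Drude-2013 parametrization): with spring potential U(d) = 0.5·k_D·d², the force balance q_D·E = k_D·d in a uniform field gives the displacement, the induced dipole μ = q_D·d, and hence the polarizability α = q_D²/k_D. All numerical values are placeholders.

```python
def drude_polarizability(q_d, k_d):
    """alpha = q_D**2 / k_D for the spring potential U(d) = 0.5 * k_D * d**2."""
    return q_d ** 2 / k_d

def induced_dipole(q_d, k_d, e_field):
    """Force balance q_D*E = k_D*d gives d; the induced dipole is mu = q_D*d = alpha*E."""
    return drude_polarizability(q_d, k_d) * e_field

# Illustrative (assumed) numbers in consistent arbitrary units:
print(drude_polarizability(q_d=-1.0, k_d=0.5))          # alpha = 2.0
print(induced_dipole(q_d=-1.0, k_d=0.5, e_field=0.1))   # mu = 0.2
```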
The MEMS process of a micro friction sensor
NASA Astrophysics Data System (ADS)
Yuan, Ming-Quan; Lei, Qiang; Wang, Xiong
2018-02-01
The research and testing techniques of friction sensors are an important support for hypersonic aircraft. Compared with conventional skin friction sensors, the MEMS skin friction sensor has the advantages of small size, high sensitivity, good stability and dynamic response. The MEMS skin friction sensor can be integrated with other flow-field sensors whose processes are compatible with it, to achieve multi-physical measurement of the flow field; and a micro-friction balance sensor array enables large-area, accurate measurement of the near-wall flow. A MEMS skin friction sensor structure is proposed whose sensing element is not in direct contact with the flow field. The MEMS fabrication process of the sensing element is described in detail. Thermal silicon oxide is used as the mask to solve the selectivity problem of silicon DRIE. The optimized process parameters of silicon DRIE are: etching power 1600 W, LF power 100 W; SF6 flux 360 sccm; C4F8 flux 300 sccm; O2 flux 300 sccm. With a Cr/Au mask, the etch depth of the glass shallow groove can be controlled in a 30 °C low-concentration HF solution; spray etching and wafer rotation improve the corrosion surface quality of the glass shallow groove. MEMS skin friction sensor samples were fabricated by the above MEMS process, and results show that the error in the length and width of the elastic cantilever is within 2 μm, the depth error of the shallow groove is less than 0.03 μm, and the static capacitance error is within 0.2 pF, which satisfies the design requirements.
Understanding behavioral responses of fish to pheromones in natural freshwater environments
Johnson, Nicholas S.; Li, Weiming
2010-01-01
There is an abundance of experimental studies and reviews that describe odorant-mediated behaviors of fish in laboratory microcosms, but research in natural field conditions has received considerably less attention. Fish pheromone studies in laboratory settings can be highly productive and allow for controlled experimental designs; however, laboratory tanks and flumes often cannot replicate all the physical, physiological and social contexts associated with natural environments. Field experiments can be a critical step in affirming and enhancing understanding of laboratory discoveries and often implicate the ecological significance of pheromones employed by fishes. When findings from laboratory experiments have been further tested in field environments, often different and sometimes contradictory conclusions are found. Examples include studies of sea lamprey (Petromyzon marinus) mating pheromones and fish alarm substances. Here, we review field research conducted on fish pheromones and alarm substances, highlighting the following topics: (1) contradictory results obtained in laboratory and field experiments, (2) how environmental context and physiological status influences behavior, (3) challenges and constraints of aquatic field research and (4) innovative techniques and experimental designs that advance understanding of fish chemical ecology through field research.
de Assis, Thiago A; Dall'Agnol, Fernando F
2018-05-16
Numerical simulations are important when assessing the many characteristics of field emission related phenomena. In small simulation domains, the electrostatic effect from the boundaries is known to influence the calculated apex field enhancement factor (FEF) of the emitter, but no established dependence has been reported at present. In this work, we report the dependence of the apex-FEF of a single conducting ellipsoidal emitter on the lateral size, L, and the height, H, of the simulation domain. Firstly, we analyze the error, ε, in the calculation of the apex-FEF as a function of H and L. Importantly, our results show that the effects of H and L on ε are scale invariant, allowing one to predict ε for given ratios L/h and H/h, where h is the height of the emitter. Next, we analyze the fractional change of the apex-FEF, δ, between a single emitter and a pair. We show that small relative errors in the apex-FEF, due to the finite domain size, are sufficient to alter the functional dependence of δ on the distance c between the emitters in the pair. We show that δ obeys a recently proposed power law decay (Forbes 2016 J. Appl. Phys. 120 054302) at sufficiently large distances in the limit of infinite domain size, which is not observed when using the long-established exponential decay (Bonard et al 2001 Adv. Mater. 13 184) or a more sophisticated fitting formula proposed recently by Harris et al (2015 AIP Adv. 5 087182). We show that the inverse-third power law functional dependence is respected for various systems, such as infinite arrays and small clusters of emitters with different shapes. Thus δ ∝ c^(-m), with m = 3, is suggested to be a universal signature of the charge-blunting effect in small clusters or arrays, at sufficiently large distances between emitters of any shape. These results improve the physical understanding of field electron emission theory, allowing emitters in small clusters or arrays to be accurately characterized.
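The power-law test described above can be sketched as a log-log fit (with synthetic placeholder data, not values from the paper): fit δ(c) ≈ A·c^(-m) and check the fitted exponent against the predicted m = 3.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([2.0, 4.0, 8.0, 16.0, 32.0])               # separations in units of h
delta = 0.9 * c ** -3.0 * (1.0 + 0.02 * rng.standard_normal(c.size))  # synthetic

# Linear least squares in log-log space: log|delta| = log A - m * log c
slope, log_a = np.polyfit(np.log(c), np.log(np.abs(delta)), 1)
m = -slope
print(f"fitted exponent m = {m:.2f} (charge-blunting signature expects m = 3)")
```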
Stress Recovery and Error Estimation for 3-D Shell Structures
NASA Technical Reports Server (NTRS)
Riggs, H. R.
2000-01-01
The stress fields obtained from finite element analyses are in general of lower-order accuracy than the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially for error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).
NASA Astrophysics Data System (ADS)
Hellwagner, Johannes; Sharma, Kshama; Tan, Kong Ooi; Wittmann, Johannes J.; Meier, Beat H.; Madhu, P. K.; Ernst, Matthias
2017-06-01
Pulse imperfections like pulse transients and radio-frequency field maladjustment or inhomogeneity are the main sources of performance degradation and limited reproducibility in solid-state nuclear magnetic resonance experiments. We quantitatively analyze the influence of such imperfections on the performance of symmetry-based pulse sequences and describe how they can be compensated. Based on a triple-mode Floquet analysis, we develop a theoretical description of symmetry-based dipolar recoupling sequences, in particular R26_4^11, calculating first- and second-order effective Hamiltonians using real pulse shapes. We discuss the various origins of effective fields, namely, pulse transients, deviation from the ideal flip angle, and fictitious fields, and develop strategies to counteract them for the restoration of full transfer efficiency. We compare experimental applications of transient-compensated pulses and an asynchronous implementation of the sequence to a supercycle, SR26, which is known to be efficient in compensating higher-order error terms. We are able to show the superiority of R26_4^11 compared to the supercycle SR26, given the ability to reduce experimental error on the pulse sequence by pulse-transient compensation and a complete theoretical understanding of the sequence.
NASA Technical Reports Server (NTRS)
Voorhies, C. V.; Langel, R. A.; Slavin, J.; Lancaster, E. R.; Jones, S.
1991-01-01
Prelaunch and postlaunch calibration plans for the APAFO magnetometer experiment are presented. A study of tradeoffs between boom length and spacecraft field is described; the results are summarized. The prelaunch plan includes: calibration of the Vector Fluxgate Magnetometer (VFM), Star Sensors, and Scalar Helium Magnetometer (SHM); optical bench integration; and acquisition of basic spacecraft field data. Postlaunch calibration has two phases. In phase one, SHM data are used to calibrate the VFM, total vector magnetic field data are used to calibrate a physical model of the spacecraft field, and both calibrations are refined by iteration. In phase two, corrected vector data are transformed into geocentric coordinates, previously undetected spacecraft fields are isolated, and initial geomagnetic field models are computed. Provided the SHM is accurate to the required 1.0 nT and can be used to calibrate the VFM to the required 3.0 nT accuracy, the tradeoff study indicates that a 12 m boom and a spacecraft field model uncertainty of 5 percent together allow the 1.0 nT spacecraft field error requirement to be met.
Getting in Touch with Our Feelings: The Emotional Geographies of Gender Relations in PETE
ERIC Educational Resources Information Center
Dowling, Fiona
2008-01-01
This paper attempts to illustrate how embodied ways of knowing may enhance our theoretical understanding within the field of physical education teacher education (PETE). It seeks to illustrate how teacher educators' viewpoints and understanding of gender relations are inevitably linked to socially constructed webs of emotions, as much as to…
Imaging plasmas at the Earth and other planets
NASA Astrophysics Data System (ADS)
Mitchell, D. G.
2006-05-01
The field of space physics, both at Earth and at other planets, was for decades a science based on local observations. By stitching together measurements of plasmas and fields from multiple locations, either simultaneously or for similar conditions over time, and by comparing those measurements against models of the physical systems, great progress was made in understanding the physics of Earth and planetary magnetospheres, ionospheres, and their interactions with the solar wind. However, the pictures of the magnetospheres were typically statistical, and the large-scale global models were poorly constrained by observation. This situation changed dramatically with global auroral imaging, which provided snapshots and movies of the effects of field-aligned currents and particle precipitation over the entire auroral oval during quiet and disturbed times. And with the advent of global energetic neutral atom (ENA) and extreme ultraviolet (EUV) imaging, global constraints have similarly been added to ring current and plasmaspheric models, respectively. Such global constraints on global models are very useful for validating the physics represented in those models: the physics of energy and momentum transport, electric and magnetic field distribution, and magnetosphere-ionosphere coupling. These techniques are also proving valuable at other planets. For example, with Hubble Space Telescope imaging of Jupiter and Saturn auroras, and ENA imaging at Jupiter and Saturn, we are gaining new insights into the magnetic fields, gas-plasma interactions, magnetospheric dynamics, and magnetosphere-ionosphere coupling at the giant planets. These techniques, especially ENA and EUV imaging, rely on very recent and evolving technological capabilities. And because ENA and EUV techniques apply to optically thin media, interpretation of their measurements requires sophisticated inversion procedures, which are still under development. We will discuss the directions new developments in imaging are taking, what technologies and mission scenarios might best take advantage of them, and how our understanding of the Earth's and other planets' plasma environments may benefit from such advancements.
NASA Astrophysics Data System (ADS)
Turner, Andrew; Bhat, Ganapati; Evans, Jonathan; Madan, Ranju; Marsham, John; Martin, Gill; Mitra, Ashis; Mrudula, Gm; Parker, Douglas; Pattnaik, Sandeep; Rajagopal, En; Taylor, Christopher; Tripathi, Sachchida
2017-04-01
The INCOMPASS project uses data from a field and aircraft measurement campaign during the 2016 monsoon onset to better understand and predict monsoon rainfall. The monsoon supplies the majority of water in South Asia; however, modelling and forecasting the monsoon from days to the season ahead is limited by large model errors that develop quickly. Likely problems lie in physical parametrizations such as convection, the boundary layer and the land surface. At the same time, a lack of detailed observations prevents a more thorough understanding of the monsoon circulation and its interaction with the land surface, a process governed by boundary layer and convective cloud dynamics. From May to July 2016, INCOMPASS used a modified BAe-146 jet aircraft operated by the UK Facility for Airborne Atmospheric Measurements (FAAM), in the first project of this scale in India. The India and UK team flew around 100 hours of science sorties from bases in northern and southern India. Flights from Lucknow in the northern plains took measurements to the west and southeast to allow sampling of the complete contrast from dry desert air to the humid environment over the north Bay of Bengal. These routes were repeated in the pre-monsoon and monsoon phases, measuring contrasting surface and boundary layer structures. In addition, flights from the southern base in Bengaluru measured contrasts from the Arabian Sea, across the intense rains of the Western Ghats mountains, over the rain shadow in southeast India and over the southern Bay of Bengal. Flight planning was performed with the aid of forecasts from a new UK Met Office 4 km limited-area model. INCOMPASS also installed a network of surface flux towers, as well as operating a cloud-base ceilometer and performing intensive radiosonde launches from a supersite in Kanpur. Here we outline preliminary results from the field campaign, including new observations of the surface, boundary layer structure and atmospheric profiles from aircraft data. We also include initial results from nested high-resolution modelling experiments of the 2016 monsoon, at a resolution of 4 km, in comparison with bespoke regional forecasts run throughout the field campaign.
Casting the Coronal Magnetic Field Reconstruction Tools in 3D Using the MHD Bifrost Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleishman, Gregory D.; Loukitcheva, Maria; Anfinogentov, Sergey
Quantifying the coronal magnetic field remains a central problem in solar physics. Nowadays, the coronal magnetic field is often modeled using nonlinear force-free field (NLFFF) reconstructions, whose accuracy has not yet been comprehensively assessed. Here we perform a detailed casting of the NLFFF reconstruction tools, such as π-disambiguation, photospheric field preprocessing, and volume reconstruction methods, using a 3D snapshot of the publicly available full-fledged radiative MHD model. Specifically, from the MHD model, we know the magnetic field vector in the entire 3D domain, which enables us to perform a "voxel-by-voxel" comparison of the restored and the true magnetic fields in the 3D model volume. Our tests show that the available π-disambiguation methods often fail in the quiet-Sun areas dominated by small-scale magnetic elements, while they work well in the active region (AR) photosphere and (even better) chromosphere. The preprocessing of the photospheric magnetic field, although it does produce a more force-free boundary condition, also results in some effective "elevation" of the magnetic field components. This "elevation" height is different for the longitudinal and transverse components, which results in a systematic error in absolute heights in the reconstructed magnetic data cube. The extrapolations performed starting from the actual AR photospheric magnetogram are free from this systematic error, while other metrics are comparable with those for extrapolations from the preprocessed magnetograms. This finding favors the use of extrapolations from the original photospheric magnetogram without preprocessing. Our tests further suggest that extrapolations from a force-free chromospheric boundary produce measurably better results than those from a photospheric boundary.
NASA Astrophysics Data System (ADS)
Abbasnezhadi, K.; Rasmussen, P. F.; Stadnyk, T.
2014-12-01
This study was undertaken to gain a better understanding of the spatiotemporal distribution of rainfall over the Churchill River basin. The research incorporates gridded precipitation data from the Canadian Precipitation Analysis (CaPA) system. CaPA has been developed by Environment Canada and provides near real-time precipitation estimates on a 10 km by 10 km grid over North America at a temporal resolution of 6 hours. The spatial fields are generated by combining forecasts from the Global Environmental Multiscale (GEM) model with precipitation observations from the network of synoptic weather stations. CaPA's skill is highly influenced by the number of weather stations in the region of interest as well as by the quality of the observations. To evaluate the performance of CaPA as a function of the density of the weather station network, we propose a dual-stage design algorithm to simulate CaPA that incorporates generated weather fields. More specifically, we adopt a controlled design algorithm generally known as an Observing System Simulation Experiment (OSSE). The advantage of using this experiment is that one can define reference precipitation fields assumed to represent the true state of rainfall over the region of interest. In the first stage of the defined OSSE, a coupled stochastic model of precipitation and temperature gridded fields is calibrated and validated. The performance of the generator is then validated by comparing model statistics with observed statistics and by using the generated samples as input to the WATFLOOD™ hydrologic model. In the second stage of the experiment, to account for the systematic error of station observations and GEM fields, representative errors are added to the reference field using by-products of CaPA's variographic analysis. These by-products describe the variance of station observations and background errors.
Systematic effects in the HfF+-ion experiment to search for the electron electric dipole moment
NASA Astrophysics Data System (ADS)
Petrov, A. N.
2018-05-01
The energy splittings of the J = 1, F = 3/2, |m_F| = 3/2 hyperfine levels of the 3Δ1 electronic state of the 180Hf19F+ ion are calculated as functions of the external variable electric and magnetic fields within two approaches. In the first one, the transition to the rotating frame is performed, whereas in the second approach, the quantization of the rotating electromagnetic field is performed. The calculations are required for understanding possible systematic errors in the experiment to search for the electron electric dipole moment (eEDM) with the 180Hf19F+ ion.
Semiconductor Characterization: from Growth to Manufacturing
NASA Astrophysics Data System (ADS)
Colombo, Luigi
The successful growth and/or deposition of materials for any application requires a basic understanding of the materials physics for a given device. At the beginning, the first and most obvious characterization tool is visual observation; this is particularly true for single crystal growth. The characterization tools are usually prioritized in order of ease of measurement, and have become especially sophisticated as we have moved from the characterization of macroscopic crystals and films to atomically thin materials and nanostructures. While a lot of attention is devoted to characterization and understanding of materials physics at the nano level, the characterization of single crystals as substrates or active components is still critically important. In this presentation, I will review and discuss the basic materials characterization techniques used to get at the materials physics needed to bring crystals and thin films from research to manufacturing in the fields of infrared detection, non-volatile memories, and transistors. Finally, I will present and discuss metrology techniques used to understand the physics and chemistry of atomically thin two-dimensional materials for future device applications.
On the Limitations of Thought Experiments in Physics and the Consequences for Physics Education.
ERIC Educational Resources Information Center
Reiner, Miriam; Burko, Lior M.
2003-01-01
Focuses on the role of Thought Experiments (TEs) in ongoing processes of conceptual refinement for physicists and physics learners. Analyzes TEs related to stellar evolution and general relativity. Identifies the stages at which crucial errors are made in these TEs and the cognitive processes which lead to these errors. Discusses implications for…
ERIC Educational Resources Information Center
Saarelainen, M.; Laaksonen, A.; Hirvonen, P. E.
2007-01-01
This study explores undergraduate students' understanding and reasoning models of electric and magnetic fields. The results indicate that the tested students had various alternative concepts in applying their reasoning to certain CSEM test questions. The total number of physics students tested at the beginning of the first course on…
An Analysis of Teachers' Concept Confusion Concerning Electric and Magnetic Fields
ERIC Educational Resources Information Center
Hekkenberg, Ans; Lemmer, Miriam; Dekkers, Peter
2015-01-01
In an exploratory study, 36 South African physical science teachers' understanding of basic concepts concerning electric and magnetic fields was studied from a perspective of possible concept confusion. Concept confusion is said to occur when features of one concept are incorrectly attributed to a different concept, in the case of this study to…
Simulating a transmon implementation of the surface code, Part II
NASA Astrophysics Data System (ADS)
O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo
The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimum-weight perfect matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and to the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
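For scale, a common back-of-envelope relation between physical and logical error rates in surface codes is the scaling ansatz p_L ≈ A·(p/p_th)^((d+1)/2); the sketch below uses assumed illustrative constants A and p_th, and is a rough heuristic, not the Surface-17 density-matrix simulation itself.

```python
# Heuristic logical error rate per round for a distance-d surface code.
# A = 0.1 and p_th = 0.0057 are assumed illustrative constants, not
# values from the study.

def logical_error_rate(p, d=3, p_th=0.0057, a=0.1):
    """p_L ~ A * (p / p_th) ** ((d + 1) // 2) for physical error rate p."""
    return a * (p / p_th) ** ((d + 1) // 2)

for p in (1e-3, 2e-3, 5e-3):
    print(f"p = {p:.0e}  ->  p_L ~ {logical_error_rate(p):.2e}")
```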
The effects of fatigue on performance in simulated nursing work.
Barker, Linsey M; Nussbaum, Maury A
2011-09-01
Fatigue is associated with increased rates of medical errors and healthcare worker injuries, yet existing research in this sector has not considered multiple dimensions of fatigue simultaneously. This study evaluated hypothesised causal relationships between mental and physical fatigue and performance. High and low levels of mental and physical fatigue were induced in 16 participants during simulated nursing work tasks in a laboratory setting. Task-induced changes in fatigue dimensions were quantified using both subjective and objective measures, as were changes in performance on physical and mental tasks. Completing the simulated work tasks increased total fatigue, mental fatigue and physical fatigue in all experimental conditions. Higher physical fatigue adversely affected measures of physical and mental performance, whereas higher mental fatigue had a positive effect on one measure of mental performance. Overall, these results suggest causal effects between manipulated levels of mental and physical fatigue and task-induced changes in mental and physical performance. STATEMENT OF RELEVANCE: Nurse fatigue and performance has implications for patient and provider safety. Results from this study demonstrate the importance of a multidimensional view of fatigue in understanding the causal relationships between fatigue and performance. The findings can guide future work aimed at predicting fatigue-related performance decrements and designing interventions.
Physics of Gravitational Interaction: Geometry of Space or Quantum Field in Space
NASA Astrophysics Data System (ADS)
Baryshev, Yurij
2006-03-01
Thirring-Feynman's tensor field approach to gravitation opens a new understanding of the physics of gravitational interaction and stimulates novel experiments on the nature of gravity. According to Field Gravity, the universal gravity force is caused by the exchange of gravitons - the quanta of the gravity field. The energy of this field is well-defined and excludes the singularity. All classical relativistic effects are the same as in General Relativity. The intrinsic scalar (spin 0) part of the gravity field corresponds to "antigravity" and only together with the pure tensor (spin 2) part gives the usual Newtonian force. Laboratory and astrophysical experiments that may test the predictions of FG will be performed in the near future. In particular, observations at gravity observatories with bar and interferometric detectors, like Explorer, Nautilus, LIGO and VIRGO, will check the predicted scalar gravitational waves from supernova explosions. New types of cosmological models in Minkowski space are possible too.
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling along the satellite orbit induces aliasing of high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which directly alias into the recovered gravity field. The GRACE satellites fly in a non-repeat orbit, which precludes alias-error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which results in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
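The first aliasing mechanism can be illustrated with the standard frequency-folding formula: a tide of frequency f sampled every Δt is indistinguishable from the folded frequency |f − round(f·Δt)/Δt|. The sketch below is generic (daily sampling of the M2 tide as an illustrative example), not the GRACE-specific sampling analysis.

```python
def alias_period_days(tide_period_hours, sample_interval_days):
    """Alias period (days) of a tide sampled every sample_interval_days."""
    f = 24.0 / tide_period_hours                        # tidal frequency, cycles/day
    f_alias = abs(f - round(f * sample_interval_days) / sample_interval_days)
    return float("inf") if f_alias == 0.0 else 1.0 / f_alias

# M2 (period 12.4206012 h) sampled once per day folds to roughly 14.8 days:
print(f"M2 alias under daily sampling: {alias_period_days(12.4206012, 1.0):.2f} d")
```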
Sherwood Washburn's New physical anthropology: rejecting the "religion of taxonomy".
Mikels-Carrasco, Jessica
2012-01-01
Many physical anthropologists and nearly all of those studying primatology today can trace their academic genealogy to Sherwood Larned Washburn. His New physical anthropology, fully articulated in a 1951 paper, proposed that the study of hominid evolution must link understandings of form, function, and behavior along with the environment in order most accurately to reconstruct the evolution of our ancestors. This shift of concentration from strictly analyzing fossil remains to what Washburn termed adaptive complexes challenged not only Washburn's predecessors, but also led Washburn to critique the very system of academia within which he worked. Collaboration across multiple disciplines, linking the four fields of anthropology in order to understand humans and application of our understandings of human evolution to the betterment of society, are the hallmarks of Washburnian anthropology. In this paper I will explore how Washburn's New physical anthropology led him to not only change the research direction in physical anthropology, but also to challenge the academia within which he worked. I will conclude by reflecting on the prospects of continuing to practice Washburnian Anthropology.
Search for an Electric Dipole Moment (EDM) of 199Hg
NASA Astrophysics Data System (ADS)
Heckel, Blayne
2017-04-01
The observation of a non-zero EDM of an atom or elementary particle, at current levels of experimental sensitivity, would imply CP violation beyond the CKM matrix of the standard model of particle physics. Additional sources of CP violation have been proposed to help explain the excess of matter over anti-matter in our universe, and the magnitude of ΘQCD, the strength of CP violation in the strong interaction, remains unknown. We have recently completed a set of measurements of the EDM of 199Hg, sensitive to both new sources of CP violation and ΘQCD. The experiment compares the phase accumulated by precessing Hg spins in vapor cells with electric fields parallel and anti-parallel to a common magnetic field. Our new result represents a factor of 5 improvement over previous results. A description of the EDM experiment, the data, and systematic error considerations will be presented. This work was supported by NSF Grant No. 1306743 and by the DOE Office of Nuclear Physics under Award No. DE-FG02-97ER41020.
Practices to enable the geophysical research spectrum: from fundamentals to applications
NASA Astrophysics Data System (ADS)
Kang, S.; Cockett, R.; Heagy, L. J.; Oldenburg, D.
2016-12-01
In a geophysical survey, a source injects energy into the earth and a response is measured. These physical systems are governed by partial differential equations and their numerical solutions are obtained by discretizing the earth. Geophysical simulations and inversions are tools for understanding physical responses and constructing models of the subsurface given a finite amount of data. SimPEG (http://simpeg.xyz) is our effort to synthesize geophysical forward and inverse methodologies into a consistent framework. The primary focus of our initial development has been on the electromagnetics (EM) package, with recent extensions to magnetotelluric, direct current (DC), and induced polarization. Across these methods, and applied geophysics in general, we require tools to explore and build an understanding of the physics (behaviour of fields, fluxes), and work with data to produce models through reproducible inversions. If we consider DC or EM experiments, with the aim of understanding responses from subsurface conductors, we require resources that provide multiple "entry points" into the geophysical problem. To understand the physical responses and measured data, we must simulate the physical system and visualize electric fields, currents, and charges. Performing an inversion requires that many moving pieces be brought together: simulation, physics, linear algebra, data processing, optimization, etc. Each component must be trusted, accessible to interrogation and manipulation, and readily combined in order to enable investigation into inversion methodologies. To support such research, we not only require "entry points" into the software, but also extensibility to new situations. In our development of SimPEG, we have sought to use leading practices in software development with the aim of supporting and promoting collaborations across a spectrum of geophysical research: from fundamentals to applications. Designing software to enable this spectrum puts unique constraints on both the architecture of the codebase as well as the development practices that are employed. In this presentation, we will share some lessons learned and, in particular, how our prioritization of testing, documentation, and refactoring has impacted our own research and fostered collaborations.
Is flat fielding safe for precision CCD astronomy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumer, Michael; Davis, Christopher P.; Roodman, Aaron
The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.
Is flat fielding safe for precision CCD astronomy?
Baumer, Michael; Davis, Christopher P.; Roodman, Aaron
2017-07-06
The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.
Steps towards a consistent Climate Forecast System Reanalysis wave hindcast (1979-2016)
NASA Astrophysics Data System (ADS)
Stopa, Justin E.; Ardhuin, Fabrice; Huchet, Marion; Accensi, Mickael
2017-04-01
Surface gravity waves are increasingly recognized as playing an important role within the climate system. Wave hindcasts and reanalysis products covering long time series (>30 years) have been instrumental in understanding and describing the wave climate of the past several decades and have allowed a better understanding of extreme waves and inter-annual variability. Wave hindcasts have the advantage of covering the oceans at higher space-time resolution than is possible with conventional observations from satellites and buoys. Wave reanalysis systems like ECMWF's ERA-Interim directly include a wave model coupled to the ocean and atmosphere; otherwise, reanalysis wind fields are used to drive a wave model to reproduce the wave field in long time series. The ERA-Interim dataset is consistent in time but cannot adequately resolve extreme waves. On the other hand, the NCEP Climate Forecast System Reanalysis (CFSR) wind field better resolves extreme wind speeds, but suffers from discontinuous features in time due to the quantity and quality of the remote sensing data incorporated into the product. Therefore, a consistent hindcast that resolves extreme waves still eludes us, limiting our understanding of the wave climate. In this study, we systematically correct the CFSR wind field to reproduce a wave field homogeneous in time. To verify the homogeneity of our hindcast, we compute error metrics on a monthly basis using observations from a merged altimeter wave database which has been calibrated and quality controlled from 1985-2016. Before 1985, only a few wave observations exist, limited to a small number of wave buoys, mostly in the Northern Hemisphere. We therefore supplement our wave observations with seismic data, which respond to the nonlinear wave interactions created by opposing waves with nearly equal wavenumbers. Within the CFSR wave hindcast, we find both spatial and temporal discontinuities in the error metrics. The Southern Hemisphere often has wind speed biases larger than the Northern Hemisphere, and we propose a simple correction to reduce these features by applying a taper shaped by a half-Hanning window. The discontinuous features in time are corrected by scaling the entire wind field by percentages typically ranging from 1% to 3%. Our analysis is performed on monthly time series, and we expect the monthly statistics to be adequate for climate studies.
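The half-Hanning taper idea can be sketched as a latitude-dependent multiplicative correction that ramps on smoothly; the window span and scaling factor below are assumptions for illustration, not the values used in the study.

```python
import numpy as np

def half_hanning_taper(lat, lat_start=0.0, lat_end=-60.0, max_scale=0.97):
    """Wind-speed factor: 1 at lat_start, ramping to max_scale past lat_end.

    The ramp is a half-Hanning (half-cosine) window, so the correction turns
    on without discontinuities. All parameter values are assumed placeholders.
    """
    t = np.clip((lat - lat_start) / (lat_end - lat_start), 0.0, 1.0)
    w = 0.5 * (1.0 - np.cos(np.pi * t))           # half-Hanning ramp, 0 -> 1
    return 1.0 + (max_scale - 1.0) * w

lats = np.array([10.0, -20.0, -40.0, -70.0])
print(half_hanning_taper(lats))   # smooth transition toward 0.97 going south
```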
Purification of Logic-Qubit Entanglement.
Zhou, Lan; Sheng, Yu-Bo
2016-07-05
Recently, logic-qubit entanglement has shown its potential application in future quantum communication and quantum networks. However, the entanglement will suffer from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement is equivalent to the bit-flip error in logic-qubit entanglement, which can also be purified. This entanglement purification protocol may provide some potential applications in future quantum communication and quantum networks.
Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications
NASA Technical Reports Server (NTRS)
Koshak, William; Mach, D. M.; Christian H. J.; Stewart, M. F.; Bateman M. G.
2006-01-01
The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m⁻¹ and a 5 V m⁻¹ error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m⁻¹, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m⁻¹ of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch-down method developed in Part I and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.
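The calibration rests on equality-constrained least squares; a generic Lagrange-multiplier (KKT) sketch is given below. This is standard linear algebra under assumed toy dimensions, not the Part I formulation, and the single average-field side constraint is hypothetical.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d via the KKT system
    [2 A^T A  C^T; C  0] [x; lam] = [2 A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    kkt = np.block([[2.0 * A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    return np.linalg.solve(kkt, rhs)[:n]       # the fitted parameters x

# Toy example: 6 mill readings, 3 unknown field components, one side constraint.
rng = np.random.default_rng(1)
A, x_true = rng.normal(size=(6, 3)), np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=6)
C, d = np.ones((1, 3)), np.array([x_true.sum()])  # assumed average-field constraint
print(constrained_lstsq(A, b, C, d))              # ~[1.0, -2.0, 0.5]
```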
ERIC Educational Resources Information Center
Schneider, Omar; Neto, Amarílio Ferreira; da Silva Mello, André; dos Santos, Wagner; Votre, Sebastião Josué; Assunção, Wallace Rocha
2016-01-01
This study investigates the American presence and influences in the physical education press to understand the way in which that presence influenced and contributed to the production of a sports culture in the first half of the twentieth century. As historical sources, the study uses periodicals in the field that were published in the period…
The physics of life: one molecule at a time
Leake, Mark C.
2013-01-01
The esteemed physicist Erwin Schrödinger, whose name is associated with the most notorious equation of quantum mechanics, also wrote a brief essay entitled ‘What is Life?’, asking: ‘How can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?’ The 60+ years following this seminal work have seen enormous developments in our understanding of biology on the molecular scale, with physics playing a key role in solving many central problems through the development and application of new physical science techniques, biophysical analysis and rigorous intellectual insight. The early days of single-molecule biophysics research were centred around molecular motors and biopolymers, largely divorced from a real physiological context. The new generation of single-molecule bioscience investigations has much greater scope, involving robust methods for understanding molecular-level details of the most fundamental biological processes in far more realistic, and technically challenging, physiological contexts, emerging into a new field of ‘single-molecule cellular biophysics’. Here, I outline how this new field has evolved, discuss the key active areas of current research and speculate on where this may all lead in the near future. PMID:23267186
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2016-10-01
Landfills are a significant contributor to anthropogenic methane emissions, but measuring these emissions can be challenging. This work uses numerical simulations to assess the accuracy of the tracer dilution method, which is used to estimate landfill emissions. Atmospheric dispersion simulations with the Weather Research and Forecasting model (WRF) are run over Sandtown Landfill in Delaware, USA, using observation data to validate the meteorological model output. A steady landfill methane emissions rate is used in the model, and methane and tracer gas concentrations are collected along various transects downwind from the landfill for use in the tracer dilution method. The calculated methane emissions are compared to the methane emissions rate used in the model to find the percent error of the tracer dilution method for each simulation. The roles of different factors are examined: measurement distance from the landfill, transect angle relative to the wind direction, speed of the transect vehicle, tracer placement relative to the hot spot of methane emissions, complexity of topography, and wind direction. Results show that percent error generally decreases with distance from the landfill, where the tracer and methane plumes become well mixed. Tracer placement has the largest effect on percent error, and topography and wind direction both have significant effects, with measurement errors ranging from -12% to 42% over all simulations. Transect angle and transect speed have small to negligible effects on the accuracy of the tracer dilution method. These tracer dilution method simulations provide insight into measurement errors that might occur in the field, enhance understanding of the method's limitations, and aid interpretation of field data.
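The tracer dilution method itself reduces to one ratio of plume-integrated concentrations; the sketch below implements the standard form with synthetic plumes (the choice of acetylene as the tracer and all numbers are assumptions, not values from the study).

```python
import numpy as np

M_CH4, M_C2H2 = 16.04, 26.04   # molar masses, g/mol; acetylene tracer assumed

def tracer_dilution(q_tracer, x, c_ch4, c_tracer, m_gas=M_CH4, m_tracer=M_C2H2):
    """Methane emission rate from above-background mole fractions on a transect:
    Q_CH4 = Q_tracer * (integral C_CH4 / integral C_tracer) * (M_CH4 / M_tracer)."""
    ratio = np.trapz(c_ch4, x) / np.trapz(c_tracer, x)
    return q_tracer * ratio * (m_gas / m_tracer)

# Synthetic downwind transect: Gaussian plumes, tracer released at 0.5 g/s.
x = np.linspace(0.0, 500.0, 101)                  # transect coordinate, m
c_tracer = np.exp(-((x - 250.0) / 60.0) ** 2)     # tracer mole fraction (arb. units)
c_ch4 = 3.0 * np.exp(-((x - 260.0) / 65.0) ** 2)  # methane enhancement (arb. units)
print(tracer_dilution(0.5, x, c_ch4, c_tracer), "g/s")
```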
Panagopoulos, Dimitris J.; Johansson, Olle; Carlo, George L.
2013-01-01
Purpose: To evaluate SAR as a dosimetric quantity for EMF bioeffects, and to identify ways of increasing the precision of EMF dosimetry and bioactivity assessment. Methods: We discuss the interaction of man-made electromagnetic waves with biological matter and calculate the energy transferred to a single free ion within a cell. We analyze the physics and biology of SAR and evaluate the methods of its estimation. We discuss the experimentally observed non-linearity between electromagnetic exposure and biological effect. Results: We find that: (a) the energy absorbed by living matter during exposure to environmentally accounted EMFs is normally well below the thermal level; (b) all existing methods for SAR estimation, especially those based upon tissue conductivity and the internal electric field, have serious deficiencies; (c) the only way to estimate SAR without large error is by measuring temperature increases within biological tissue, which are normally negligible at environmental EMF intensities and thus cannot be measured. Conclusions: SAR actually refers to thermal effects, while the vast majority of the recorded biological effects from man-made non-ionizing environmental radiation are non-thermal. Even if SAR could be accurately estimated for a whole tissue, organ, or body, the biological/health effect is determined by tiny amounts of energy/power absorbed by specific biomolecules, which cannot be calculated. Moreover, it depends upon field parameters not taken into account in the SAR calculation. Thus, SAR should not be used as the primary dosimetric quantity, but only as a complementary measure, always reporting the estimation method and the corresponding error. Radiation/field intensity, along with additional physical parameters (such as frequency, modulation, etc.) that can be directly and in any case more accurately measured on the surface of biological tissues, should constitute the primary measure for EMF exposures, despite a similar uncertainty in predicting the biological effect due to non-linearity. PMID:23750202
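For reference, the two textbook definitions of SAR that the abstract's critique revolves around are sketched below (standard forms: σ is tissue conductivity, ρ mass density, E the internal electric field, and c_p the tissue's specific heat capacity; the temperature-rise form is the one the authors single out as least error-prone):

```latex
\mathrm{SAR} \;=\; \frac{\sigma \, |E|^{2}}{\rho}
\qquad \text{or} \qquad
\mathrm{SAR} \;=\; c_{p} \left. \frac{\mathrm{d}T}{\mathrm{d}t} \right|_{t=0} .
```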
The Soil Sink for Nitrous Oxide: Trivial Amount but Challenging Question
NASA Astrophysics Data System (ADS)
Davidson, E. A.; Savage, K. E.; Sihi, D.
2015-12-01
Net uptake of atmospheric nitrous oxide (N2O) has been observed sporadically for many years. Such observations have often been discounted as measurement error or noise, but they were reported frequently enough to gain some acceptance as valid. The advent of fast-response field instruments with good sensitivity and precision has permitted confirmation that some soils can be small sinks of N2O. With regard to "closing the global N2O budget", the soil sink is trivial, because it is smaller than the error terms of most other budget components. Although not important from a global budget perspective, the existence of a soil sink for atmospheric N2O presents a fascinating challenge for understanding the physical, chemical, and biological processes that explain the sink. Reduction of N2O by classical biological denitrification requires reducing conditions generally found in wet soil, and yet we have measured the N2O sink in well-drained soils, where we also simultaneously measure a sink for atmospheric methane (CH4). Co-occurrence of N2O reduction and CH4 oxidation would require a broad range of microsite conditions within the soil, spanning high and low oxygen concentrations. Abiotic sinks for N2O or other biological processes that consume N2O could exist, but have not yet been identified. We are attempting to simulate the diffusion of N2O, CH4, and O2 from the atmosphere and within a soil profile to determine whether classical biological N2O reduction and CH4 oxidation at rates consistent with measured fluxes are plausible.
Field Scale Spatial Modelling of Surface Soil Quality Attributes in Controlled Traffic Farming
NASA Astrophysics Data System (ADS)
Guenette, Kris; Hernandez-Ramirez, Guillermo
2017-04-01
The employment of controlled traffic farming (CTF) can yield improvements in soil quality attributes through the confinement of equipment traffic to tramlines within the field. There is a need to quantify and explain the spatial heterogeneity of soil quality attributes affected by CTF to further improve our understanding and modelling ability of field-scale soil dynamics. Soil properties such as available nitrogen (AN), pH, soil total nitrogen (STN), soil organic carbon (SOC), bulk density, macroporosity, soil quality S-Index, plant available water capacity (PAWC) and unsaturated hydraulic conductivity (Km) were analysed and compared among trafficked and un-trafficked areas. We contrasted standard geostatistical methods such as ordinary kriging (OK) and covariate kriging (COK) as well as the hybrid method of regression kriging (ROK) to predict the spatial distribution of soil properties across two annual cropland sites actively employing CTF in Alberta, Canada. Field-scale variability was quantified more accurately through the inclusion of covariates; however, the use of ROK was shown to improve model accuracy despite the regression model composition limiting the robustness of the ROK method. The exclusion of traffic from the un-trafficked areas yielded significant improvements in bulk density, macroporosity and Km while subsequently enhancing AN, STN and SOC. The ability of the regression models and the ROK method to account for spatial trends led to the highest goodness-of-fit and lowest error for the soil physical properties, as the rigid traffic regime of CTF altered their spatial distribution at the field scale. Conversely, the COK method produced the most optimal predictions for the soil nutrient properties and Km. The use of terrain covariates derived from light detection and ranging (LiDAR), such as elevation and the topographic position index (TPI), yielded the best models in the COK method at the field scale.
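A minimal regression-kriging sketch follows: regress the attribute on terrain covariates, then model the residuals as a spatially correlated field. A scikit-learn Gaussian-process regressor stands in for the kriging step here; all variable names, kernel settings and data are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(80, 2))                   # sample locations, m
covars = np.c_[xy[:, 0] * 0.01, np.sin(xy[:, 1] / 200)]   # stand-ins for elevation, TPI
z = 1.2 + 0.5 * covars[:, 0] - 0.8 * covars[:, 1] + rng.normal(0, 0.1, 80)

trend = LinearRegression().fit(covars, z)                 # 1) trend from covariates
resid = z - trend.predict(covars)                         # 2) residuals at samples
gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.01)).fit(xy, resid)

xy_new = np.array([[500.0, 500.0]])                       # prediction location
cov_new = np.array([[500.0 * 0.01, np.sin(500.0 / 200)]]) # its covariate values
z_hat = trend.predict(cov_new) + gp.predict(xy_new)       # trend + "kriged" residual
print(z_hat)
```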
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano
2017-09-01
The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone of two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined over the three phases of the AQMEII activity, aimed at building up a diagnostic methodology for model evaluation, is pursued here, and novel diagnostic methods are proposed. In addition to evaluating the base-case simulation, in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing detection of the timescales and fields to which the two models are most sensitive. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) fluctuations slower than ~1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the total quadratic error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in summer in both Europe and North America); (iv) the CMAQ ozone error has a weak/negligible dependence on the errors in NO2, while the error in NO2 significantly impacts the ozone error produced by Chimere; and (v) the response of the models to variations of anthropogenic emissions and boundary conditions shows pronounced spatial heterogeneity, while the seasonal variability of the response is less marked.
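The bias/variance/covariance split used here is the standard Murphy-style decomposition of mean-square error; the sketch below verifies the identity on synthetic "hourly ozone" arrays (made-up data, not AQMEII results).

```python
import numpy as np

def mse_components(model, obs):
    """MSE = (mean_m - mean_o)^2 + (sd_m - sd_o)^2 + 2*sd_m*sd_o*(1 - r)."""
    bias2 = (model.mean() - obs.mean()) ** 2
    var = (model.std() - obs.std()) ** 2
    covar = 2.0 * model.std() * obs.std() * (1.0 - np.corrcoef(model, obs)[0, 1])
    return bias2, var, covar

rng = np.random.default_rng(3)
obs = 40 + 10 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 2, 500)
model = 0.9 * obs + 6 + rng.normal(0, 3, 500)     # synthetic hourly ozone, ppb
b2, v, c = mse_components(model, obs)
print(f"bias^2={b2:.2f}  variance={v:.2f}  covariance={c:.2f}  sum={b2 + v + c:.2f}")
print(f"check: MSE = {np.mean((model - obs) ** 2):.2f}")   # matches the sum
```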
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Bipasha; Davies, C. T. H.; de Oliveira, P. G.
We determine the contribution to the anomalous magnetic moment of the muon from the $\alpha^2_{\mathrm{QED}}$ hadronic vacuum polarization diagram using full lattice QCD, including $u/d$ quarks with physical masses for the first time. We use gluon field configurations that include $u$, $d$, $s$ and $c$ quarks in the sea at multiple values of the lattice spacing, multiple $u/d$ masses and multiple volumes, allowing an analysis of finite-volume effects. We obtain a result for $a_{\mu}^{\mathrm{HVP,LO}}$ of $667(6)(12)\times 10^{-10}$, where the first error is from the lattice calculation and the second includes systematic errors from missing QED and isospin-breaking effects and from quark-line disconnected diagrams. Our result implies a discrepancy between the experimental determination of $a_{\mu}$ and the Standard Model of $3\sigma$.
Using computational modeling of river flow with remotely sensed data to infer channel bathymetry
Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.
2012-01-01
As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates, and the resulting loss of accuracy cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
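The inversion idea can be seen in one dimension, where conservation of mass alone gives depth from a known discharge and velocity; a toy sketch with synthetic data follows, also showing how a small velocity error propagates into the inferred depth. It is not the paper's full mass-and-momentum inversion.

```python
# 1-D toy depth inversion: steady flow in a channel of unit width obeys
# q = h(x) U(x), so depth follows from discharge and depth-averaged velocity.
import numpy as np

x = np.linspace(0, 1000, 201)                       # streamwise coordinate (m)
h_true = 2.0 + 0.5 * np.sin(2 * np.pi * x / 400)    # "true" depth (m)
q = 6.0                                             # discharge per unit width (m^2/s)
u_true = q / h_true                                 # exact depth-averaged velocity

rng = np.random.default_rng(2)
u_obs = u_true + rng.normal(0, 0.05, x.size)        # small observation error

h_est = q / u_obs                                   # inverted depth
print("max depth error (m):", np.abs(h_est - h_true).max())
```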
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus-based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
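The traditional hierarchical-surplus strategy the authors compare against can be illustrated in one dimension: a minimal sketch, assuming a piecewise-linear interpolant and a made-up test function, where the surplus at a candidate midpoint decides whether that point is refined.

```python
# 1-D sketch of hierarchical-surplus-driven refinement: the surplus at a
# candidate point is f(x) minus the current interpolant's value there, and
# only intervals whose midpoint surplus exceeds a tolerance are subdivided.
import numpy as np

f = lambda x: np.exp(-40 * (x - 0.3) ** 2)          # localized test function

nodes, vals = [0.0, 1.0], [f(0.0), f(1.0)]

def interp(x):
    order = np.argsort(nodes)
    return np.interp(x, np.asarray(nodes)[order], np.asarray(vals)[order])

active = [(0.0, 1.0)]                               # intervals eligible for refinement
for level in range(12):
    nxt = []
    for a, b in active:
        m = 0.5 * (a + b)
        surplus = f(m) - interp(m)                  # hierarchical surplus at midpoint
        if abs(surplus) > 1e-3:                     # refine only where it matters
            nodes.append(m); vals.append(f(m))
            nxt += [(a, m), (m, b)]
    if not nxt:
        break
    active = nxt

print(len(nodes), "adaptively placed nodes")
```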
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
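The memory mechanism, local truncation error modeled as a sum of time-correlated random variables, can be sketched with an AR(1) process; rho and sigma below are invented stand-ins for values that would be estimated from high-resolution near-initial-time information.

```python
# Time-correlated truncation-error model: a stationary AR(1) process whose
# lag-one correlation rho carries the "memory" between successive steps.
import numpy as np

rho, sigma, nsteps = 0.9, 1e-4, 500
rng = np.random.default_rng(3)

eps = np.empty(nsteps)
eps[0] = rng.normal(0, sigma)
for k in range(1, nsteps):
    # innovation scaled so Var(eps) stays sigma^2 at every step
    eps[k] = rho * eps[k - 1] + np.sqrt(1 - rho**2) * rng.normal(0, sigma)

accumulated = np.cumsum(eps)                        # error carried forward in time
print("std of accumulated error:", accumulated.std())
```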
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morcrette, Cyril J.; Van Weverberg, Kwinten; Ma, H
2018-02-16
The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that lead to the warm screen-temperature biases over the American Midwest seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the re-analyses. At the Southern Great Plains (SGP) site, the model biases are shown not to be confined to the surface, but to extend several kilometres into the atmosphere. In most of the models there is a strong diurnal cycle in the screen-temperature bias; in some models the biases are largest around midday, while in others they are largest during the night. While the different physical processes contributing to a given model's screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3), the spatial coherence in the phase of the diurnal cycle of the error across wide regions, and the numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with that at SGP, suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.
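A diurnal-cycle composite of the kind underlying this diagnostic reduces to a group-by over hour of day; a minimal sketch follows with a synthetic error series (the column name and values are invented, not CAUSES data).

```python
# Composite the screen-temperature error by hour of day to expose whether a
# model's bias peaks around midday or during the night.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
times = pd.date_range("2011-05-01", periods=5 * 24, freq="h")
hours = times.hour.to_numpy()
error = 1.0 + 1.5 * np.sin((hours - 9) / 24 * 2 * np.pi) + rng.normal(0, 0.3, times.size)

df = pd.DataFrame({"t2m_error_K": error}, index=times)
diurnal_bias = df.groupby(df.index.hour)["t2m_error_K"].mean()
print(diurnal_bias.round(2))                        # one composite bias per hour
```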
Analysis and Simulation of Near-Field Wave Motion Data from the Source Physics Experiment Explosions
2011-09-01
understanding and ability to model explosively generated seismic waves, particularly S-waves. The first SPE explosion (SPE1) consisted of a 100 kg shot at a...depth of 60 meters in granite (Climax Stock). The shot was well-recorded by an array of over 150 instruments, including both near-field wave motion...measurements as well as far-field seismic measurements. This paper focuses on measurements and modeling of the near-field data. A complementary
Knowledge evolution in physics research: An analysis of bibliographic coupling networks.
Liu, Wenyuan; Nanetti, Andrea; Cheong, Siew Ann
2017-01-01
Even as we advance the frontiers of physics knowledge, our understanding of how this knowledge evolves remains at the descriptive levels of Popper and Kuhn. Using the American Physical Society (APS) publications data sets, we ask in this paper how new knowledge is built upon old knowledge. We do so by constructing year-to-year bibliographic coupling networks, and identify in them validated communities that represent different research fields. We then visualize their evolutionary relationships in the form of alluvial diagrams, and show how they remain intact through APS journal splits. Quantitatively, we see that most fields undergo weak Popperian mixing, and it is rare for a field to remain isolated or to undergo strong mixing. The sizes of fields obey a simple linear growth with recombination. We can also reliably predict the merging of two fields, but not the considerably more complex splitting. Finally, we report a case study of two fields that underwent repeated merging and splitting around 1995, and how these Kuhnian events are correlated with breakthroughs on Bose-Einstein condensation (BEC), quantum teleportation, and slow light. This impact showed up quantitatively in the citations of the BEC field as a larger proportion of references dating from during and shortly after these events.
NASA Astrophysics Data System (ADS)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-21
Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. As applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Finally, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
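A small helper of the kind such a comparison needs, computing the angular separation between a derived field direction and a model field direction; the vectors below are hypothetical, and the paper's proxy method itself is not reproduced.

```python
# Angular error (degrees) between two field-direction unit vectors.
import numpy as np

def angular_error_deg(u, v):
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

b_derived = np.array([0.10, 0.20, 0.97])        # hypothetical derived direction
b_model = np.array([0.05, 0.25, 0.96])          # hypothetical model direction
print(f"{angular_error_deg(b_derived, b_model):.2f} deg")
```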
Pushing particles in extreme fields
NASA Astrophysics Data System (ADS)
Gordon, Daniel F.; Hafizi, Bahman; Palastro, John
2017-03-01
The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
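For reference, the Boris scheme the paper analyzes splits the momentum update into a half electric impulse, an energy-preserving magnetic rotation, and a second half impulse; a minimal non-relativistic sketch follows (the paper's improved extreme-field scheme is not reproduced, and the parameter values are illustrative).

```python
# Standard (non-relativistic) Boris push for one time step.
import numpy as np

def boris_push(v, E, B, q, m, dt):
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                     # first half electric impulse
    t = qmdt2 * B                               # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)     # magnitude-preserving rotation
    return v_plus + qmdt2 * E                   # second half electric impulse

v = np.array([1.0e5, 0.0, 0.0])
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])                   # pure-rotation case
v_new = boris_push(v, E, B, q=1.602e-19, m=9.109e-31, dt=1e-12)
print(np.linalg.norm(v_new) / np.linalg.norm(v))  # ~1: the magnetic field does no work
```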
Magnetic Field Measurements of the Spotted Yellow Dwarf DE Boo During 2001-2004
NASA Astrophysics Data System (ADS)
Plachinda, S.; Baklanova, D.; Butkovskaya, V.; Pankov, N.
2017-06-01
Spectropolarimetric observations of DE Boo were performed at the Crimean Astrophysical Observatory during 18 nights in 2001-2004. We present the results of the longitudinal magnetic field measurements on this star. The magnetic field varies from +44 G to -36 G with a mean standard error (SE) of 8.2 G. For the full array of magnetic field measurements, the difference between experimental errors and Monte Carlo errors is not statistically significant.
Gravitational collapse of dark energy field configurations and supermassive black hole formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jhalani, V.; Kharkwal, H.; Singh, A., E-mail: anupamsingh.iitk@gmail.com
Dark energy is the dominant component of the total energy density of our Universe. The primary interaction of dark energy with the rest of the Universe is gravitational. It is therefore important to understand the gravitational dynamics of dark energy. Since dark energy is a low-energy phenomenon from the perspective of particle physics and field theory, a fundamental approach based on fields in curved space should be sufficient to understand its current dynamics. Here, we take a field theory approach to dark energy. We discuss the evolution equations for a generic dark energy field in curved space-time and then discuss the gravitational collapse of dark energy field configurations. We describe the 3+1 BSSN formalism to study the gravitational collapse of fields for any general potential and apply this formalism to models of dark energy motivated by particle physics considerations. We solve the resulting equations for the time evolution of field configurations and the dynamics of space-time. Our results show that gravitational collapse of dark energy field configurations occurs and must be considered in any complete picture of our Universe. We also demonstrate black hole formation resulting from the gravitational collapse of dark energy field configurations. The black holes produced by this collapse are in the supermassive category, with masses comparable to those of the black holes at the centers of galaxies.
Designing an operator interface? Consider user's 'psychology'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toffer, D.E.
The modern operator interface is a channel of communication between operators and the plant that, ideally, provides them with the information necessary to keep the plant running at maximum efficiency. Advances in automation technology have increased information flow from the field to the screen. New and improved Supervisory Control and Data Acquisition (SCADA) packages provide designers with powerful and open design possibilities. All too often, however, systems go to the field designed for the software rather than the operator. Plant operators' jobs have changed fundamentally, from controlling their plants out in the field to doing so from within control rooms. Control room-based operation does not denote idleness: trained operators should be engaged in examining plant status and cognitively evaluating plant efficiencies. Designers, who are often extremely computer literate, frequently do not consider the demographics of field operators. Many field operators have little knowledge of modern computer systems and, as a result, do not take full advantage of the interface's capabilities. Designers often fail to understand the true nature of how operators run their plants. To aid field operators, designers must provide familiar controls and intuitive choices. To achieve success in interface design, it is necessary to understand the ways in which humans think conceptually, and how they process this information physically. The physical and the conceptual are closely related when working with any type of interface. Designers should ask themselves: "What type of information is useful to the field operator?" Let's explore an integration model that contains the following key elements: (1) easily navigated menus; (2) reduced chances for misunderstanding; (3) accurate representations of the plant or operation; (4) consistent and predictable operation; (5) a pleasant and engaging interface that conforms to the operator's expectations. 4 figs.
Going To The Field: Immersing Student Researchers in Coupled Human-Natural Systems
NASA Astrophysics Data System (ADS)
Weissmann, G. S.; Ibarra, R.
2014-12-01
Taking students into the field can offer a rich, grounded understanding of a particular environment and of a particular scientific approach to attaining a desired observation. Going into the field immerses students in coupled human-natural systems, which introduces two key elements to teaching: experiential learning and socio-cultural context. While these elements can greatly enrich student learning, instructors have to take extra steps to scaffold this learning. This scaffolding can present physical scientists with a challenge: how to reconcile views that such pedagogical activities are 'extraneous' or not central to the pursuit of physical science. Here we offer perspectives, as an anthropologist and as an environmental scientist, on the value of a diverse pedagogical approach to conducting field studies involving students. Insights drawn from facilitating a range of field experiences (e.g., short-term study abroad, service-learning and independent/supervised research both at home and abroad) will be shared regarding approaches to scaffolding student learning. We will focus on an approach that the scholarship of teaching and learning has long shown to be effective - what can be called a "wrap-around" approach to the field: preparation before, support during, and reflection afterward. Of these steps, the post-trip reflection on the experience is a key, and often under-utilized, strategy, and suggestions as to why this is so are offered. This approach not only helps students better understand the course content, but also helps them understand the role that socio-cultural context plays in shaping both the research and the state of the environment. We illustrate these different dimensions of the field experience with examples from our courses.
Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields
NASA Astrophysics Data System (ADS)
Bettadpur, S.
2012-04-01
The next-generation Release-05 (RL05) GRACE gravity field data products are the result of extensive improvements to the GRACE Level-1 (tracking) data products, to the background gravity models, and to the processing methodology. As a result, the squared-error upper bound in RL05 fields is half or less of that in RL04 fields. The CSR-RL05 release consists of unconstrained gravity fields as well as a regularized gravity field time series that can be used for several applications without any post-processing error reduction. This paper will describe the background and nature of these improvements in the data products and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse hydrologic, oceanographic and cryospheric processes.
Pérez-Hernández, J A; Roso, L; Plaja, L
2009-06-08
The physics of laser-matter interactions beyond the perturbative limit configures the field of extreme non-linear optics. Although most experiments have been done in the near infrared (lambda
Bogdan Allemann, Inja; Kaufman, Joely
2011-01-01
Since the construction of the first laser in the 1960s, the role that lasers play in various medical specialities, including dermatology, has steadily increased. However, within the last 2 decades, the technological advances and the use of lasers in the field of dermatology have virtually exploded. Many treatments have only become possible with the use of lasers. Especially in aesthetic medicine, lasers are an essential tool in the treatment armamentarium. Due to better research and understanding of the physics of light and skin, there is now a wide and increasing array of different lasers and devices to choose from. The proper laser selection for each indication and treatment requires a profound understanding of laser physics and the basic laser principles. Understanding these principles will allow the laser operator to obtain better results and help avoid complications. This chapter will give an in-depth overview of the physical principles relevant in cutaneous laser surgery. Copyright © 2011 S. Karger AG, Basel.
CIV VUV FPI Interferometer for Transition Region Magnetography
NASA Technical Reports Server (NTRS)
Gary, G. A.
2005-01-01
Much in the same way photonics harnesses light for engineering and technology applications, solar physics harnesses light for the remote sensing of the sun. In photonics the vacuum ultraviolet region offers shorter wavelengths and higher energies per photon, while in solar physics the VUV allows the remote sensing of the upper levels of the solar atmosphere where magnetic fields dominate the physics. Understanding solar magnetism is a major aim for astrophysics and for understanding solar-terrestrial interaction. The poster describes our instrument development program for a high-spectral-resolution, high-finesse Vacuum Ultraviolet Fabry-Perot Interferometer (VUV FPI) for obtaining narrow-passband images, magnetograms, and Dopplergrams of the transition region emission line of CIV (155 nm). The poster will cover how the VUV interferometer will allow us to understand solar magnetism, what is special about the MSFC VUV FPI, and why the University of Toronto F2 excimer laser has been of particular value to this program.
PREFACE: Focus section on Hadronic Physics
NASA Astrophysics Data System (ADS)
Roberts, Craig; Swanson, Eric
2007-07-01
Hadronic physics is the study of strongly interacting matter and its underlying theory, Quantum Chromodynamics (QCD). The field had its beginnings after World War Two, when hadrons were discovered in ever increasing numbers. Today, it encompasses topics like the quark-gluon structure of hadrons at varying scales, the quark-gluon plasma and hadronic matter at extreme temperature and density; it also underpins nuclear physics and has significant impact on particle physics, astrophysics, and cosmology. Among the goals of hadronic physics are to determine the parameters of QCD, understand the origin and characteristics of confinement, understand the dynamics and consequences of dynamical chiral symmetry breaking, explore the role of quarks and gluons in nuclei and in matter under extreme conditions and understand the quark and gluon structure of hadrons. In general, the process is one of discerning the relevant degrees of freedom and relating these to the fundamental fields of QCD. The emphasis is on understanding QCD, rather than testing it. The papers gathered in this special focus section of Journal of Physics G: Nuclear and Particle Physics attempt to cover this broad range of subjects. Alkofer and Greensite examine the issue of quark and gluon confinement with the focus on models of the QCD vacuum, lattice gauge theory investigations, and the relationship to the AdS/CFT correspondence postulate. Arrington et al. review nucleon form factors and their role in determining quark orbital momentum, the strangeness content of the nucleon, meson cloud effects, and the transition from nonperturbative to perturbative QCD dynamics. The physics associated with hadronic matter at high temperature and density and at low Bjorken-x at the Relativistic Heavy Ion Collider (RHIC), the SPS at CERN, and at the future LHC is summarized by d'Enterria. The article by Lee and Smith examines experiment and theory associated with electromagnetic meson production from nucleons and illustrates how the structure of the nucleon is revealed. Reimer reviews how the Drell-Yan process can be used to explore the sea quark structure of nucleons, thereby probing such phenomena as flavour asymmetry in the nucleon and nuclear medium modification of nucleon properties. The exploitation of the B factories has led to a resurgence of interest in heavy quark spectroscopy. Concurrently, interest in light quark spectroscopy and gluonic excitations remains high, with several new experimental efforts in the planning or building stages. The current status of all of this is reviewed by Rosner. Finally, Vogelsang summarizes the status of polarized deep inelastic lepton-nucleon scattering experiments at RHIC and their impact on the theoretical understanding of nucleon helicity structure, gluon polarization in the nucleus, and transverse spin asymmetries. Of course, hadronic physics is a much broader subject than can be conveyed in this special focus section; advances in effective field theory, lattice gauge theory, generalised parton distributions and many other subfields are not covered here. Nevertheless, we hope that this focus section will help the reader appreciate the vitality, breadth of endeavour, and the phenomenological richness of hadronic physics.
Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers
NASA Technical Reports Server (NTRS)
Ha, Eunho; North, Gerald R.
1995-01-01
Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
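The origin of the beam-filling bias, a nonlinear response applied to a heterogeneous field, is easy to demonstrate numerically; the sketch below uses a lognormal rain field and a made-up saturating response function in place of a radiative-transfer model.

```python
# Toy beam-filling bias: because the brightness-temperature response f(R) is
# nonlinear (concave), retrieving rain from the footprint-mean signal,
# f^-1(mean f(R)), underestimates the true footprint-mean rain rate.
import numpy as np

f = lambda R: 1.0 - np.exp(-0.1 * R)            # normalized, saturating response
finv = lambda y: -10.0 * np.log(1.0 - y)        # its exact inverse

rng = np.random.default_rng(5)
R = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)   # sub-footprint rain rates

retrieved = finv(f(R).mean())                   # what one footprint-scale retrieval sees
print(f"true mean rain: {R.mean():.2f}, retrieved: {retrieved:.2f}")
```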
Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D
NASA Astrophysics Data System (ADS)
La Haye, R. J.; Paz-Soldan, C.; Strait, E. J.
2015-02-01
DIII-D experiments show that fully penetrated resonant n = 1 error field locked modes in ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n = 2/1) static error fields are shielded in ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption. Error field correction (EFC) is performed on DIII-D (in ITER-relevant shape and safety factor q95 ≳ 3) with either the n = 1 C-coil (no handedness) or the n = 1 I-coil (with 'dominantly' resonant field pitch). Despite EFC, which allows significantly lower plasma density (a 'figure of merit') before penetration occurs, the resulting saturated islands have similar large size; they differ only in the phase of the locked mode after typically being pulled (by up to 30° toroidally) in the electron diamagnetic drift direction as they grow to saturation. Island amplification and phase shift are explained by a second change-of-state in which the classical tearing index changes from stable to marginal by the presence of the island, which changes the current density profile. The eventual island size is thus governed by the inherent stability and saturation mechanism rather than the driving error field.
First measurements of error fields on W7-X using flux surface mapping
Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...
2016-08-03
Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field $\rlap{-}\iota = 1/2$ magnetic configuration (where $\rlap{-}\iota = \iota/2\pi$), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ~0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
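The ML estimator described can be sketched directly from the Poisson point-process likelihood: photon arrival times follow an inhomogeneous rate lam(t − tau), and the estimate maximizes the summed log-rate over candidate delays. The pulse shape, rates, and grid below are made up, not the paper's.

```python
# ML time-of-arrival from photon-counting data. Over a long window the
# -integral(lam) term of the Poisson log-likelihood is ~constant in tau,
# so only sum_i log lam(t_i - tau) is maximized here.
import numpy as np

def lam(t, tau, sig=1.0, peak=50.0, dark=1.0):
    return dark + peak * np.exp(-0.5 * ((t - tau) / sig) ** 2)

rng = np.random.default_rng(6)
tau_true, T = 3.7, 20.0

# Simulate the inhomogeneous Poisson process by thinning a homogeneous one.
lam_max = lam(tau_true, tau_true)
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
keep = rng.uniform(0, 1, cand.size) < lam(cand, tau_true) / lam_max
photons = cand[keep]

taus = np.linspace(0, T, 2001)
loglik = [np.sum(np.log(lam(photons, tau))) for tau in taus]
print("ML estimate:", taus[int(np.argmax(loglik))], "true:", tau_true)
```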
Sobel, Michael E; Lindquist, Martin A
2014-07-01
Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment, where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates" in a linear mixed model that estimates both the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.
Probabilistic In Situ Stress Estimation and Forecasting using Sequential Data Assimilation
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Dinther, Y.; Kuensch, H. R.
2017-12-01
Our physical understanding and forecasting ability of earthquakes, and other solid Earth dynamic processes, is significantly hampered by limited indications on the evolving state of stress and strength on faults. Integrating observations and physics-based numerical modeling to quantitatively estimate this evolution of a fault's state is crucial. However, systematic attempts are limited and tenuous, especially in light of the scarcity and uncertainty of natural data and the difficulty of modelling the physics governing earthquakes. We adopt the statistical framework of sequential data assimilation - extensively developed for weather forecasting - to efficiently integrate observations and prior knowledge in a forward model, while acknowledging errors in both. To prove this concept we perform a perfect model test in a simplified subduction zone setup, where we assimilate synthetic noised data on velocities and stresses from a single location. Using an Ensemble Kalman Filter, these data and their errors are assimilated to update 150 ensemble members from a Partial Differential Equation-driven seismic cycle model. Probabilistic estimates of fault stress and dynamic strength evolution capture the truth exceptionally well. This is possible, because the sampled error covariance matrix contains prior information from the physics that relates velocities, stresses and pressure at the surface to those at the fault. During the analysis step, stress and strength distributions are thus reconstructed such that fault coupling can be updated to either inhibit or trigger events. In the subsequent forecast step the physical equations are solved to propagate the updated states forward in time and thus provide probabilistic information on the occurrence of the next event. At subsequent assimilation steps, the system's forecasting ability turns out to be significantly better than that of a periodic recurrence model (requiring an alarm 17% vs. 68% of the time). This thus provides distinct added value with respect to using observations or numerical models separately. Although several challenges for applications to a natural setting remain, these first results indicate the large potential of data assimilation techniques for probabilistic seismic hazard assessment and other challenges in dynamic solid earth systems.
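A minimal sketch of the Ensemble Kalman Filter analysis step described (150 members, a few observed surface quantities) follows, assuming a linear observation operator H and made-up dimensions; the perturbed-observation form is used, in which the ensemble-sampled cross-covariance spreads surface information to the unobserved fault state.

```python
# One stochastic (perturbed-observation) EnKF analysis step.
import numpy as np

rng = np.random.default_rng(7)
n_state, n_obs, n_ens = 50, 3, 150
X = rng.normal(0, 1, (n_state, n_ens))          # forecast ensemble (prior)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), [0, 1, 2]] = 1.0            # observe the first 3 state entries
R = 0.1 * np.eye(n_obs)                         # observation-error covariance
y = np.array([0.5, -0.2, 0.1])                  # observations

A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
HX = H @ X
HA = HX - HX.mean(axis=1, keepdims=True)

Pxy = A @ HA.T / (n_ens - 1)                    # sampled cross-covariance
Pyy = HA @ HA.T / (n_ens - 1) + R
K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain

Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_analysis = X + K @ (Y - HX)                   # updated (analysis) ensemble
print(X_analysis.shape)
```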
NASA Astrophysics Data System (ADS)
Mandal, Swagata; Saini, Jogender; Zabołotny, Wojciech M.; Sau, Suman; Chakrabarti, Amlan; Chattopadhyay, Subhasis
2017-03-01
Due to the dramatic increase of data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is very much needed to gather the data generated during different nuclear interactions. As the DAQ works in a harsh radiation environment, there is a fair chance of data corruption due to various energetic particles such as alpha particles, beta particles, or neutrons. Hence, a major challenge in the development of DAQ in the HEP experiment is to establish an error resilient communication system between front-end sensors or detectors and back-end data processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA) due to some of its inherent advantages over the application-specific integrated circuit. A novel orthogonal concatenated code and cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-bit CRC has been used against errors in the configuration memory of the FPGA. Data from front-end sensors will reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to the different data packets. We have also proposed a novel memory management algorithm that helps to process the data at the back-end computing nodes, removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ utilizing an optical link with channel coding and efficient memory management modules can be considered the first of its kind. Performance estimation of the implemented DAQ system is done based on resource utilization, bit error rate, efficiency, and robustness to radiation.
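As an illustration of the CRC protection mentioned (the paper's orthogonal concatenated code is not reproduced here), a bitwise CRC-32 with the common reflected polynomial 0xEDB88320 can be written in a few lines and checked against Python's binascii implementation; this is a generic sketch, not the paper's specific 32-bit CRC.

```python
# Bitwise CRC-32 (reflected polynomial 0xEDB88320), verified against binascii.
import binascii

def crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320   # shift and fold in the polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

msg = b"detector payload"                       # hypothetical packet contents
assert crc32(msg) == binascii.crc32(msg)        # matches the standard CRC-32
print(hex(crc32(msg)))
```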