Science.gov

Sample records for accurately describe observables

  1. Can accurate kinetic laws be created to describe chemical weathering?

    NASA Astrophysics Data System (ADS)

    Schott, Jacques; Oelkers, Eric H.; Bénézeth, Pascale; Goddéris, Yves; François, Louis

    2012-11-01

    Knowledge of the mechanisms and rates of mineral dissolution and growth, especially close to equilibrium, is essential for describing the temporal and spatial evolution of natural processes like weathering and its impact on the CO2 budget and climate. The Surface Complexation approach (SC) combined with Transition State Theory (TST) provides an efficient framework for describing mineral dissolution over wide ranges of solution composition, chemical affinity, and temperature. There has been a long debate, however, about the comparative merits of SC/TST versus classical growth theories for describing mineral dissolution and growth at near-to-equilibrium conditions. This study considers recent results obtained in our laboratory on near-equilibrium dissolution and growth of oxides, hydroxides, silicates, and carbonates, via a combination of complementary microscopic and macroscopic techniques including hydrothermal atomic force microscopy, hydrogen-electrode concentration cells, and mixed-flow and batch reactors. Results show that the dissolution and precipitation of hydroxide, kaolinite, and hydromagnesite powders of relatively high BET surface area closely follow SC/TST rate laws, with a linear dependence of both dissolution and growth rates on the fluid saturation state (Ω) even at conditions very close to equilibrium (|ΔG| < 500 J/mol). This occurs because sufficient reactive sites (e.g. at kinks, steps, and edges) are available at the exposed faces for dissolution and/or growth, allowing reactions to proceed via the direct and reversible detachment/attachment of reactants at the surface. In contrast, for magnesite and quartz, which have low surface areas, fewer active sites are available for growth and dissolution. Such minerals exhibit rate dependencies on Ω at near-equilibrium conditions ranging from linear to highly non-linear functions of Ω, depending on the treatment of the crystals before the reaction. It follows that the form of the f
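    The linear SC/TST rate law discussed above can be illustrated numerically. This is a minimal sketch, assuming the generic near-equilibrium form r = k+(Ω - 1) with ΔG = RT ln Ω; the rate constant used here is arbitrary and not a fitted value from the study.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def tst_rate(k_plus: float, omega: float) -> float:
    """Net rate from a linear SC/TST law: r = k_plus * (omega - 1).
    Positive = growth, negative = dissolution; linear in the fluid
    saturation state omega. k_plus is an illustrative rate constant."""
    return k_plus * (omega - 1.0)

def delta_g(omega: float, temp_k: float = 298.15) -> float:
    """Gibbs free energy of reaction, dG = R*T*ln(omega), in J/mol."""
    return R * temp_k * math.log(omega)

# Very close to equilibrium (|dG| < 500 J/mol) the rate stays linear in omega.
for omega in (0.85, 0.95, 1.0, 1.05, 1.20):
    print(f"omega={omega:.2f}  dG={delta_g(omega):+8.1f} J/mol  "
          f"rate={tst_rate(1e-9, omega):+.2e}")
```

At Ω = 1 both ΔG and the net rate vanish, which is the equilibrium point the abstract's |ΔG| < 500 J/mol window brackets.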

  2. Parameters Describing Earth Observing Remote Sensing Systems

    NASA Technical Reports Server (NTRS)

    Zanoni, Vicki; Ryan, Robert E.; Pagnutti, Mary; Davis, Bruce; Markham, Brian; Storey, Jim

    2003-01-01

    The Earth science community needs to generate consistent and standard definitions for spatial, spectral, radiometric, and geometric properties describing passive electro-optical Earth observing sensors and their products. The parameters used to describe sensors and to describe their products are often confused. In some cases, parameters for a sensor and for its products are identical; in other cases, these parameters vary widely. Sensor parameters are bound by the fundamental performance of a system, while product parameters describe what is available to the end user. Products are often resampled, edge sharpened, pan-sharpened, or compressed, and can differ drastically from the intrinsic data acquired by the sensor. Because detailed sensor performance information may not be readily available to an international science community, standardization of product parameters is of primary importance. Spatial product parameters described include Modulation Transfer Function (MTF), point spread function, line spread function, edge response, stray light, edge sharpening, aliasing, ringing, and compression effects. Spectral product parameters discussed include full width half maximum, ripple, slope edge, and out-of-band rejection. Radiometric product properties discussed include relative and absolute radiometry, noise equivalent spectral radiance, noise equivalent temperature difference, and signal-to-noise ratio. Geometric product properties discussed include geopositional accuracy expressed as CE90, LE90, and root mean square error. Correlated properties discussed include such parameters as band-to-band registration, which is both a spectral and a spatial property. In addition, the proliferation of staring and pushbroom sensor architectures requires new parameters to describe artifacts that are different from traditional cross-track system artifacts. A better understanding of how various system parameters affect product performance is also needed to better ascertain the

  3. The variance needed to accurately describe jump height from vertical ground reaction force data.

    PubMed

    Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran

    2014-12-01

    In functional principal component analysis (fPCA), a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies, and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to describe jump height accurately using vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
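    The threshold-selection step described in this abstract can be mimicked with ordinary PCA on synthetic data. A minimal sketch, assuming variance is measured via singular values of the mean-centered curve matrix; the synthetic "vGRF-like" curves are illustrative only, not the study's data.

```python
import numpy as np

def n_components_for_threshold(curves: np.ndarray, threshold: float) -> int:
    """Number of principal components needed to preserve `threshold`
    (e.g. 0.99) of the total variance of the input curves (rows = trials)."""
    centered = curves - curves.mean(axis=0)
    # Squared singular values are proportional to per-component variance.
    s = np.linalg.svd(centered, compute_uv=False)
    explained = (s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(np.cumsum(explained), threshold) + 1)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
# Synthetic curves: two smooth modes with random weights, plus small noise.
curves = (rng.normal(size=(50, 1)) * np.sin(np.pi * t)
          + rng.normal(size=(50, 1)) * np.sin(2 * np.pi * t)
          + 0.01 * rng.normal(size=(50, 200)))
print(n_components_for_threshold(curves, 0.99))
```

The same routine evaluated at different thresholds (99% vs. 99.9%) shows how the retained-component count grows as more variance is preserved.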

  4. Generalized Stoner-Wohlfarth model accurately describing the switching processes in pseudo-single ferromagnetic particles

    SciTech Connect

    Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru

    2013-12-14

    We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model against full micromagnetic simulations in order to evaluate the performance and limits of our approach.
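    For context, the classical SW model that this abstract generalizes predicts a well-known angular dependence of the switching field (the Stoner-Wohlfarth astroid). The sketch below reproduces only that classical baseline, not the generalized anisotropy energy proposed in the paper.

```python
import math

def sw_switching_field(psi_deg: float) -> float:
    """Classical Stoner-Wohlfarth switching field, in units of the
    anisotropy field H_K, versus the angle psi between the applied
    field and the easy axis:
        h_sw = (cos^(2/3) psi + sin^(2/3) psi)^(-3/2)
    """
    psi = math.radians(psi_deg)
    c = abs(math.cos(psi)) ** (2.0 / 3.0)
    s = abs(math.sin(psi)) ** (2.0 / 3.0)
    return (c + s) ** (-1.5)

for angle in (0, 15, 30, 45, 60, 75, 90):
    print(f"psi = {angle:2d} deg  ->  h_sw = {sw_switching_field(angle):.3f}")
# The classical minimum h_sw = 0.5 occurs at psi = 45 deg.
```

The generalized model of the abstract modifies this curve by changing the anisotropy energy, while keeping the critical-curve construction intact.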

  5. Towards a scalable and accurate quantum approach for describing vibrations of molecule–metal interfaces

    PubMed Central

    Madebene, Bruno; Ulusoy, Inga; Mancera, Luis; Scribano, Yohann; Chulkov, Sergey

    2011-01-01

    We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate, and that it can easily be deployed on computational grids to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters. PMID:22003450

  6. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    SciTech Connect

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-28

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.

  7. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    NASA Astrophysics Data System (ADS)

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-01

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, UV(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing UV, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that UV accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.

  8. Describing and compensating gas transport dynamics for accurate instantaneous emission measurement

    NASA Astrophysics Data System (ADS)

    Weilenmann, Martin; Soltic, Patrik; Ajtay, Delia

    Instantaneous emission measurements on chassis dynamometers and engine test benches are becoming increasingly common for car-makers and for environmental emission factor measurement and calculation, since much more information about the formation conditions can be extracted than from the regulated bag measurements (integral values). The common exhaust gas analysers for the "regulated pollutants" (carbon monoxide, total hydrocarbons, nitrogen oxide, carbon dioxide) allow measurement at a rate of one to ten samples per second. This gives the impression of having after-the-catalyst emission information with that chronological precision. It has been shown in recent years, however, that besides the reaction time of the analysers, the dynamics of gas transport in both the exhaust system of the car and the measurement system last significantly longer than 1 s. This paper focuses on the compensation of all these dynamics convolving the emission signals. Most analysers show linear and time-invariant reaction dynamics. Transport dynamics can basically be split into two phenomena: a pure time delay accounting for the transport of the gas downstream, and a dynamic signal deformation, since the gas is mixed by turbulence along the way. As a result, emission peaks at the sensors are smaller in height and longer in duration than they are after the catalyst. These dynamics can be modelled using differential equations. Both mixing dynamics and time delay are constant for modelling a raw gas analyser system, since the flow in that system is constant. In the exhaust system of the car, however, the parameters depend on the exhaust volume flow. For gasoline cars, the variation in overall transport time may be more than 6 s. It is shown in this paper how all these processes can be described by invertible mathematical models, with the focus on the more complex case of the car's exhaust system. Inversion means that the sharp emission signal at the catalyst out location can be
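    The delay-plus-mixing description above can be illustrated with a toy discrete-time model: a pure transport delay followed by a first-order low-pass (mixing) stage, which is then inverted exactly. This is a simplified sketch with constant parameters, standing in for the paper's differential-equation models with flow-dependent parameters.

```python
import numpy as np

def mix_and_delay(x, alpha, delay):
    """Forward model: pure transport delay plus first-order mixing,
    y[k] = alpha*y[k-1] + (1-alpha)*x[k-delay]  (discrete low-pass)."""
    y = np.zeros(len(x))
    for k in range(len(x)):
        u = x[k - delay] if k >= delay else 0.0
        y[k] = alpha * y[k - 1] + (1.0 - alpha) * u if k else (1.0 - alpha) * u
    return y

def invert(y, alpha, delay):
    """Inverse model: deconvolve the mixing, then shift the delay back."""
    x_hat = np.empty_like(y)
    x_hat[0] = y[0] / (1.0 - alpha)
    x_hat[1:] = (y[1:] - alpha * y[:-1]) / (1.0 - alpha)
    return np.concatenate([x_hat[delay:], np.zeros(delay)])

# A sharp emission "peak" is smeared by transport, then recovered exactly.
x = np.zeros(50)
x[10:13] = 1.0
y = mix_and_delay(x, alpha=0.8, delay=5)
x_rec = invert(y, alpha=0.8, delay=5)
print(np.allclose(x, x_rec))  # → True
```

The smeared signal y has a lower, broader peak than x, which is exactly the deformation the abstract describes at the sensors.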

  9. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    SciTech Connect

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.

    2013-08-01

    We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel^-1 resolution Dopplergrams was acquired in 2001, thanks to the high-rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10^6 individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees generates ridge characteristics, characteristics that do not correspond to the underlying mode characteristics. We used a sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe in detail this modeling and its validation. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix on the accuracy of the procedure has been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. We present and discuss
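    The multi-taper averaging idea can be sketched in a few lines. The example below uses simple sine tapers as a stand-in for the DPSS tapers typically used in helioseismology; the test signal and taper count are illustrative, not the MDI pipeline.

```python
import numpy as np

def multitaper_psd(x, n_tapers=5):
    """Multitaper power-spectrum estimate using sine tapers
    (a simple stand-in for DPSS): averaging periodograms over
    several orthogonal tapers lowers the realization noise at
    the cost of some spectral resolution."""
    n = len(x)
    k = np.arange(1, n_tapers + 1)[:, None]
    tapers = np.sqrt(2.0 / (n + 1)) * np.sin(
        np.pi * k * np.arange(1, n + 1) / (n + 1))
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(scale=0.5, size=n)
psd = multitaper_psd(x, n_tapers=9)
peak_freq = np.argmax(psd) / n  # rfft bin k corresponds to frequency k/n
print(f"peak near f = {peak_freq:.3f} (signal injected at 0.1)")
```

As in the abstract, using many tapers smooths the estimate, which is acceptable when nearby modes blend into ridges anyway.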

  10. Towards accurate observation and modelling of Antarctic glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    King, M.

    2012-04-01

    The response of the solid Earth to glacial mass changes, known as glacial isostatic adjustment (GIA), has received renewed attention in the recent decade thanks to the Gravity Recovery and Climate Experiment (GRACE) satellite mission. GRACE measures Earth's gravity field every 30 days, but cannot partition surface mass changes, such as present-day cryospheric or hydrological change, from changes within the solid Earth, notably due to GIA. If GIA cannot be accurately modelled in a particular region the accuracy of GRACE estimates of ice mass balance for that region is compromised. This lecture will focus on Antarctica, where models of GIA are hugely uncertain due to weak constraints on ice loading history and Earth structure. Over the last years, however, there has been a step-change in our ability to measure GIA uplift with the Global Positioning System (GPS), including widespread deployments of permanent GPS receivers as part of the International Polar Year (IPY) POLENET project. I will particularly focus on the Antarctic GPS velocity field and the confounding effect of elastic rebound due to present-day ice mass changes, and then describe the construction and calibration of a new Antarctic GIA model for application to GRACE data, as well as highlighting areas where further critical developments are required.

  11. A hybrid stochastic-deterministic computational model accurately describes spatial dynamics and virus diffusion in HIV-1 growth competition assay.

    PubMed

    Immonen, Taina; Gibson, Richard; Leitner, Thomas; Miller, Melanie A; Arts, Eric J; Somersalo, Erkki; Calvetti, Daniela

    2012-11-01

    We present a new hybrid stochastic-deterministic, spatially distributed computational model to simulate growth competition assays on a relatively immobile monolayer of peripheral blood mononuclear cells (PBMCs), commonly used for determining ex vivo fitness of human immunodeficiency virus type-1 (HIV-1). The novel features of our approach include incorporation of viral diffusion through a deterministic diffusion model while simulating cellular dynamics via a stochastic Markov chain model. The model accounts for multiple infections of target cells, CD4-downregulation, and the delay between the infection of a cell and the production of new virus particles. The minimum threshold level of infection induced by a virus inoculum is determined via a series of dilution experiments, and is used to determine the probability of infection of a susceptible cell as a function of local virus density. We illustrate how this model can be used for estimating the distribution of cells infected by either a single virus type or two competing viruses. Our model captures experimentally observed variation in the fitness difference between two virus strains, and suggests a way to minimize variation and dual infection in experiments.

  12. Can the Dupuit-Thiem equation accurately describe the flow pattern induced by injection in a laboratory scale aquifer-well system?

    NASA Astrophysics Data System (ADS)

    Bonilla, Jose; Kalwa, Fritz; Händel, Falk; Binder, Martin; Stefan, Catalin

    2016-04-01

    The Dupuit-Thiem equation is normally used to assess flow towards a pumping well in unconfined aquifers under steady-state conditions. Its formulation assumes that flow towards the well is laminar, radial, and horizontal. It is well known that these assumptions are not met in the vicinity of the well; some authors restrict the application of the equation to radii larger than 1.5 times the aquifer thickness. A laboratory scale aquifer-well system (LSAW) was implemented to study aquifer recharge through wells. The LSAW consists of a 1.0 m-diameter tank with a height of 1.1 m, filled with sand, with a screened well of 0.025 m diameter in the center. A regulated outflow system establishes a controlled water level at the tank wall to simulate various aquifer thicknesses. The pressure head at the bottom of the tank can be measured along one axis, every 0.1 m between the well and the tank wall, to assess the flow profile. In this study, the accuracy of the Dupuit-Thiem equation in predicting the pressure head is evaluated as a simple and quick analytical way to describe the flow pattern for different injection rates in the LSAW. To evaluate this accuracy, combinations of different injection rates and aquifer thicknesses were simulated in the LSAW. Contrary to what was expected (significant differences between the measured and calculated pressure heads in the well), the absolute difference between the calculated and measured pressure heads is less than 10%. Moreover, the highest differences are observed not in the well itself but in its near proximity, at a radius of 0.1 m. The results further show that the difference between the calculated and measured pressure heads tends to decrease with higher flow rates. Despite its limitations (the assumption of laminar and horizontal flow throughout the whole aquifer), the Dupuit-Thiem equation is considered to represent the flow system in the LSAW accurately.
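    A minimal sketch of the Dupuit-Thiem head profile for steady injection, under the stated assumptions (laminar, horizontal, radial flow). The injection rate, hydraulic conductivity, and head values below are hypothetical stand-ins loosely matching the LSAW geometry, not the experiment's measured values.

```python
import math

def dupuit_thiem_head(r, q, k_sat, h_outer, r_outer):
    """Hydraulic head h(r) for steady injection Q into an unconfined
    aquifer under the Dupuit assumptions:
        h(r)^2 = h_outer^2 + Q/(pi*K) * ln(r_outer / r)
    Units: metres and m^3/s; K is saturated hydraulic conductivity (m/s)."""
    return math.sqrt(h_outer ** 2 + q / (math.pi * k_sat) * math.log(r_outer / r))

# Hypothetical values loosely matching the LSAW geometry in the abstract:
Q = 2e-5         # injection rate, m^3/s (assumed)
K = 1e-3         # sand hydraulic conductivity, m/s (assumed)
H, R = 0.8, 0.5  # controlled head at the tank wall (m), tank radius (m)
for r in (0.0125, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"r = {r:6.4f} m  ->  h = {dupuit_thiem_head(r, Q, K, H, R):.4f} m")
```

The head decreases monotonically from the well towards the tank wall, where it matches the controlled water level, which is the profile the LSAW pressure measurements are compared against.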

  13. A geometric sequence that accurately describes allowed multiple conductance levels of ion channels: the "three-halves (3/2) rule".

    PubMed Central

    Pollard, J R; Arispe, N; Rojas, E; Pollard, H B

    1994-01-01

    Ion channels can express multiple conductance levels that are not integer multiples of some unitary conductance, and that interconvert among one another. We report here that for 26 different types of multiple conductance channels, all allowed conductance levels can be calculated accurately using the geometric sequence g_n = g_0 (3/2)^n, where g_n is a conductance level and n is an integer >= 0. We refer to this relationship as the "3/2 Rule," because the value of any term in the sequence of conductances (g_n) can be calculated as 3/2 times the value of the preceding term (g_(n-1)). The experimentally determined average value for "3/2" is 1.491 +/- 0.095 (sample size = 37, average +/- SD). We also verify the choice of a 3/2 ratio on the basis of error analysis over the range of ratio values between 1.1 and 2.0. In an independent analysis using Marquardt's algorithm, we further verified the 3/2 ratio and the assignment of specific conductances to specific terms in the geometric sequence. Thus, irrespective of the open time probability, the allowed conductance levels of these channels can be described accurately to within approximately 6%. We anticipate that the "3/2 Rule" will simplify description of multiple conductance channels in a wide variety of biological systems and provide an organizing principle for channel heterogeneity and differential effects of channel blockers. PMID:7524712
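    The "3/2 Rule" itself is a one-line geometric sequence. A minimal sketch; the base conductance of 10 pS is a hypothetical example value, not one of the paper's measured channels.

```python
def conductance_levels(g0: float, n_levels: int) -> list[float]:
    """Allowed conductance levels under the '3/2 Rule':
    g_n = g_0 * (3/2)**n for integer n >= 0, so each level is
    3/2 times the preceding one."""
    return [g0 * (3.0 / 2.0) ** n for n in range(n_levels)]

# Example with a hypothetical base conductance of 10 pS:
levels = conductance_levels(10.0, 5)
print([round(g, 3) for g in levels])
```

Given a set of observed conductances, each level can then be assigned to the nearest term of the sequence and checked against the ~6% tolerance quoted in the abstract.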

  14. BUFR2NetCDF - Converting Observational Data to a Self Describing Archive Format

    NASA Astrophysics Data System (ADS)

    Manross, K.; Caron, J. L.

    2013-12-01

    The majority of observational data collected and distributed by the World Meteorological Organization (WMO) Global Telecommunication System (GTS) are distributed via the BUFR data format. There are many good reasons for this, such as the ability to store nearly any observational data type, flexibility for missing/unused parameters, and file compressibility; in other words, BUFR is a very good transport container. BUFR is a table-driven data format, meaning that separate tables are maintained for the encoding/decoding of the data stored within. The WMO, as well as many other operational data centers such as the National Oceanic and Atmospheric Administration's (NOAA) National Center for Environmental Prediction (NCEP), maintain the metadata tables for storing and retrieving data within a BUFR file. Often the table data is not embedded with the BUFR files (though NCEP does embed the tables), and it can be challenging for the user to extract the table metadata, or to locate the proper version of the table to obtain the needed metadata for a BUFR file. More generally, non-expert users find BUFR a difficult format to parse, and to use and understand correctly. This presentation introduces a tool for converting BUFR data files to the self-describing NetCDF format. Of note is that the resulting NetCDF file will incorporate the new Discrete Sampling Geometries of the Climate and Forecast (CF) metadata convention. This will provide users of archived observational data with greater ease of use, assurance of data and metadata integrity for their research, and improved provenance.

  15. Provenance of things - describing geochemistry observation workflows using PROV-O

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.; Car, N. J.

    2015-12-01

    Geochemistry observations typically follow a complex preparation process after sample retrieval from the field. Descriptions of these are required to allow readers and other data users to assess the reliability of the data produced, and to ensure reproducibility. While laboratory notebooks are used for private record-keeping, and laboratory information systems (LIMS) on a facility basis, this data is not generally published, and there are no standard formats for transfer. And while there is some standardization of workflows, this is often scoped to a lab or an instrument. New procedures and workflows are being developed continually; in fact, this is a key expectation in the development of the science. Thus formalization of the description of sample preparation and observations must be both rigorous and flexible. We have been exploring the use of the W3C Provenance model (PROV) to capture complete traces, including both the real-world things and the data generated. PROV has a core data model that distinguishes between entities, agents, and activities involved in producing a piece of data or thing in the world. While the design of PROV was primarily conditioned by stories concerning information resources, its application is not restricted to the production of digital or information assets. PROV allows a comprehensive trace of predecessor entities and transformations at any level of detail. In this paper we demonstrate the use of PROV for describing specimens managed for scientific observations. Two examples are considered: a geological sample which undergoes a typical preparation process for measurements of the concentration of a particular chemical substance, and the collection, taxonomic classification and eventual publication of an insect specimen. PROV enables the material that goes into the instrument to be linked back to the sample retrieved in the field. This complements the IGSN system, which focuses on registration of field sample identity to support the

  16. Southern Hemisphere Observations Towards the Accurate Alignment of the VLBI Frame and the Future Gaia Frame

    NASA Astrophysics Data System (ADS)

    de Witt, Aletha; Quick, Jonathan; Bertarini, Alessandra; Ploetz, Christian; Bourda, Géraldine; Charlot, Patrick

    2014-04-01

    The Gaia space astrometry mission to be launched on 19 December 2013 will construct for the first time a dense and highly-accurate extragalactic reference frame directly at optical wavelengths based on positions of thousands of QSOs. For consistency with the present International Celestial Reference Frame (ICRF) built from VLBI data, it will be essential that the Gaia frame be aligned onto the ICRF with the highest possible accuracy. To this end, a VLBI observing program dedicated to identifying the most suitable radio sources for this alignment has been initiated using the VLBA and the EVN. In addition, VLBI observations of suitable ICRF2 sources are being strengthened using the IVS network, leading to a total of 314 link sources. The purpose of this proposal is to extend such observing programs to the southern hemisphere, since the distribution of the present link sources is very sparse south of -30 degrees declination due to the geographical location of the VLBI arrays used for this project. As a first stage, we propose to observe 48 optically-bright radio sources in the far south using the LBA supplemented with the antennas of Warkworth (New Zealand) and O'Higgins (Antarctica). Our goal is to image these potential link sources and determine those that are the most point-like on VLBI scales and therefore suitable for the Gaia-ICRF alignment. We anticipate that further observations may be necessary in the future to extend the sample and refine the astrometry of these sources.

  17. Applying an accurate spherical model to gamma-ray burst afterglow observations

    NASA Astrophysics Data System (ADS)

    Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.

    2013-05-01

    We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r^-2. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.

  18. New accurate ephemerides for the Galilean satellites of Jupiter. II. Fitting the observations

    NASA Astrophysics Data System (ADS)

    Lainey, V.; Arlot, J. E.; Vienne, A.

    2004-11-01

    We present a new model of the four Galilean satellites Io, Europa, Ganymede and Callisto, able to deliver accurate ephemerides over a very long time span (several centuries). In the first paper (Lainey et al. 2004, A&A, 420, 1171) we gave the equations of the dynamical model. Here we present the fit of this model to the observations, covering more than one century starting from 1891. Our ephemerides, based on this first fit called L1, are available on the web page of the IMCCE at the URL http://www.imcce.fr/ephemeride_eng.html. Tables 4-7 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/427/371

  19. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

    Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that delivers high-frequency stimulation to a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.

  20. Simultaneous auroral observations described in the historical records of China, Japan and Korea from ancient times to AD 1700

    NASA Astrophysics Data System (ADS)

    Willis, D. M.; Stephenson, F. R.

    2000-01-01

    Early auroral observations recorded in various oriental histories are examined in order to search for examples of strictly simultaneous and indisputably independent observations of the aurora borealis from spatially separated sites in East Asia. In the period up to AD 1700, only five examples have been found of two or more oriental auroral observations from separate sites on the same night. These occurred during the nights of AD 1101 January 31, AD 1138 October 6, AD 1363 July 30, AD 1582 March 8 and AD 1653 March 2. The independent historical evidence describing observations of mid-latitude auroral displays at more than one site in East Asia on the same night provides virtually incontrovertible proof that auroral displays actually occurred on these five special occasions. This conclusion is corroborated by the good level of agreement between the detailed auroral descriptions recorded in the different oriental histories, which furnish essentially compatible information on both the colour (or colours) of each auroral display and its approximate position in the sky. In addition, the occurrence of auroral displays in Europe within two days of auroral displays in East Asia, on two (possibly three) out of these five special occasions, suggests that a substantial number of the mid-latitude auroral displays recorded in the oriental histories are associated with intense geomagnetic storms.

  1. Can All Cosmological Observations Be Accurately Interpreted with a Unique Geometry?

    NASA Astrophysics Data System (ADS)

    Fleury, Pierre; Dupuy, Hélène; Uzan, Jean-Philippe

    2013-08-01

    The recent analysis of the Planck results reveals a tension between the best fits for (Ωm0, H0) derived from the cosmic microwave background or baryonic acoustic oscillations on the one hand, and the Hubble diagram on the other hand. These observations probe the Universe on very different scales since they involve light beams of very different angular sizes; hence, the tension between them may indicate that they should not be interpreted the same way. More precisely, this Letter questions the accuracy of using only the (perturbed) Friedmann-Lemaître geometry to interpret all the cosmological observations, regardless of their angular or spatial resolution. We show that using an inhomogeneous “Swiss-cheese” model to interpret the Hubble diagram allows us to reconcile the inferred value of Ωm0 with the Planck results. Such an approach does not require us to invoke new physics nor to violate the Copernican principle.

  2. Can all cosmological observations be accurately interpreted with a unique geometry?

    PubMed

    Fleury, Pierre; Dupuy, Hélène; Uzan, Jean-Philippe

    2013-08-30

    The recent analysis of the Planck results reveals a tension between the best fits for (Ωm0, H0) derived from the cosmic microwave background or baryonic acoustic oscillations on the one hand, and the Hubble diagram on the other hand. These observations probe the Universe on very different scales since they involve light beams of very different angular sizes; hence, the tension between them may indicate that they should not be interpreted the same way. More precisely, this Letter questions the accuracy of using only the (perturbed) Friedmann-Lemaître geometry to interpret all the cosmological observations, regardless of their angular or spatial resolution. We show that using an inhomogeneous "Swiss-cheese" model to interpret the Hubble diagram allows us to reconcile the inferred value of Ωm0 with the Planck results. Such an approach does not require us to invoke new physics nor to violate the Copernican principle. PMID:24033020

  3. Accurate stellar masses for SB2 components: Interferometric observations for Gaia validation

    NASA Astrophysics Data System (ADS)

    Halbwachs, J.-L.; Boffin, H. M. J.; Le Bouquin, J.-B.; Famaey, B.; Salomon, J.-B.; Arenou, F.; Pourbaix, D.; Anthonioz, F.; Grellmann, R.; Guieu, S.; Guillout, P.; Jorissen, A.; Kiefer, F.; Lebreton, Y.; Mazeh, T.; Nebot Gómez-Morán, A.; Sana, H.; Tal-Or, L.

    2015-12-01

    A sample of about 70 double-lined spectroscopic binaries (SB2) is followed with radial velocity (RV) measurements, in order to derive the masses of their components when the astrometric measurements of Gaia become available. A subset of 6 SB2 was observed interferometrically with VLTI/PIONIER, and the components were separated for each binary. The RV measurements already obtained were combined with the interferometric observations, and the masses of the components were derived. The accuracies of the 12 masses are presently between 0.4 and 7%, and will be further improved in the future. These masses will be used to validate the masses obtained from Gaia. In addition, the parallaxes derived from the combined visual+spectroscopic orbits are compared with those from Hipparcos, and a mass-luminosity relation is derived in the infrared H band.

  4. How accurately do drivers evaluate their own driving behavior? An on-road observational study.

    PubMed

    Amado, Sonia; Arıkan, Elvan; Kaça, Gülin; Koyuncu, Mehmet; Turkan, B Nilay

    2014-02-01

    Self-assessment of driving skills has become a noteworthy research subject in traffic psychology, since by knowing their strengths and weaknesses, drivers can take efficient compensatory action to moderate risk and ensure safety in hazardous environments. The current study aims to investigate drivers' self-conception of their own driving skills and behavior in relation to expert evaluations of their actual driving, using a naturalistic and systematic observation method during an actual on-road driving session, and to assess the different aspects of driving via comprehensive scales sensitive to different specific aspects of driving. Male participants aged 19-63 years (N=158) attended an on-road driving session lasting approximately 80 min (45 km). During the driving session, drivers' errors and violations were recorded by an expert observer. At the end of the driving session, observers completed the driver evaluation questionnaire, while drivers completed the driving self-evaluation questionnaire and the Driver Behavior Questionnaire (DBQ). Low to moderate correlations between driver and observer evaluations of driving skills and behavior, mainly on errors and violations of speed and traffic lights, were found. Furthermore, the robust finding that drivers evaluate their driving performance as better than the expert does was replicated. Over-positive appraisal was higher among drivers with higher error/violation scores and among those evaluated by the expert as "unsafe". We suggest that the traffic environment might be regulated by increasing feedback indicators of errors and violations, which in turn might increase insight into driving performance. Improving self-awareness through training and feedback sessions might play a key role in reducing the probability of risk in driving activity.

  5. Extracting Accurate and Precise Topography from Lroc Narrow Angle Camera Stereo Observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Speyerer, E. J.; Robinson, M. S.; LROC Team

    2016-06-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that acquire meter-scale images. Stereo observations are acquired by imaging from two or more orbits, including at least one off-nadir slew. Digital terrain models (DTMs) generated from the stereo observations are controlled to Lunar Orbiter Laser Altimeter (LOLA) elevation profiles. With current processing methods, DTMs have absolute accuracies commensurate with the uncertainties of the LOLA profiles (~10 m horizontally and ~1 m vertically) and relative horizontal and vertical precisions better than the pixel scale of the DTMs (2 to 5 m). The NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. Overlapping DTMs of single stereo sets were also combined to form larger-area DTM mosaics, enabling detailed characterization of large geomorphic features and providing a key resource for future exploration planning. Currently, two percent of the lunar surface is imaged in NAC stereo, and continued acquisition of stereo observations will serve to strengthen our knowledge of the Moon and of the geologic processes that occur on all the terrestrial planets.

  6. OBSERVING SIMULATED PROTOSTARS WITH OUTFLOWS: HOW ACCURATE ARE PROTOSTELLAR PROPERTIES INFERRED FROM SEDs?

    SciTech Connect

    Offner, Stella S. R.; Robitaille, Thomas P.; Hansen, Charles E.; Klein, Richard I.; McKee, Christopher F.

    2012-07-10

    The properties of unresolved protostars and their local environment are frequently inferred from spectral energy distributions (SEDs) using radiative transfer modeling. In this paper, we use synthetic observations of realistic star formation simulations to evaluate the accuracy of properties inferred from fitting model SEDs to observations. We use ORION, an adaptive mesh refinement (AMR) three-dimensional gravito-radiation-hydrodynamics code, to simulate low-mass star formation in a turbulent molecular cloud including the effects of protostellar outflows. To obtain the dust temperature distribution and SEDs of the forming protostars, we post-process the simulations using HYPERION, a state-of-the-art Monte Carlo radiative transfer code. We find that the ORION and HYPERION dust temperatures typically agree within a factor of two. We compare synthetic SEDs of embedded protostars for a range of evolutionary times, simulation resolutions, aperture sizes, and viewing angles. We demonstrate that complex, asymmetric gas morphology leads to a variety of classifications for individual objects as a function of viewing angle. We derive best-fit source parameters for each SED through comparison with a pre-computed grid of radiative transfer models. While the SED models correctly identify the evolutionary stage of the synthetic sources as embedded protostars, we show that the disk and stellar parameters can be very discrepant from the simulated values, which is expected since the disk and central source are obscured by the protostellar envelope. Parameters such as the stellar accretion rate, stellar mass, and disk mass show better agreement, but can still deviate significantly, and the agreement may in some cases be artificially good due to the limited range of parameters in the set of model SEDs. 
Lack of correlation between the model and simulation properties in many individual instances cautions against overinterpreting properties inferred from SEDs for unresolved protostellar sources.

  7. Describing the Sequence of Cognitive Decline in Alzheimer’s Disease Patients: Results from an Observational Study

    PubMed Central

    Henneges, Carsten; Reed, Catherine; Chen, Yun-Fei; Dell’Agnello, Grazia; Lebrec, Jeremie

    2016-01-01

    Background: Improved understanding of the pattern of cognitive decline in Alzheimer’s disease (AD) would be useful to assist primary care physicians in explaining AD progression to patients and caregivers. Objective: To identify the sequence in which cognitive abilities decline in community-dwelling patients with AD. Methods: Baseline data were analyzed from 1,495 patients diagnosed with probable AD and a Mini-Mental State Examination (MMSE) score ≤ 26 enrolled in the 18-month observational GERAS study. Proportional odds logistic regression models were applied to model MMSE subscores (orientation, registration, attention and concentration, recall, language, and drawing) and the corresponding subscores of the cognitive subscale of the Alzheimer’s Disease Assessment Scale (ADAS-cog), using MMSE total score as the index of disease progression. Probabilities of impairment start and full impairment were estimated at each MMSE total score level. Results: From the estimated probabilities for each MMSE subscore as a function of the MMSE total score, the first aspect of cognition to start being impaired was recall, followed by orientation in time, attention and concentration, orientation in place, language, drawing, and registration. For full impairment in subscores, the sequence was recall, drawing, attention and concentration, orientation in time, orientation in place, registration, and language. The sequence of cognitive decline for the corresponding ADAS-cog subscores was remarkably consistent with this pattern. Conclusion: The sequence of cognitive decline in AD can be visualized in an animation using probability estimates for key aspects of cognition. This might be useful for clinicians to set expectations on disease progression for patients and caregivers. PMID:27079700

  8. Observation-driven adaptive differential evolution and its application to accurate and smooth bronchoscope three-dimensional motion tracking.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian; Mori, Kensaku

    2015-08-01

    This paper proposes an observation-driven adaptive differential evolution algorithm that fuses bronchoscopic video sequences, electromagnetic sensor measurements, and computed tomography images for accurate and smooth bronchoscope three-dimensional motion tracking. Currently, an electromagnetic tracker with a position sensor fixed at the bronchoscope tip is commonly used to estimate bronchoscope movements. The large tracking error incurred by using sensor measurements directly, which may be heavily degraded by patient respiratory motion and the magnetic field distortion of the tracker, limits clinical applications. How to effectively use sensor measurements for precise and stable bronchoscope electromagnetic tracking remains challenging. We here exploit an observation-driven adaptive differential evolution framework to address this challenge and boost the tracking accuracy and smoothness. Two points distinguish our framework from other adaptive differential evolution methods: (1) the current observation, comprising sensor measurements and bronchoscopic video images, is used in the mutation equation and in the fitness computation, respectively, and (2) the mutation factor and the crossover rate are determined adaptively on the basis of the current image observation. The experimental results demonstrate that our framework provides much more accurate and smooth bronchoscope tracking than the state-of-the-art methods. Our approach reduces the tracking error from 3.96 to 2.89 mm, improves the tracking smoothness from 4.08 to 1.62 mm, and increases the visual quality from 0.707 to 0.741. PMID:25660001
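
    The differential-evolution machinery this abstract builds on (mutation with factor F, binomial crossover with rate CR, greedy selection) can be sketched as a generic DE/rand/1/bin minimizer. The sketch below keeps F and CR fixed, whereas the paper adapts both from the current observation; that adaptation depends on their bronchoscope data and is not reproduced here, and all names are illustrative.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=200, rng=None):
    """Minimal DE/rand/1/bin minimizer.

    `bounds` is a sequence of (low, high) pairs, one per coordinate.
    In the paper's variant, F and CR would be re-estimated each
    generation from the current image/sensor observation; here they
    are constants.
    """
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    cost = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: combine three randomly chosen members
            a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
            # binomial crossover, forcing at least one mutant coordinate
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # greedy selection: keep the trial if it is no worse
            trial_cost = fitness(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

    For example, minimizing the 2-D sphere function `lambda v: float(np.sum(v**2))` over [-5, 5]² converges to near the origin within the default budget.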

  9. When continuous observations just won't do: developing accurate and efficient sampling strategies for the laying hen.

    PubMed

    Daigle, Courtney L; Siegford, Janice M

    2014-03-01

    Continuous observation is the most accurate way to determine animals' actual time budget and can provide a 'gold standard' representation of resource use, behavior frequency, and duration. Continuous observation is useful for capturing behaviors that are of short duration or occur infrequently. However, collecting continuous data is labor intensive and time consuming, making multiple individual or long-term data collection difficult. Six non-cage laying hens were video recorded for 15 h and behavioral data collected every 2 s were compared with data collected using scan sampling intervals of 5, 10, 15, 30, and 60 min and subsamples of 2 second observations performed for 10 min every 30 min, 15 min every 1 h, 30 min every 1.5 h, and 15 min every 2 h. Three statistical approaches were used to provide a comprehensive analysis to examine the quality of the data obtained via different sampling methods. General linear mixed models identified how the time budget from the sampling techniques differed from continuous observation. Correlation analysis identified how strongly results from the sampling techniques were associated with those from continuous observation. Regression analysis identified how well the results from the sampling techniques were associated with those from continuous observation, changes in magnitude, and whether a sampling technique had bias. Static behaviors were well represented with scan and time sampling techniques, while dynamic behaviors were best represented with time sampling techniques. Methods for identifying an appropriate sampling strategy based upon the type of behavior of interest are outlined and results for non-caged laying hens are presented.
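
    The comparison made in this study can be illustrated with a short sketch that builds a time budget from a continuous behavioral record and from an instantaneous scan-sampled subset of it. The synthetic record, the three behavior codes, and their probabilities below are invented for the example and do not come from the study's hens.

```python
import numpy as np

def time_budget(states, n_behaviors):
    """Fraction of samples spent in each behavior code."""
    return np.bincount(states, minlength=n_behaviors) / len(states)

def scan_sample(states, every_n):
    """Instantaneous scan sampling: keep every `every_n`-th sample."""
    return states[::every_n]

# Hypothetical 15 h record coded every 2 s (27000 samples) over three
# behaviors, with behavior 0 dominant (a long-duration, "static" behavior).
rng = np.random.default_rng(1)
record = rng.choice(3, size=27000, p=[0.7, 0.2, 0.1])

full = time_budget(record, 3)                    # continuous "gold standard"
# a 10-min scan interval keeps every 300th 2-s sample
scan = time_budget(scan_sample(record, 300), 3)
```

    Consistent with the abstract, the dominant (static) behavior's budget survives coarse scan sampling well; rare, short-duration behaviors are where the sampling strategies diverge from the continuous record.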

  11. X-ray and microwave emissions from the July 19, 2012 solar flare: Highly accurate observations and kinetic models

    NASA Astrophysics Data System (ADS)

    Gritsyk, P. A.; Somov, B. V.

    2016-08-01

    The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is several times lower than the observed one. Allowance for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ~5 × 10¹⁰ erg cm⁻² s⁻¹, which exceeds the values typical of the thick-target model without a reverse current by a factor of ~5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.

  12. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

    Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies only occupy a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7, MODIS), the accurate determination of the hotspot's size and temperature is, however, problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary most remotely sensed volcanic hotspots fall below. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi

  13. Towards a standard framework to describe behaviours in the common-sloth (Bradypus variegatus Schinz, 1825): novel interactions data observed in distinct fragments of the Atlantic forest, Brazil.

    PubMed

    Silva, S M; Clozato, C L; Moraes-Barros, N; Morgante, J S

    2013-08-01

    The common three-toed sloth is a widespread species, but locating and observing its individuals is greatly hindered by its biological features: its camouflaged pelage, slow and quiet movements, and strictly arboreal habits have resulted in the publication of sparse, fragmented, and unstandardized information on common sloth behaviour. We therefore propose an updated, standardized framework of behavioural categories for the study of the species. Furthermore, we describe two never-reported interaction behaviours: a probable mating/courtship ritual between a male and a female, and an apparent recognition behaviour between two males. Finally, we highlight the contribution of short-duration fieldwork to the ethological study of this elusive species.

  15. Simple Waveforms, Simply Described

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2008-01-01

    Since the first Lazarus Project calculations, it has been frequently noted that binary black hole merger waveforms are 'simple.' In this talk we examine some of the simple features of coalescence and merger waveforms from a variety of binary configurations. We suggest an interpretation of the waveforms in terms of an implicit rotating source. This allows a coherent description of both the inspiral waveforms, derivable from post-Newtonian (PN) calculations, and the numerically determined merger-ringdown. We focus particularly on similarities in the features of the various multipolar waveform components generated by various systems. The late-time phase evolution of most of these waveform components is accurately described with a simple analytic fit. We also discuss apparent relationships among phase and amplitude evolution. Taken together with PN information, the features we describe can provide an approximate analytic description of full coalescence waveforms, complementary to other analytic waveform approaches.

  16. CC/DFT Route toward Accurate Structures and Spectroscopic Features for Observed and Elusive Conformers of Flexible Molecules: Pyruvic Acid as a Case Study.

    PubMed

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Cimino, Paola; Penocchio, Emanuele; Puzzarini, Cristina

    2015-09-01

    The structures and relative stabilities as well as the rotational and vibrational spectra of the three low-energy conformers of pyruvic acid (PA) have been characterized using a state-of-the-art quantum-mechanical approach designed for flexible molecules. By making use of the available experimental rotational constants for several isotopologues of the most stable PA conformer, Tc-PA, the semiexperimental equilibrium structure has been derived. The latter provides a reference for the pure theoretical determination of the equilibrium geometries for all conformers, thus confirming for these structures an accuracy of 0.001 Å and 0.1 deg for bond lengths and angles, respectively. Highly accurate relative energies of all conformers (Tc-, Tt-, and Ct-PA) and of the transition states connecting them are provided along with the thermodynamic properties at low and high temperatures, thus leading to conformational enthalpies accurate to 1 kJ mol(-1). Concerning microwave spectroscopy, rotational constants accurate to about 20 MHz are provided for the Tt- and Ct-PA conformers, together with the computed centrifugal-distortion constants and dipole moments required to simulate their rotational spectra. For Ct-PA, vibrational frequencies in the mid-infrared region accurate to 10 cm(-1) are reported along with theoretical estimates for the transitions in the near-infrared range, and the corresponding infrared spectrum including fundamental transitions, overtones, and combination bands has been simulated. In addition to the new data described above, theoretical results for the Tc- and Tt-PA conformers are compared with all available experimental data to further confirm the accuracy of the hybrid coupled-cluster/density functional theory (CC/DFT) protocol applied in the present study. Finally, we discuss in detail the accuracy of computational models fully based on double-hybrid DFT functionals (mainly at the B2PLYP/aug-cc-pVTZ level) that avoid the use of very expensive CC

  18. Combined NMR-observation of cold denaturation in supercooled water and heat denaturation enables accurate measurement of ΔCp of protein unfolding.

    PubMed

    Szyperski, Thomas; Mills, Jeffrey L; Perl, Dieter; Balbach, Jochen

    2006-04-01

    Cold and heat denaturation of the double mutant Arg 3 → Glu/Leu 66 → Glu of cold shock protein Csp of Bacillus caldolyticus was monitored using 1D ¹H NMR spectroscopy in the temperature range from −12 °C in supercooled water up to +70 °C. The fraction of unfolded protein, f(u), was determined as a function of the temperature. The data characterizing the unfolding transitions could be consistently interpreted in the framework of two-state models: cold and heat denaturation temperatures were determined to be −11 °C and 39 °C, respectively. A joint fit to both cold and heat transition data enabled the accurate spectroscopic determination of the heat capacity difference between native and denatured state, ΔCp of unfolding. The approach described in this letter, or a variant thereof, is generally applicable and promises to be of value for routine studies of protein folding.
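
    The joint two-state analysis rests on the Gibbs-Helmholtz expression for the free energy of unfolding, in which a sufficiently large ΔCp bends ΔG(T) downward at low temperature and so produces a cold transition as well as a heat transition. The sketch below uses illustrative, not fitted, parameters chosen only so that the two midpoints roughly match the reported −11 °C and +39 °C.

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def dG_unfold(T, Tm, dHm, dCp):
    """Gibbs-Helmholtz free energy of unfolding for a two-state model.

    Tm: heat-denaturation midpoint (K); dHm: van't Hoff enthalpy at Tm
    (kJ/mol); dCp: heat-capacity change of unfolding (kJ mol^-1 K^-1).
    dG(Tm) = 0 by construction; a large dCp yields a second zero at
    low T, i.e. cold denaturation.
    """
    return dHm * (1 - T / Tm) + dCp * ((T - Tm) - T * np.log(T / Tm))

def fraction_unfolded(T, Tm, dHm, dCp):
    """Fraction of unfolded protein from the two-state equilibrium."""
    K = np.exp(-dG_unfold(T, Tm, dHm, dCp) / (R * T))
    return K / (1 + K)

# Illustrative parameters (NOT the paper's fit): midpoints come out near
# 262 K (-11 C, cold) and 312 K (+39 C, heat).
Tm, dHm, dCp = 312.0, 158.0, 6.0
T = np.linspace(255.0, 345.0, 200)
fu = fraction_unfolded(T, Tm, dHm, dCp)
```

    Fitting both transitions jointly, as the abstract describes, pins down dCp far more tightly than the heat transition alone, because dCp controls the curvature of dG(T) between the two midpoints.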

  19. CLARREO Cornerstone of the Earth Observing System: Measuring Decadal Change Through Accurate Emitted Infrared and Reflected Solar Spectra and Radio Occultation

    NASA Technical Reports Server (NTRS)

    Sandford, Stephen P.

    2010-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is one of four Tier 1 missions recommended by the recent NRC Decadal Survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to provide accurate, broadly acknowledged climate records that are used to enable validated long-term climate projections that become the foundation for informed decisions on mitigation and adaptation policies that address the effects of climate change on society. The CLARREO mission accomplishes this critical objective through rigorous SI traceable decadal change observations that are sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. These same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. For the first time CLARREO will make highly accurate, global, SI-traceable decadal change observations sensitive to the most critical, but least understood, climate forcings, responses, and feedbacks. The CLARREO breakthrough is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. The required accuracy levels are determined so that climate trend signals can be detected against a background of naturally occurring variability. Climate system natural variability therefore determines what level of accuracy is overkill, and what level is critical to obtain. In this sense, the CLARREO mission requirements are considered optimal from a science value perspective. The accuracy for decadal change traceability to SI standards includes uncertainties associated with instrument calibration, satellite orbit sampling, and analysis methods. Unlike most space missions, the CLARREO requirements are driven not by the instantaneous accuracy of the measurements, but by accuracy in

  20. Accurate detection of spatio-temporal variability of plant phenology by using satellite-observed daily green-red vegetation index (GRVI) in Japan

    NASA Astrophysics Data System (ADS)

    Nagai, S.; Saitoh, T. M.; Nasahara, K. N.; Inoue, T.; Suzuki, R.

    2015-12-01

    To evaluate the spatio-temporal variability of biodiversity and of ecosystem functioning and services in deciduous forests, accurate detection of the timing of plant phenology, such as leaf flushing, coloring, and fall, is important from plot to continental scales. Here, (1) we detected the spatio-temporal variability in the timing of the start (SGS) and end (EGS) of the growing season in Japan from 2001 to 2014 by analyzing Terra and Aqua/MODIS satellite-observed daily green-red vegetation index (GRVI) with a 500-m spatial resolution. (2) We examined the characteristics of the timing of SGS and EGS in deciduous forests along vertical (altitude) and horizontal (latitude) gradients and their sensitivity to air temperature. (3) We evaluated the relationship between the spatial distribution of leaf-coloring phenology derived from Landsat-8/OLI satellite-observed GRVI with a 30-m spatial resolution on 23 November 2014 and leaf-coloring information published on web sites in Kanagawa Prefecture, Japan. We found that (1) changes along the vertical and horizontal gradients in the timing of SGS tended to be larger than those of EGS; (2) the sensitivity of the timing of SGS to air temperature was much greater than that of EGS; and (3) leaf-coloring information published on web sites covering multiple points was useful for verification of leaf-coloring phenology derived from satellite-observed GRVI in relation to the altitude gradient in mountainous regions.
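
    The green-red vegetation index used here has a simple definition; the sketch below (with made-up reflectance values, not MODIS measurements) shows how GRVI changes sign between leafy and leafless canopies, which is what makes SGS/EGS detection from its time series possible.

```python
def grvi(green: float, red: float) -> float:
    """Green-red vegetation index: (G - R) / (G + R).

    Positive over green canopies (green reflectance dominates),
    negative over senescent or leafless vegetation.
    """
    return (green - red) / (green + red)

# Illustrative surface reflectances (hypothetical values):
print(grvi(0.12, 0.06))   # green leaves  -> positive
print(grvi(0.08, 0.12))   # autumn leaves -> negative
```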

  1. Describing Cognitive Structure.

    ERIC Educational Resources Information Center

    White, Richard T.

    This paper discusses questions pertinent to a definition of cognitive structure as the knowledge one possesses and the manner in which it is arranged, and considers how to select or devise methods of describing cognitive structure. The main purpose in describing cognitive structure is to see whether differences in memory (or cognitive structure)…

  2. Describe Your Favorite Teacher.

    ERIC Educational Resources Information Center

    Dill, Isaac; Dill, Vicky

    1993-01-01

    A third grader describes Ms. Gonzalez, his favorite teacher, who left to accept a more lucrative teaching assignment. Ms. Gonzalez' butterflies unit covered everything from songs about social butterflies to paintings of butterfly wings, anatomy studies, and student haiku poems and biographies. Students studied biology by growing popcorn plants…

  3. New described dermatological disorders.

    PubMed

    Gönül, Müzeyyen; Cevirgen Cemil, Bengu; Keseroglu, Havva Ozge; Kaya Akis, Havva

    2014-01-01

    Many advances in dermatology have been made in recent years. In the present review article, newly described disorders from the last six years are presented in detail. We divided these reports into different sections, including syndromes, autoinflammatory diseases, tumors, and unclassified diseases. Syndromes included are "circumferential skin creases Kunze type" and "unusual type of pachyonychia congenita or a new syndrome"; autoinflammatory diseases include "chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature (CANDLE) syndrome," "pyoderma gangrenosum, acne, and hidradenitis suppurativa (PASH) syndrome," and "pyogenic arthritis, pyoderma gangrenosum, acne, and hidradenitis suppurativa (PAPASH) syndrome"; tumors include "acquired reactive digital fibroma," "onychocytic matricoma and onychocytic carcinoma," "infundibulocystic nail bed squamous cell carcinoma," and "acral histiocytic nodules"; unclassified disorders include "saurian papulosis," "symmetrical acrokeratoderma," "confetti-like macular atrophy," "skin spicules," and "erythema papulosa semicircularis recidivans." PMID:25243162

  4. New Described Dermatological Disorders

    PubMed Central

    Cevirgen Cemil, Bengu; Keseroglu, Havva Ozge; Kaya Akis, Havva

    2014-01-01

    Many advances in dermatology have been made in recent years. In the present review article, newly described disorders from the last six years are presented in detail. We divided these reports into different sections, including syndromes, autoinflammatory diseases, tumors, and unclassified diseases. Syndromes included are “circumferential skin creases Kunze type” and “unusual type of pachyonychia congenita or a new syndrome”; autoinflammatory diseases include “chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature (CANDLE) syndrome,” “pyoderma gangrenosum, acne, and hidradenitis suppurativa (PASH) syndrome,” and “pyogenic arthritis, pyoderma gangrenosum, acne, and hidradenitis suppurativa (PAPASH) syndrome”; tumors include “acquired reactive digital fibroma,” “onychocytic matricoma and onychocytic carcinoma,” “infundibulocystic nail bed squamous cell carcinoma,” and “acral histiocytic nodules”; unclassified disorders include “saurian papulosis,” “symmetrical acrokeratoderma,” “confetti-like macular atrophy,” “skin spicules,” and “erythema papulosa semicircularis recidivans.” PMID:25243162

  5. 3D models of slow motions in the Earth's crust and upper mantle in the source zones of seismically active regions and their comparison with highly accurate observational data: II. Results of numerical calculations

    NASA Astrophysics Data System (ADS)

    Molodenskii, S. M.; Molodenskii, M. S.; Begitova, T. A.

    2016-09-01

    In the first part of the paper, a new method was developed for solving the inverse problem of coseismic and postseismic deformations in the real (imperfectly elastic, radially and horizontally heterogeneous, self-gravitating) Earth with hydrostatic initial stresses from highly accurate modern satellite data. The method is based on the decomposition of the sought parameters in an orthogonalized basis. A method was also suggested for estimating the ambiguity of the solution of the inverse problem for coseismic and postseismic deformations. To obtain this estimate, the orthogonal complement is constructed to the n-dimensional space spanned by the system of functional derivatives of the residuals in the system of n observed and model data on the coseismic and postseismic displacements at a variety of sites on the ground surface with small variations in the models. Below, we present the results of the numerical modeling of the elastic displacements of the ground surface, based on calculating Green's functions of the real Earth for a plane dislocation surface and different orientations of the displacement vector as described in part I of the paper. The calculations were conducted for the model of a horizontally homogeneous but radially heterogeneous self-gravitating Earth with hydrostatic initial stresses and the mantle rheology described by the Lomnitz logarithmic creep function according to (M. Molodenskii, 2014). We compare our results with previous numerical calculations (Okada, 1985; 1992) for the simplest model of a perfectly elastic, nongravitating, homogeneous Earth. It is shown that for source depths from the first hundreds of kilometers and magnitudes of about 8.0 and higher, the discrepancies significantly exceed the errors of the observations and should therefore be taken into account. We present examples of the numerical calculations of the creep function of the crust and upper mantle for the coseismic deformations.

  6. Using neural networks to describe tracer correlations

    NASA Astrophysics Data System (ADS)

    Lary, D. J.; Müller, M. D.; Mussa, H. Y.

    2003-11-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.

  7. Using neural networks to describe tracer correlations

    NASA Astrophysics Data System (ADS)

    Lary, D. J.; Müller, M. D.; Mussa, H. Y.

    2004-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
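
    A one-hidden-layer network of the size quoted in these records can be sketched in a few lines. The toy below trains an eight-node MLP by plain full-batch gradient descent on a synthetic, smooth tracer-tracer relation; the actual study used four inputs (latitude, pressure, time of year, CH4 v.m.r.) and Quickprop learning, so everything here is a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a compact tracer-tracer correlation:
# "N2O" as a smooth nonlinear function of a scaled "CH4" v.m.r.
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.tanh(3.0 * x)

# One hidden layer with eight tanh nodes, trained by full-batch
# gradient descent on the mean squared error.
W1 = 0.5 * rng.standard_normal((1, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = 0.0

lr = 0.2
for _ in range(8000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    yhat = h @ W2 + b2                # network output
    d = 2.0 * (yhat - y) / len(x)     # dLoss/dyhat
    dh = (d @ W2.T) * (1.0 - h**2)    # backprop through tanh
    W2 -= lr * (h.T @ d);  b2 -= lr * d.sum()
    W1 -= lr * (x.T @ dh); b1 -= lr * dh.sum(axis=0)

r = np.corrcoef(yhat.ravel(), y.ravel())[0, 1]
print(f"correlation coefficient: {r:.4f}")
```

    Even this minimal setup fits a smooth one-dimensional correlation tightly, which illustrates why a small network suffices once the right physical coordinates are supplied as inputs.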

  8. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.

  9. Some properties of negative cloud-to-ground flashes from observations of a local thunderstorm based on accurate-stroke-count studies

    NASA Astrophysics Data System (ADS)

    Zhu, Baoyou; Ma, Ming; Xu, Weiwei; Ma, Dong

    2015-12-01

    Properties of negative cloud-to-ground (CG) lightning flashes, in terms of the number of strokes per flash, inter-stroke intervals, and the relative intensity of subsequent and first strokes, were presented by accurate-stroke-count studies based on all 1085 negative flashes from a local thunderstorm. The percentage of single-stroke flashes and the stroke multiplicity evolved significantly during the whole life cycle of the studied thunderstorm. The occurrence probability of negative CG flashes decreased exponentially with the increasing number of strokes per flash. About 30.5% of negative CG flashes contained only one stroke, and the number of strokes per flash averaged 3.3. In a subset of 753 negative multiple-stroke flashes, about 41.4% contained at least one subsequent stroke stronger than the corresponding first stroke. Subsequent strokes tended to decrease in strength with their orders, and the ratio of subsequent to first stroke peaks presented a geometric mean value of 0.52. Interestingly, negative CG flashes of higher multiplicity tended to have stronger initial strokes. A total of 2525 inter-stroke intervals showed a more or less log-normal distribution and gave a geometric mean value of 62 ms. For CG flashes of a particular multiplicity, geometric mean inter-stroke intervals tended to decrease with the increasing number of strokes per flash, while intervals associated with higher-order strokes tended to be larger than those associated with lower-order strokes.
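
    The geometric-mean statistic quoted for the inter-stroke intervals is the exponential of the mean log interval, the natural location estimate for log-normally distributed data. The sketch below uses synthetic intervals drawn around a 62 ms median (simulated values, not the observed dataset).

```python
import math
import random

random.seed(42)

# Simulate 2525 inter-stroke intervals (ms), log-normally distributed
# around a 62 ms median; sigma is an arbitrary illustrative spread.
median_ms, sigma = 62.0, 0.6
intervals = [random.lognormvariate(math.log(median_ms), sigma)
             for _ in range(2525)]

# Geometric mean = exp(arithmetic mean of the log intervals);
# for a log-normal sample it estimates the median of the distribution.
geo_mean = math.exp(sum(math.log(t) for t in intervals) / len(intervals))
print(f"geometric mean interval: {geo_mean:.1f} ms")
```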

  10. Masses of the components of SB2 binaries observed with Gaia - III. Accurate SB2 orbits for 10 binaries and masses of HIP 87895

    NASA Astrophysics Data System (ADS)

    Kiefer, F.; Halbwachs, J.-L.; Arenou, F.; Pourbaix, D.; Famaey, B.; Guillout, P.; Lebreton, Y.; Nebot Gómez-Morán, A.; Mazeh, T.; Salomon, J.-B.; Soubiran, C.; Tal-Or, L.

    2016-05-01

    In anticipation of the Gaia astrometric mission, a large sample of spectroscopic binaries has been observed since 2010 with the Spectrographe pour l'Observation des PHénomènes des Intérieurs Stellaires et des Exoplanètes spectrograph at the Haute-Provence Observatory. Our aim is to derive the orbital elements of double-lined spectroscopic binaries (SB2s) with an accuracy sufficient to finally obtain the masses of the components with relative errors as small as 1 per cent when the astrometric measurements of Gaia are taken into account. In this paper, we present the results from five years of observations of 10 SB2 systems with periods ranging from 37 to 881 d. Using the TODMOR algorithm, we computed radial velocities from the spectra, and then derived the orbital elements of these binary systems. The minimum masses of the components are then obtained with an accuracy better than 1.2 per cent for the 10 binaries. Combining the radial velocities with existing interferometric measurements, we derived the masses of the primary and secondary components of HIP 87895 with an accuracy of 0.98 and 1.2 per cent, respectively.

  11. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  12. Accomplishments of the MUSICA project to provide accurate, long-term, global and high-resolution observations of tropospheric {H2O,δD} pairs - a review

    NASA Astrophysics Data System (ADS)

    Schneider, Matthias; Wiegele, Andreas; Barthlott, Sabine; González, Yenny; Christner, Emanuel; Dyroff, Christoph; García, Omaira E.; Hase, Frank; Blumenstock, Thomas; Sepúlveda, Eliezer; Mengistu Tsidu, Gizaw; Takele Kenea, Samuel; Rodríguez, Sergio; Andrey, Javier

    2016-07-01

    In the lower/middle troposphere, {H2O,δD} pairs are good proxies for moisture pathways; however, their observation, in particular when using remote sensing techniques, is challenging. The project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water) addresses this challenge by integrating remote sensing with in situ measurement techniques. The aim is to retrieve calibrated tropospheric {H2O,δD} pairs from the middle infrared spectra measured from the ground by FTIR (Fourier transform infrared) spectrometers of the NDACC (Network for the Detection of Atmospheric Composition Change) and the thermal nadir spectra measured by IASI (Infrared Atmospheric Sounding Interferometer) aboard the MetOp satellites. In this paper, we present the final MUSICA products, and discuss the characteristics and potential of the NDACC/FTIR and MetOp/IASI {H2O,δD} data pairs. First, we briefly summarize the particularities of an {H2O,δD} pair retrieval. Second, we show that the remote sensing data of the final product version are absolutely calibrated with respect to H2O and δD in situ profile references measured in the subtropics, between 0 and 7 km. Third, we reveal that the {H2O,δD} pair distributions obtained from the different remote sensors are consistent and allow distinct lower/middle tropospheric moisture pathways to be identified in agreement with multi-year in situ references. Fourth, we document the possibilities of the NDACC/FTIR instruments for climatological studies (due to long-term monitoring) and of the MetOp/IASI sensors for observing diurnal signals on a quasi-global scale and with high horizontal resolution. Fifth, we discuss the risk of misinterpreting {H2O,δD} pair distributions due to incomplete processing of the remote sensing products.
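
    δD, the quantity paired with H2O throughout this record, is the HDO/H2O isotope ratio expressed as a per-mil deviation from the VSMOW standard; a minimal computation of the standard definition:

```python
R_VSMOW = 155.76e-6  # D/H isotope ratio of the VSMOW standard

def delta_d(r_sample: float) -> float:
    """delta-D in per mil: (R_sample / R_VSMOW - 1) * 1000."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

# Ocean-like source vapor vs. an isotopically depleted sample
print(delta_d(R_VSMOW))          # 0.0 per mil by definition
print(delta_d(0.85 * R_VSMOW))   # -150.0 per mil (depleted)
```

    Depleted (strongly negative) δD at a given humidity points to condensation history along the moisture pathway, which is why the pair carries more information than H2O alone.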

  13. Model describes subsea control dynamics

    SciTech Connect

    Not Available

    1988-02-01

    A mathematical model of the hydraulic control systems for subsea completions and their umbilicals has been developed and applied successfully to Jabiru and Challis field production projects in the Timor Sea. The model overcomes the limitations of conventional linear steady state models and yields for the hydraulic system an accurate description of its dynamic response, including the valve shut-in times and the pressure transients. Results of numerical simulations based on the model are in good agreement with measurements of the dynamic response of the tree valves and umbilicals made during land testing.

  14. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the roughly 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including as yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph, and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD), which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, and NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  15. A Fibre-Reinforced Poroviscoelastic Model Accurately Describes the Biomechanical Behaviour of the Rat Achilles Tendon

    PubMed Central

    Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna

    2015-01-01

    Background Computational models of Achilles tendons can help understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon’s biomechanical response. However, available models of Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the biomechanical behaviour of the rat Achilles tendon found experimentally. Materials and Methods We developed a new material model of the Achilles tendon, which considers the tendon’s main constituents namely: water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and soft tendon behaviour at reduced strain-rates compared to higher strain-rates that produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour where larger strains produced slower relaxation rates compared to smaller strain levels. 
Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon’s viscoelastic response. In conclusion, this model can capture the repetitive loading and unloading behaviour of intact and healthy Achilles tendons, which is a critical first step towards understanding tendon homeostasis and function as this biomechanical response changes in diseased tendons. PMID:26030436
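
    Calibrating material parameters by minimising an error norm against measured force data, as done for the tendon models, can be illustrated with a much simpler one-parameter example: fitting the relaxation time of a single-exponential stress decay to noisy synthetic data by grid search on the RMS error. The tendon study fitted a full fibre-reinforced poroviscoelastic finite element model, not this toy curve.

```python
import math
import random

random.seed(1)

# Synthetic stress-relaxation data:
# sigma(t) = s_inf + (s0 - s_inf) * exp(-t / tau), plus noise.
TAU_TRUE, S0, S_INF = 4.0, 10.0, 3.0
times = [0.5 * i for i in range(40)]
data = [S_INF + (S0 - S_INF) * math.exp(-t / TAU_TRUE)
        + random.gauss(0.0, 0.05) for t in times]

def rms(tau: float) -> float:
    """Root-mean-square error between model and data for a given tau."""
    errs = [(S_INF + (S0 - S_INF) * math.exp(-t / tau) - d) ** 2
            for t, d in zip(times, data)]
    return math.sqrt(sum(errs) / len(errs))

# One-parameter grid search over candidate relaxation times
candidates = [0.5 + 0.05 * i for i in range(200)]
tau_best = min(candidates, key=rms)
print(f"fitted tau = {tau_best:.2f} (true {TAU_TRUE})")
```

    Real calibrations replace the grid search with a proper optimiser and the closed-form curve with a finite element simulation, but the objective, minimising the RMS between model output and experimental force, is the same.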

  16. How to describe disordered structures

    PubMed Central

    Nishio, Kengo; Miyazaki, Takehide

    2016-01-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way. PMID:27064833

  17. How to describe disordered structures.

    PubMed

    Nishio, Kengo; Miyazaki, Takehide

    2016-01-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way. PMID:27064833

  18. Describing ethnicity in health research.

    PubMed

    Bradby, Hannah

    2003-02-01

    Commentators have criticised the terminology used for the classification of ethnic and racialised groups in health research for a number of years. The shortcomings of fixed-response categories include the reproduction of racialised categorisations, overemphasis of homogeneity within groups and contrast between them, and failure to offer terms with which people identify and which can express complex identities. The historical injustices against black and minority groups are reflected in terminology and explicitly recognised when discussing 'race' as a social construction. The exaggeration of homogeneity within groups and contrast between them is a racialising effect of fixed classifications. Self-assigned ethnic group avoids some of these difficulties by allowing multiple affiliations to be described, but introduces the costs of processing free text. The context-dependent nature of individual ethnic identity makes comparison problematic. Researcher-assigned ethnicity can increase comparability and consistency but may be at odds with self-identity. The complexity of ethnicity itself and of its relationship with socio-economic group and racism makes proxy measures inevitably inadequate. If researchers continue to try to capture the complex and contextual detail of ethnicity, it may become clear that the general concept of ethnicity covers such a wide and specific range of experiences as to render it of limited use in making comparisons through time or across cultures.

  19. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
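
    The idea of enforcing monotonicity by limiting the cubic's endpoint slopes can be sketched with the classical Fritsch-Carlson-style limiter (harmonic mean of adjacent secants, zero slope at data extrema). Huynh's algorithms refine this with median-based limiters to keep full accuracy near extrema, which this simple sketch deliberately does not attempt.

```python
import bisect

def monotone_cubic(xs, ys, t):
    """Evaluate a monotonicity-preserving piecewise cubic Hermite
    interpolant at t. Node slopes use the harmonic-mean limiter
    (Fritsch-Carlson style): zero at local extrema of the data."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    sec = [(ys[i + 1] - ys[i]) / h[i] for i in range(n - 1)]  # secants
    d = [0.0] * n
    d[0], d[-1] = sec[0], sec[-1]
    for i in range(1, n - 1):
        if sec[i - 1] * sec[i] > 0:          # same sign: harmonic mean
            d[i] = 2.0 / (1.0 / sec[i - 1] + 1.0 / sec[i])
        # opposite signs or zero: slope stays 0 (local extremum)
    # locate the interval containing t and evaluate the Hermite cubic
    i = min(max(bisect.bisect_right(xs, t) - 1, 0), n - 2)
    s = (t - xs[i]) / h[i]
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return (h00 * ys[i] + h10 * h[i] * d[i]
            + h01 * ys[i + 1] + h11 * h[i] * d[i + 1])

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 0.1, 0.9, 1.0]
samples = [monotone_cubic(xs, ys, 3.0 * k / 60) for k in range(61)]
print(all(b >= a - 1e-12 for a, b in zip(samples, samples[1:])))
```

    The limiter is exactly where the accuracy loss described in the abstract comes from: forcing a zero slope at a data extremum clips the cubic, and the median-function framework relaxes that clipping without giving up monotonicity.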

  20. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
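
    The accuracy-order language above can be made concrete with a generic example (not one of the paper's schemes): a fourth-order central difference stencil recovers the derivative of a smooth signal far more accurately than the second-order stencil at the same grid spacing.

```python
import math

def d1_second_order(f, x, h):
    """Second-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_fourth_order(f, x, h):
    """Fourth-order central difference for f'(x):
    (-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)) / (12 h)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, h = 1.0, 1e-2
exact = math.cos(x)  # derivative of sin at x
e2 = abs(d1_second_order(math.sin, x, h) - exact)
e4 = abs(d1_fourth_order(math.sin, x, h) - exact)
print(e2, e4)  # the fourth-order error is several orders of magnitude smaller
```

    Halving h cuts the second-order error by about 4x but the fourth-order error by about 16x, which is why high-order schemes tolerate coarse grids (few points per wavelength) over long propagation times.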

  1. Quantum formalism to describe binocular rivalry.

    PubMed

    Manousakis, Efstratios

    2009-11-01

    On the basis of the general character and operation of the process of perception, a formalism is sought to mathematically describe the subjective or abstract/mental process of perception. It is shown that the formalism of orthodox quantum theory of measurement, where the observer plays a key role, is a broader mathematical foundation which can be adopted to describe the dynamics of the subjective experience. The mathematical formalism describes the psychophysical dynamics of the subjective or cognitive experience as communicated to us by the subject. Subsequently, the formalism is used to describe simple perception processes and, in particular, to describe the probability distribution of dominance duration obtained from the testimony of subjects experiencing binocular rivalry. Using this theory and parameters based on known values of neuronal oscillation frequencies and firing rates, the calculated probability distribution of dominance duration of rival states in binocular rivalry under various conditions is found to be in good agreement with available experimental data. This theory naturally explains an observed marked increase in dominance duration in binocular rivalry upon periodic interruption of stimulus and yields testable predictions for the distribution of perceptual alternation in time. PMID:19520143

  2. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). Predicting the ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models identify sensitive local-scale parameters, which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study is that the range of local-scale model parameter results can be used to define the sensitivity and uncertainty of the large-scale watershed model. Further, this example shows how research can be structured to yield scientific results describing complex ecosystems and landscapes, where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios that examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Developing accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  3. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions for the masers, which can be compared with other maser positions from HOPS, MMB and MALT-45, and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  4. 3D models of slow motions in the Earth's crust and upper mantle in the source zones of seismically active regions and their comparison with highly accurate observational data: I. Main relationships

    NASA Astrophysics Data System (ADS)

    Molodenskii, S. M.; Molodenskii, M. S.; Begitova, T. A.

    2016-09-01

    Constructing detailed models for postseismic and coseismic deformations of the Earth's surface has become particularly important because of the recently established possibility to continuously monitor the tectonic stresses in the source zones based on the data on the time variations in the tidal tilt amplitudes. Below, a new method is suggested for solving the inverse problem about the coseismic and postseismic deformations in the real non-ideally elastic, radially and horizontally heterogeneous, self-gravitating Earth with a hydrostatic distribution of the initial stresses from the satellite data on the ground surface displacements. The solution of this problem is based on decomposing, in orthogonal bases, the parameters determining the geometry of the fault surface, the distribution of the dislocation vector on this surface, and the elastic moduli in the source. The suggested approach includes four steps: 1. Calculating (by the perturbation method) the variations in Green's function for the radial and tangential ground surface displacements with small 3D variations in the mechanical parameters and geometry of the source area (i.e., calculating the functional derivatives of the three components of Green's function on the surface from the distributions of the elastic moduli and creep function within the volume of the source area and Burgers' vector on the surface of the dislocations); 2. Successive orthogonalization of the functional derivatives; 3. Passing from the decompositions of the residuals between the observed and modeled surface displacements in the system of nonorthogonalized functional derivatives to their decomposition in the system of orthogonalized derivatives, and finding the corrections to the distributions of the sought parameters from the coefficients of their decompositions in the orthogonalized basis; and 4. Analyzing the ambiguity of the inverse problem solution by constructing the orthogonal complement to the obtained basis. The described
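
    In discretized form, the orthogonalization and decomposition steps (2 and 3 above) reduce to linear algebra on the matrix of functional (Fréchet) derivatives. A minimal sketch with synthetic numbers, not the authors' perturbation-theory machinery:

```python
import numpy as np

def orthogonalized_update(jacobian, residual):
    """Orthogonalize the columns of the sensitivity matrix (QR factorization,
    equivalent to Gram-Schmidt), expand the data residual in the orthogonal
    basis, and map the coefficients back to parameter corrections."""
    q, r = np.linalg.qr(jacobian)
    coeffs = q.T @ residual            # decomposition in the orthogonalized basis
    return np.linalg.solve(r, coeffs)  # corrections in the original parameters

# Tiny synthetic example: 6 surface-displacement data, 3 source parameters.
rng = np.random.default_rng(1)
jac = rng.normal(size=(6, 3))
true_dm = np.array([0.5, -1.0, 2.0])
resid = jac @ true_dm                  # noise-free residual for the test
dm = orthogonalized_update(jac, resid)
print(dm)
```

With noise-free data and a full-rank sensitivity matrix, the corrections reproduce the true parameters; the null space of that matrix (step 4) is what makes the real inverse problem ambiguous.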

  5. [Who really first described lesser blood circulation?].

    PubMed

    Masić, Izet; Dilić, Mirza

    2007-01-01

    More than 740 years ago, Ibn al-Nafis (1210-1288), professor and director of the Al Mansouri Hospital in Cairo, described the lesser (pulmonary) circulation in his treatise on the pulse. His name appears frequently in popular web search engines, especially in English, yet the majority of citations about Ibn al-Nafis are in Arabic or Turkish, although his discovery is of worldwide importance. The author Masić I. (1993) is among the few who emphasized this event in an indexed journal, a point also debated by authors from Great Britain and the USA in the respected journal Annals of Internal Medicine. Most citations instead credit two later "describers" or "discoverers" of the pulmonary circulation: Michael Servetus (1511-1553), physician and theologian, and William Harvey (1578-1657), whose "Exercitatio anatomica de motu cordis et sanguinis in animalibus", published in 1628, described the circulatory system. For his scientific work, Ibn al-Nafis was called the "Second Avicenna". Over the centuries some of his works were translated into Latin, and some were reprinted in Arabic; Professor Fuat Sezgin of Frankfurt published a compendium of Ibn al-Nafis's papers in 1997, and Masić I. (1997) published a monograph on him. The significance of Ibn al-Nafis's epochal discovery lies in the fact that it rests solely on deductive reasoning, because his description of the lesser circulation did not arise from observation of corpses during dissection; it is known that he paid no attention to Galen's theories of the blood circulation. His prophetic sentence reads: "If I did not know that my works will last ten thousand years after me, I would not have written them". Sapienti sat.

  6. Accurate and occlusion-robust multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.

  7. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  8. Cellular automata to describe seismicity: A review

    NASA Astrophysics Data System (ADS)

    Jiménez, Abigail

    2013-12-01

    Cellular Automata have been used in the literature to describe seismicity. We first introduce Cellular Automata historically and provide some important definitions. We then review the most important models, most of them variations of the spring-block model proposed by Burridge and Knopoff, and describe the most important results obtained from them. We discuss their relation to criticality and also describe some models that try to reproduce real data.
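
    As a concrete illustration, here is a minimal Olami-Feder-Christensen cellular automaton, a non-conservative lattice version of the Burridge-Knopoff spring-block model; the grid size and parameter values below are arbitrary choices:

```python
import numpy as np

def ofc_step(stress, alpha=0.2, threshold=1.0):
    """One drive-and-relax cycle of the Olami-Feder-Christensen automaton.
    Returns the avalanche size (number of topplings)."""
    # Uniform drive: load all blocks until the most stressed one fails.
    stress += threshold - stress.max()
    size = 0
    while True:
        failing = np.argwhere(stress >= threshold)
        if failing.size == 0:
            return size
        for i, j in failing:
            s = stress[i, j]
            stress[i, j] = 0.0
            # Pass a fraction alpha of the released stress to each of the four
            # neighbours; alpha < 0.25 makes the automaton non-conservative.
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < stress.shape[0] and 0 <= nj < stress.shape[1]:
                    stress[ni, nj] += alpha * s
            size += 1

rng = np.random.default_rng(0)
grid = rng.uniform(0.0, 1.0, size=(32, 32))
sizes = [ofc_step(grid) for _ in range(2000)]
print("largest avalanche:", max(sizes))
```

After a transient, the avalanche-size statistics of such models develop the broad, power-law-like distributions that motivate the connection to criticality discussed in the review.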

  9. Systematically describing gross lesions in corals

    USGS Publications Warehouse

    Work, T.; Aeby, G.

    2006-01-01

    Many coral diseases are characterized based on gross descriptions, and, given the lack or difficulty of applying existing laboratory tools to understanding the causes of coral diseases, most new diseases will continue to be described based on their appearance in the field. Unfortunately, many existing descriptions of coral disease are ambiguous or open to subjective interpretation, making comparisons between oceans problematic. One reason for this is that the process of describing lesions is often confused with that of assigning causality for the lesion. However, causality is usually not established in the field and requires additional laboratory tests. Because a concise and objective morphologic description provides the foundation for a case definition of any disease, there is a need for a consistent and standardized process to describe lesions of corals that focuses on morphology. We provide a framework to systematically describe and name diseases in corals involving 4 steps: (1) naming the disease, (2) describing the lesion, (3) formulating a morphologic diagnosis, and (4) formulating an etiologic diagnosis. This process focuses field investigators on describing what they see and separates the process of describing a lesion from that of inferring causality, the latter being more appropriately done using laboratory techniques.

  10. A two-parameter kinetic model based on a time-dependent activity coefficient accurately describes enzymatic cellulose digestion

    PubMed Central

    Kostylev, Maxim; Wilson, David

    2014-01-01

    Lignocellulosic biomass is a potential source of renewable, low-carbon-footprint liquid fuels. Biomass recalcitrance and enzyme cost are key challenges associated with the large-scale production of cellulosic fuel. Kinetic modeling of enzymatic cellulose digestion has been complicated by the heterogeneous nature of the substrate and by the fact that a true steady state cannot be attained. We present a two-parameter kinetic model based on the Michaelis-Menten scheme (Michaelis L and Menten ML. (1913) Biochem Z 49:333–369), but with a time-dependent activity coefficient analogous to fractal-like kinetics formulated by Kopelman (Kopelman R. (1988) Science 241:1620–1626). We provide a mathematical derivation and experimental support to show that one of the parameters is a total activity coefficient and the other is an intrinsic constant that reflects the ability of the cellulases to overcome substrate recalcitrance. The model is applicable to individual cellulases and their mixtures at low-to-medium enzyme loads. Using biomass degrading enzymes from a cellulolytic bacterium Thermobifida fusca we show that the model can be used for mechanistic studies of enzymatic cellulose digestion. We also demonstrate that it applies to the crude supernatant of the widely studied cellulolytic fungus Trichoderma reesei and can thus be used to compare cellulases from different organisms. The two parameters may serve a similar role to Vmax, KM, and kcat in classical kinetics. A similar approach may be applicable to other enzymes with heterogeneous substrates and where a steady state is not achievable. PMID:23837567
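
    The flavor of the approach can be sketched with Kopelman-style fractal kinetics: a rate coefficient that decays as a power of time, applied to a pseudo-first-order digestion step. This is a hedged illustration of the general idea, not the authors' exact two-parameter formulation, and all numerical values are invented:

```python
import numpy as np

def conversion(t, k0, h, s0=1.0):
    """Fraction of substrate digested under a time-dependent rate
    coefficient k(t) = k0 * t**(-h), 0 <= h < 1 (h = 0 recovers
    classical first-order kinetics):
        dP/dt = k(t) * (s0 - P)
        P(t)  = s0 * (1 - exp(-k0 * t**(1 - h) / (1 - h)))"""
    return s0 * (1.0 - np.exp(-k0 * t ** (1.0 - h) / (1.0 - h)))

t = np.linspace(0.01, 48.0, 200)           # hours, illustrative
classical = conversion(t, k0=0.15, h=0.0)  # constant rate coefficient
fractal = conversion(t, k0=0.15, h=0.5)    # rate decays as digestion proceeds
print(f"48 h conversion: classical {classical[-1]:.2f}, fractal {fractal[-1]:.2f}")
```

The h > 0 curve captures the characteristic slow-down of cellulose digestion at long times that a single classical rate constant cannot reproduce.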

  11. Children describe life after Hurricane Andrew.

    PubMed

    Coffman, S

    1994-01-01

    Hurricane Andrew, which devastated the south Florida coast in August 1992, left over 250,000 people homeless with multiple health and social problems. This nursing study explored the experiences of 17 children, ages 5 through 12, who lived in the geographic area of storm damage. Common experiences described by the children included remembering the storm, dealing with after-effects, and reestablishing a new life. In general, children described a sense of strangeness, articulated as "life is weird" after the hurricane. In addition to stressful responses, many positive reactions were described by children in the study, revealing that the disaster also had a maturing effect.

  12. Venus general atmosphere circulation described by Pioneer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The predominant weather pattern for Venus is described. Wind directions and wind velocities are given. Possible driving forces of the winds are presented and include solar heating, planetary rotation, and the greenhouse effect.

  13. Did Goethe describe attention deficit hyperactivity disorder?

    PubMed

    Bonazza, Sara; Scaglione, Cesa; Poppi, Massimo; Rizzo, Giovanni

    2011-01-01

    As early as 1846, the typical symptoms of attention deficit hyperactivity disorder (ADHD) were described by Heinrich Hoffmann (1809-1894). However, in Goethe's masterpiece Faust (1832), the character of Euphorion strongly suggests ADHD diagnosis.

  14. Describing content in middle school science curricula

    NASA Astrophysics Data System (ADS)

    Schwarz-Ballard, Jennifer A.

    As researchers and designers, we intuitively recognize differences between curricula and describe them in terms of design strategy: project-based, laboratory-based, modular, traditional, and textbook, among others. We assume that practitioners recognize the differences in how each requires that students use knowledge; however, these intuitive differences have not been captured or systematically described by the existing languages for describing learning goals. In this dissertation I argue that we need new ways of capturing relationships among elements of content, and propose a theory that describes some of the important differences in how students reason in differently designed curricula and activities. Educational researchers and curriculum designers have taken a variety of approaches to laying out learning goals for science. Through an analysis of existing descriptions of learning goals, I argue that to describe differences in the understanding students come away with, learning goals need to (1) be specific about the form of knowledge, (2) incorporate both the processes through which knowledge is used and its form, and (3) capture content development across a curriculum. To show the value of inquiry curricula, learning goals need to incorporate distinctions among the variety of ways we ask students to use knowledge. Here I propose the Epistemic Structures Framework as one way to describe differences in students' reasoning that are not captured by existing descriptions of learning goals. The usefulness of the Epistemic Structures Framework is demonstrated in the four curriculum case study examples in Part II of this work. The curricula in the case studies represent a range of content coverage, curriculum structure, and design rationale. They serve both to illustrate the Epistemic Structures analysis process and to make the case that it does in fact describe learning goals in a way that captures important differences in students' reasoning in differently designed curricula

  15. 78 FR 34604 - Submitting Complete and Accurate Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  16. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high-purity ozone and precise, accurate measurement of its pressure are metrological requirements that are difficult to meet because of ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and are therefore prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production suitable for accurately determining the pressure of pure ozone while avoiding the problem of decomposition. We use an inert gas in a specially designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone, with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of the ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  17. Audio-Described Educational Materials: Ugandan Teachers' Experiences

    ERIC Educational Resources Information Center

    Wormnaes, Siri; Sellaeg, Nina

    2013-01-01

    This article describes and discusses a qualitative, descriptive, and exploratory study of how 12 visually impaired teachers in Uganda experienced audio-described educational video material for teachers and student teachers. The study is based upon interviews with these teachers and observations while they were using the material either…

  18. Sensorimotor Interference When Reasoning About Described Environments

    NASA Astrophysics Data System (ADS)

    Avraamides, Marios N.; Kyranidou, Melina-Nicole

    The influence of sensorimotor interference was examined in two experiments that compared pointing with iconic arrows and verbal responding in a task that entailed locating target objects from imagined perspectives. Participants studied text narratives describing objects at locations around them in a remote environment and then responded to targets from memory. Results revealed only minor differences between the two response modes, suggesting that bodily cues do not exert severe detrimental interference on spatial reasoning from imagined perspectives when non-immediate described environments are used. The implications of the findings are discussed.

  19. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
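
    The gap between averaged and activity-based costing can be shown with a toy calculation; every figure below is invented for illustration:

```python
# Hypothetical treatment mix: volumes, nursing time, and supplies per treatment.
treatments = {
    "routine_dressing": {"count": 400, "nurse_minutes": 15, "supplies": 4.00},
    "complex_wound":    {"count": 100, "nurse_minutes": 60, "supplies": 25.00},
}
general_overhead = 20000.00
nurse_cost_per_minute = 0.75

# Aggregate (ratio-of-cost-to-treatment) method: spread every cost evenly.
total_cost = general_overhead + sum(
    t["count"] * (t["nurse_minutes"] * nurse_cost_per_minute + t["supplies"])
    for t in treatments.values())
total_count = sum(t["count"] for t in treatments.values())
average_cost = total_cost / total_count

# Activity-based costing: trace nursing time and supplies to each treatment
# and allocate overhead in proportion to nursing minutes (the cost driver).
total_minutes = sum(t["count"] * t["nurse_minutes"] for t in treatments.values())
abc_costs = {}
for name, t in treatments.items():
    overhead_share = general_overhead * t["nurse_minutes"] / total_minutes
    abc_costs[name] = (t["nurse_minutes"] * nurse_cost_per_minute
                       + t["supplies"] + overhead_share)
    print(f"{name}: average ${average_cost:.2f} vs ABC ${abc_costs[name]:.2f}")
```

The flat average prices simple and complex treatments identically, so a capitation bid built on it would underprice complex wound care and overprice routine care.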

  20. Describing a Performance Improvement Specialist: The Heurist.

    ERIC Educational Resources Information Center

    Westgaard, Odin

    1997-01-01

    Describes the work of performance improvement specialists and presents a method for determining whether a particular person or position meets the job criteria. Discusses the attributes of being a heurist, or taking a holistic approach to problem solving. Lists 10 steps for a needs assessment and 30 characteristics of successful performance…

  1. How Digital Native Learners Describe Themselves

    ERIC Educational Resources Information Center

    Thompson, Penny

    2015-01-01

    Eight university students from the "digital native" generation were interviewed about the connections they saw between technology use and learning, and also their reactions to the popular press claims about their generation. Themes that emerged from the interviews were coded to show patterns in how digital natives describe themselves.…

  2. Describing Technological Paradigm Transitions: A Methodological Exploration.

    ERIC Educational Resources Information Center

    Wallace, Danny P.; Van Fleet, Connie

    1997-01-01

    Presents a humorous treatment of the "sessio taurino" (or humanistic inquiry) technique for describing changes in technological models. The fundamental tool of "sessio taurino" is a loosely-structured event known as the session, which is of indeterminate length, involves a flexible number of participants, and utilizes a preundetermined set of…

  3. Attributes of Images in Describing Tasks.

    ERIC Educational Resources Information Center

    Jorgensen, Corinne

    1998-01-01

    Report on exploratory research which investigated image attributes in a series of describing tasks. Results suggest that access to a wide range of attributes is needed to address all facets of interest and that certain classes of attributes may appear more frequently (literal objects, human form and associated attributes, color, and location).…

  4. USING TRACERS TO DESCRIBE NAPL HETEROGENEITY

    EPA Science Inventory

    Tracers are frequently used to estimate both the average travel time for water flow through the tracer swept volume and NAPL saturation. The same data can be used to develop a statistical distribution describing the hydraulic conductivity in the sept volume and a possible distri...

  5. Is the Water Heating Curve as Described?

    ERIC Educational Resources Information Center

    Riveros, H. G.; Oliva, A. I.

    2008-01-01

    We analysed the heating curve of water which is described in textbooks. An experiment combined with some simple heat transfer calculations is discussed. The theoretical behaviour can be altered by changing the conditions under which the experiment is modelled. By identifying and controlling the different parameters involved during the heating…

  6. A Dualistic Model To Describe Computer Architectures

    NASA Astrophysics Data System (ADS)

    Nitezki, Peter; Engel, Michael

    1985-07-01

    The Dualistic Model for Computer Architecture Description uses a hierarchy of abstraction levels to describe a computer in arbitrary steps of refinement from the top of the user interface to the bottom of the gate level. In our Dualistic Model the description of an architecture may be divided into two major parts called "Concept" and "Realization". The Concept of an architecture on each level of the hierarchy is an Abstract Data Type that describes the functionality of the computer and an implementation of that data type relative to the data type of the next lower level of abstraction. The Realization on each level comprises a language describing the means of user interaction with the machine, and a processor interpreting this language in terms of the language of the lower level. The surface of each hierarchical level, the data type and the language, expresses the behaviour of a machine at this level, whereas the implementation and the processor describe the structure of the algorithms and the system. In this model the Principle of Operation maps the object and computational structure of the Concept onto the structures of the Realization. Describing a system in terms of the Dualistic Model is therefore a process of refinement starting at a mere description of behaviour and ending at a description of structure. This model has proven to be a very valuable tool in exploiting the parallelism in a problem, and it is very transparent in discovering the points where parallelism is lost in a particular architecture. It has successfully been used in a project on a survey of Computer Architecture for Image Processing and Pattern Analysis in Germany.

  7. CANDLE syndrome: a recently described autoinflammatory syndrome.

    PubMed

    Tüfekçi, Özlem; Bengoa, ŞebnemYilmaz; Karapinar, Tuba Hilkay; Ataseven, Eda Büke; İrken, Gülersu; Ören, Hale

    2015-05-01

    CANDLE syndrome (chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature) is a recently described autoinflammatory syndrome characterized by early onset, recurrent fever, skin lesions, and multisystemic inflammatory manifestations. Most patients have been shown to carry a mutation in the PSMB8 gene. Herein, we report a 2-year-old patient with early-onset recurrent fever, atypical facies, widespread skin lesions, generalized lymphadenopathy, hepatosplenomegaly, joint contractures, hypertriglyceridemia, lipodystrophy, and autoimmune hemolytic anemia. The clinical features, together with the skin biopsy findings, were consistent with CANDLE syndrome. The pathogenesis and treatment of this syndrome are not yet fully understood. Increased awareness of this recently described syndrome may lead to the recognition of new cases and a better understanding of its pathogenesis, which in turn may help in the development of an effective treatment. PMID:25036278

  8. Generating and Describing Affective Eye Behaviors

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Zheng

    The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from the AU-coded facial expression database and real-time eye movement data (pupil size, blink rate, and saccade). A rule-based approach to generating primary (joyful, sad, angry, afraid, disgusted, and surprised) and intermediate emotions (emotions that can be represented as mixtures of two primary emotions), utilizing the MPEG-4 FAPs (facial animation parameters), is introduced. In addition, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language), which enables authors to describe and generate emotional eye movement of virtual agents, is proposed.

  9. Commentary: describing differences--possibilities and pitfalls.

    PubMed

    Friend, Annette

    2008-01-01

    Reports of attempts to investigate, characterize, compare, and contrast those who are mentally ill fill the literature and invite controversy. It seems to be part of human nature to establish and define the differences between us. Creative descriptive studies continually challenge our perspective, yet they must be balanced with thoughtful consideration of possible selection bias, an understanding of how a perspective may influence a particular view, and an appreciation of statistical constraints before describing differences as predictive risk factors.

  10. LiveDescribe: Can Amateur Describers Create High-Quality Audio Description?

    ERIC Educational Resources Information Center

    Branje, Carmen J.; Fels, Deborah I.

    2012-01-01

    Introduction: The study presented here evaluated the usability of the audio description software LiveDescribe and explored the acceptance rates of audio description created by amateur describers who used LiveDescribe to facilitate the creation of their descriptions. Methods: Twelve amateur describers with little or no previous experience with…

  11. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer, for each hour, at 30 observing wavelengths. Radiative transfer then provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which at some wavelengths contributes substantially to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
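
    The final radiative-transfer step can be sketched in a few lines. The layer absorption values below are invented placeholders; in the real system each layer's absorption would come from the propagation model applied to the forecasted temperature, pressure, and humidity profile:

```python
import math

def zenith_opacity(layers):
    """Total zenith opacity in nepers from (absorption [Np/km], thickness [km])
    pairs, ordered from the ground upward."""
    return sum(alpha * dz for alpha, dz in layers)

def sky_brightness(layers, temps):
    """Radio brightness of the atmosphere seen from the ground: each layer
    emits T_i * (1 - exp(-tau_i)), attenuated by the opacity below it."""
    t_sky, tau_below = 0.0, 0.0
    for (alpha, dz), t_phys in zip(layers, temps):
        tau_i = alpha * dz
        t_sky += t_phys * (1.0 - math.exp(-tau_i)) * math.exp(-tau_below)
        tau_below += tau_i
    return t_sky

layers = [(0.010, 1.0), (0.004, 3.0), (0.001, 6.0)]  # illustrative values
temps = [280.0, 260.0, 230.0]                        # physical temperatures, K
tau = zenith_opacity(layers)
t_sky = sky_brightness(layers, temps)
t_sys = 25.0 + t_sky  # assumed receiver temperature plus atmosphere
print(f"zenith opacity {tau:.3f} Np, sky contribution {t_sky:.1f} K")
```

Because the sky contribution enters Tsys directly, an opacity error of ~0.01 Np translates into a Tsys error of only a few kelvin, consistent with the calibration accuracy quoted above.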

  12. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  13. Describing response-event relations: Babel revisited

    PubMed Central

    Lattal, Kennon A.; Poling, Alan D.

    1981-01-01

    The terms used to describe the relations among the three components of contingencies of reinforcement and punishment include many with multiple meanings and imprecise denotation. In particular, usage of the term “contingency” and its variants and acceptance of unsubstantiated functional, rather than procedural, descriptions of response-event relations are especially troublesome in the behavior analysis literature. Clarity seems best served by restricting the term “contingency” to its generic usage and by utilizing procedural descriptions of response-event relations. PMID:22478546

  14. Young women describe the ideal physician.

    PubMed

    Clowers, Marsha

    2002-01-01

    For some, the search for the ideal care provider can be elusive. This study explored female adolescents' accounts of the ideal health care provider. One hundred fifty-seven female high school students responded to the following question: "Can you describe what the ideal doctor would be like?" Content analysis of their descriptive narratives yielded 272 references to communication competence versus 30 references to medical competence (10 references were unrelated to either communication or medical competence). Based on their responses, it is clear that while young women appreciate the importance of medical skill, it is the communicatively competent care provider that they most seek.

  15. Is an eclipse described in the Odyssey?

    PubMed

    Baikouzis, Constantino; Magnasco, Marcelo O

    2008-07-01

    Plutarch and Heraclitus believed a certain passage in the 20th book of the Odyssey ("Theoclymenus's prophecy") to be a poetic description of a total solar eclipse. In the late 1920s, Schoch and Neugebauer computed that the solar eclipse of 16 April 1178 B.C.E. was total over the Ionian Islands and was the only suitable eclipse in more than a century to agree with classical estimates of the decade-earlier sack of Troy around 1192-1184 B.C.E. However, much skepticism remains about whether the verses refer to this, or any, eclipse. To contribute to the issue independently of the disputed eclipse reference, we analyze other astronomical references in the Epic, without assuming the existence of an eclipse, and search for dates matching the astronomical phenomena we believe they describe. We use three overt astronomical references in the epic: to Boötes and the Pleiades, Venus, and the New Moon; we supplement them with a conjectural identification of Hermes's trip to Ogygia as relating to the motion of planet Mercury. Performing an exhaustive search of all possible dates in the span 1250-1115 B.C., we looked to match these phenomena in the order and manner that the text describes. In that period, a single date closely matches our references: 16 April 1178 B.C.E. We speculate that these references, plus the disputed eclipse reference, may refer to that specific eclipse. PMID:18577587

  16. Is an eclipse described in the Odyssey?

    PubMed Central

    Baikouzis, Constantino; Magnasco, Marcelo O.

    2008-01-01

    Plutarch and Heraclitus believed a certain passage in the 20th book of the Odyssey (“Theoclymenus's prophecy”) to be a poetic description of a total solar eclipse. In the late 1920s, Schoch and Neugebauer computed that the solar eclipse of 16 April 1178 B.C.E. was total over the Ionian Islands and was the only suitable eclipse in more than a century to agree with classical estimates of the decade-earlier sack of Troy around 1192–1184 B.C.E. However, much skepticism remains about whether the verses refer to this, or any, eclipse. To contribute to the issue independently of the disputed eclipse reference, we analyze other astronomical references in the Epic, without assuming the existence of an eclipse, and search for dates matching the astronomical phenomena we believe they describe. We use three overt astronomical references in the epic: to Boötes and the Pleiades, Venus, and the New Moon; we supplement them with a conjectural identification of Hermes's trip to Ogygia as relating to the motion of planet Mercury. Performing an exhaustive search of all possible dates in the span 1250–1115 B.C., we looked to match these phenomena in the order and manner that the text describes. In that period, a single date closely matches our references: 16 April 1178 B.C.E. We speculate that these references, plus the disputed eclipse reference, may refer to that specific eclipse. PMID:18577587

  17. Stimulated recall interviews for describing pragmatic epistemology

    NASA Astrophysics Data System (ADS)

    Shubert, Christopher W.; Meredith, Dawn C.

    2015-12-01

    Students' epistemologies affect how and what they learn: do they believe physics is a list of equations, or a coherent and sensible description of the physical world? In order to study these epistemologies as part of curricular assessment, we adopt the resources framework, which posits that students have many productive epistemological resources that can be brought to bear as they learn physics. In previous studies, these epistemologies have been either inferred from behavior in learning contexts or probed through surveys or interviews outside of the learning context. We argue that stimulated recall interviews provide a contextually and interpretively valid method to access students' epistemologies that complement existing methods. We develop a stimulated recall interview methodology to assess a curricular intervention and find evidence that epistemological resources aptly describe student epistemologies.

  18. Describing Story Evolution from Dynamic Information Streams

    SciTech Connect

    Rose, Stuart J.; Butner, R. Scott; Cowley, Wendy E.; Gregory, Michelle L.; Walker, Julia

    2009-10-12

    Sources of streaming information, such as news syndicates, publish information continuously. Information portals and news aggregators list the latest information from around the world, enabling information consumers to easily identify events in the past 24 hours. The volume and velocity of these streams cause information from prior days to vanish quickly despite its utility in providing an informative context for interpreting new information. Few capabilities exist to support an individual attempting to identify or understand trends and changes from streaming information over time. The burden of retaining prior information and integrating it with the new is left to the skills, determination, and discipline of each individual. In this paper we present a visual analytics system for linking essential content from information streams over time into dynamic stories that develop and change over multiple days. We describe particular challenges to the analysis of streaming information and explore visual representations for showing story change and evolution over time.

  19. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
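
    As a rough illustration of extracting a film thickness from AFM topography, the sketch below separates the substrate and flake height populations in a synthetic height map and takes the difference of their means. It is not the PeakForce protocol described above; the 0.34 nm step and the noise level are made-up values.

```python
import numpy as np

def step_height(height_map, threshold=None):
    """Estimate a flake's thickness as the separation between the two
    dominant height populations (substrate vs. flake) in an AFM image."""
    h = np.asarray(height_map, dtype=float).ravel()
    if threshold is None:
        threshold = np.median(h)      # crude split between the two populations
    substrate = h[h <= threshold]
    flake = h[h > threshold]
    return flake.mean() - substrate.mean()

# synthetic image: a 0.34 nm step with a little instrument noise
rng = np.random.default_rng(0)
substrate = rng.normal(0.0, 0.02, size=5000)
flake = rng.normal(0.34, 0.02, size=5000)
image = np.concatenate([substrate, flake])
thickness = step_height(image)
```

    In practice the spread of each population, and any adsorbate layer offset between them, sets the kind of 0.1-0.3 nm uncertainty the abstract reports.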

  20. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  1. Accurate Stellar Parameters for Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  2. Using Metaphorical Models for Describing Glaciers

    NASA Astrophysics Data System (ADS)

    Felzmann, Dirk

    2014-11-01

    To date, there has been little conceptual change research regarding conceptions about glaciers. This study used the theoretical background of embodied cognition to reconstruct different metaphorical concepts with respect to the structure of a glacier. Applying the Model of Educational Reconstruction, the conceptions of students and scientists regarding glaciers were analysed. Students' conceptions were the result of teaching experiments whereby students received instruction about glaciers and ice ages and were then interviewed about their understandings. Scientists' conceptions were based on analyses of textbooks. Accordingly, four conceptual metaphors regarding the concept of a glacier were reconstructed: a glacier is a body of ice; a glacier is a container; a glacier is a reflexive body; and a glacier is a flow. Students and scientists differ with respect to the contexts in which they apply each conceptual metaphor. It was observed, however, that students vacillate among the various conceptual metaphors as they solve tasks. While the subject context of the task activates a specific conceptual metaphor, within the discussion about the solution the students were able to adapt their conception by changing the conceptual metaphor. Educational strategies for teaching students about glaciers require specific language to activate the appropriate conceptual metaphors and explicit reflection regarding the various conceptual metaphors.

  3. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
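
    The median function makes such monotonicity constraints compact to code, because minmod(x, y) equals the median of {0, x, y}. The sketch below is a generic MUSCL-type limited slope written this way; it illustrates the idea, not the paper's exact constraint.

```python
def median3(a, b, c):
    """Return the middle value of three numbers."""
    return max(min(a, b), min(max(a, b), c))

def limited_slope(u_left, u_center, u_right):
    """MUSCL-type limited slope for the cell holding `u_center`: the central
    difference, clipped so reconstruction stays monotone.  Uses the identity
    minmod(x, y) == median3(0, x, y), nested for three arguments."""
    central = 0.5 * (u_right - u_left)
    left = u_center - u_left      # one-sided differences
    right = u_right - u_center
    return median3(0.0, central, median3(0.0, 2.0 * left, 2.0 * right))
```

    On smooth monotone data the limiter returns the second-order central difference unchanged; at a local extremum the one-sided differences have opposite signs, so the slope collapses to zero and no new extrema are created.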

  4. Describing dengue epidemics: Insights from simple mechanistic models

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2012-09-01

    We present a set of nested models to be applied to dengue fever epidemiology. We perform a qualitative study in order to show how much complexity we really need to add into epidemiological models to be able to describe the fluctuations observed in empirical dengue hemorrhagic fever incidence data, offering a promising perspective on inference of parameter values from dengue case notifications.
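
    The simplest member of such a model hierarchy is a single-serotype SIR model, sketched below with forward-Euler integration. The seasonally forced transmission rate and all parameter values are hypothetical; the authors' nested models add further structure beyond this minimal core.

```python
import math

def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of a normalized SIR model."""
    new_inf = beta * s * i * dt    # susceptibles becoming infected
    new_rec = gamma * i * dt       # infected recovering
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# hypothetical parameters: seasonally forced transmission, 1/gamma = 10 days
s, i, r = 0.99, 0.01, 0.0
series = []
for day in range(365):
    beta = 0.25 * (1.0 + 0.2 * math.cos(2.0 * math.pi * day / 365.0))
    s, i, r = sir_step(s, i, r, beta, gamma=0.1, dt=1.0)
    series.append(i)
```

    Comparing the simulated incidence curve `series` against case-notification data is the kind of qualitative exercise the study uses to judge how much extra model complexity the empirical fluctuations actually require.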

  5. New method for describing the performance of cardiac surgery cannulas.

    PubMed

    Delius, R E; Montoya, J P; Merz, S I; McKenzie, J; Snedecor, S; Bove, E L; Bartlett, R H

    1992-02-01

    Cardiac surgery cannulas are characterized by external diameter only, which provides little information about the pressure-flow characteristics of a cannula. A system has been developed to describe pressure-flow characteristics with a single, unitless number, M, which is patterned after a Reynolds friction factor correlation. A cannula with a lower M number has a more favorable pressure-flow relationship. The M number was determined for 16 arterial cannulas ranging in size from 10F to 26F and 27 venous cannulas sized 12F to 36F. Pressure-flow characteristics vary considerably among cannulas from different manufacturers despite having similar French sizes. Clinical decisions regarding choice of cannula can be simplified by using the M number, which gives a more accurate description of the performance characteristics of a cannula than the French size designation.

  6. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomena of Fresnel diffraction to micron accurate measurement. This report discusses past research on the phenomena and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and the equipment required to extend the effective range of Fresnel diffraction systems are also described.

  7. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.

  8. TURTLE IN SPACE DESCRIBES NEW HUBBLE IMAGE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    NASA's Hubble Space Telescope has shown us that the shrouds of gas surrounding dying, sunlike stars (called planetary nebulae) come in a variety of strange shapes, from an 'hourglass' to a 'butterfly' to a 'stingray.' With this image of NGC 6210, the Hubble telescope has added another bizarre form to the rogues' gallery of planetary nebulae: a turtle swallowing a seashell. Giving this dying star such a weird name is less of a challenge than trying to figure out how dying stars create these unusual shapes. The larger image shows the entire nebula; the inset picture captures the complicated structure surrounding the dying star. The remarkable features of this nebula are the numerous holes in the inner shells with jets of material streaming from them. These jets produce column-shaped features that are mirrored in the opposite direction. The multiple shells of material ejected by the dying star give this planetary nebula its odd form. In the 'full nebula' image, the brighter central region looks like a 'nautilus shell'; the fainter outer structure (colored red) a 'tortoise.' The dying star is the white dot in the center. Both pictures are composite images based on observations taken Aug. 6, 1997 with the telescope's Wide Field and Planetary Camera 2. Material flung off by this central star is streaming out of holes it punched in the nautilus shell. At least four jets of material can be seen in the 'full nebula' image: a pair near 6 and 12 o'clock and another near 2 and 8 o'clock. In each pair, the jets are directly opposite each other, exemplifying their 'bipolar' nature. The jets are thought to be driven by a 'fast wind' - material propelled by radiation from the hot central star. In the inner 'nautilus' shell, bright rims outline the escape holes created by this 'wind,' such as the one at 2 o'clock. This same 'wind' appears to give rise to the prominent outer jet in the same direction. The hole in the inner shell acts like a hose nozzle, directing the flow of

  9. TURTLE IN SPACE DESCRIBES NEW HUBBLE IMAGE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    NASA's Hubble Space Telescope has shown us that the shrouds of gas surrounding dying, sunlike stars (called planetary nebulae) come in a variety of strange shapes, from an 'hourglass' to a 'butterfly' to a 'stingray.' With this image of NGC 6210, the Hubble telescope has added another bizarre form to the rogues' gallery of planetary nebulae: a turtle swallowing a seashell. Giving this dying star such a weird name is less of a challenge than trying to figure out how dying stars create these unusual shapes. The larger image shows the entire nebula; the inset picture captures the complicated structure surrounding the dying star. The remarkable features of this nebula are the numerous holes in the inner shells with jets of material streaming from them. These jets produce column-shaped features that are mirrored in the opposite direction. The multiple shells of material ejected by the dying star give this planetary nebula its odd form. In the 'full nebula' image, the brighter central region looks like a 'nautilus shell'; the fainter outer structure (colored red) a 'tortoise.' The dying star is the white dot in the center. Both pictures are composite images based on observations taken Aug. 6, 1997 with the telescope's Wide Field and Planetary Camera 2. Material flung off by this central star is streaming out of holes it punched in the nautilus shell. At least four jets of material can be seen in the 'full nebula' image: a pair near 6 and 12 o'clock and another near 2 and 8 o'clock. In each pair, the jets are directly opposite each other, exemplifying their 'bipolar' nature. The jets are thought to be driven by a 'fast wind' - material propelled by radiation from the hot central star. In the inner 'nautilus' shell, bright rims outline the escape holes created by this 'wind,' such as the one at 2 o'clock. This same 'wind' appears to give rise to the prominent outer jet in the same direction. The hole in the inner shell acts like a hose nozzle, directing the flow of

  10. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "Einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity

  11. Spatial-filter models to describe IC lithographic behavior

    NASA Astrophysics Data System (ADS)

    Stirniman, John P.; Rieger, Michael L.

    1997-07-01

    Proximity correction systems require an accurate, fast way to predict how a pattern configuration will transfer to the wafer. In this paper we present an efficient method for modeling the pattern transfer process based on Dennis Gabor's `theory of communication'. This method is based on a `convolution form' where any 2D transfer process can be modeled with a set of linear, 2D spatial filters, even when the transfer process is non-linear. We will show that this form is a general case from which other well-known process simulation models can be derived. Furthermore, we will demonstrate that the convolution form can be used to model observed phenomena, even when the physical mechanisms involved are unknown.
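
    The "convolution form" can be illustrated with a toy FFT-based implementation: the transferred image is approximated as a weighted sum of convolutions of the mask with a small set of 2D spatial filters. The single Gaussian kernel below is an arbitrary stand-in for a fitted filter set, not the filters derived in the paper.

```python
import numpy as np

def transfer_model(mask, kernels, weights):
    """Approximate a pattern-transfer process as a weighted sum of 2D
    convolutions of the mask with spatial filters (computed with FFTs,
    which assumes periodic boundaries)."""
    mask_f = np.fft.fft2(mask)
    image = np.zeros_like(mask, dtype=float)
    for kernel, w in zip(kernels, weights):
        image += w * np.real(np.fft.ifft2(mask_f * np.fft.fft2(kernel)))
    return image

def gaussian_kernel(n, sigma):
    """Normalized periodic Gaussian filter on an n-by-n grid."""
    x = np.minimum(np.arange(n), n - np.arange(n))  # wrapped distance
    g = np.exp(-0.5 * (x / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

# a 64x64 binary mask with a bright square, blurred by one Gaussian filter
mask = np.zeros((64, 64))
mask[24:40, 24:40] = 1.0
image = transfer_model(mask, [gaussian_kernel(64, 2.0)], [1.0])
```

    Because each term is a linear convolution, the model evaluates quickly for any mask, which is what makes the form attractive for proximity correction even when the underlying transfer physics is non-linear or unknown.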

  12. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  17. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  18. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous, conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand-design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high-resolution images of nearby galaxies.

  19. Describing variations of the Fisher-matrix across parameter space

    NASA Astrophysics Data System (ADS)

    Schäfer, Björn Malte; Reischke, Robert

    2016-08-01

    Forecasts in cosmology, both with Monte Carlo Markov-chain methods and with the Fisher-matrix formalism, depend on the choice of the fiducial model because both the signal strength of any observable and the model non-linearities linking observables to cosmological parameters vary in the general case. In this paper we propose a method for extrapolating Fisher-forecasts across the space of cosmological parameters by constructing a suitable basis. We demonstrate the validity of our method with constraints on a standard dark energy model extrapolated from a ΛCDM-model, as can be expected from two-bin weak lensing tomography with a Euclid-like survey, in the parameter pairs (Ωm, σ8), (Ωm, w0) and (w0, wa). Our numerical results include very accurate extrapolations across a wide range of cosmological parameters in terms of shape, size and orientation of the parameter likelihood, and a decomposition of the change of the likelihood contours into modes, which are straightforward to interpret in a geometrical way. We find that in particular the variation of the dark energy figure of merit is well captured by our formalism.
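The fiducial-model dependence of a Fisher forecast can be seen already in a toy Gaussian-data example, where F_ij = Σ_k (∂μ_k/∂θ_i)(∂μ_k/∂θ_j)/σ². The observable and parameter values below are invented for illustration and are unrelated to the Euclid-like survey in the paper:

```python
import math

def model(theta, x):
    # toy observable: amplitude * exp(-x / scale); a stand-in for a cosmological signal
    A, s = theta
    return A * math.exp(-x / s)

def fisher_matrix(theta, xs, sigma=0.1, eps=1e-6):
    """F_ij = sum_k dmu_k/dtheta_i * dmu_k/dtheta_j / sigma^2 (Gaussian data, fixed noise)."""
    n = len(theta)
    F = [[0.0] * n for _ in range(n)]
    for x in xs:
        grad = []
        for i in range(n):
            tp = list(theta); tp[i] += eps
            tm = list(theta); tm[i] -= eps
            grad.append((model(tp, x) - model(tm, x)) / (2 * eps))
        for i in range(n):
            for j in range(n):
                F[i][j] += grad[i] * grad[j] / sigma ** 2
    return F

xs = [0.1 * k for k in range(1, 50)]
F_fid = fisher_matrix([1.0, 2.0], xs)   # one fiducial model
F_alt = fisher_matrix([1.2, 1.5], xs)   # shifted fiducial: the forecast changes
```

Because the matrix depends on where the derivatives are evaluated, shifting the fiducial parameters changes the forecast ellipse, which is exactly the variation the paper's basis construction sets out to track.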

  20. Toward more accurate loss tangent measurements in reentrant cavities

    SciTech Connect

    Moyer, R. D.

    1980-05-01

    Karpova has described an absolute method for measurement of dielectric properties of a solid in a coaxial reentrant cavity. His cavity resonance equation yields very accurate results for dielectric constants. However, he presented only approximate expressions for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.

  1. Can density cumulant functional theory describe static correlation effects?

    PubMed

    Mullinax, J Wayne; Sokolov, Alexander Yu; Schaefer, Henry F

    2015-06-01

    We evaluate the performance of density cumulant functional theory (DCT) for capturing static correlation effects. In particular, we examine systems with significant multideterminant character of the electronic wave function, such as the beryllium dimer, diatomic carbon, m-benzyne, 2,6-pyridyne, twisted ethylene, as well as the barrier for double-bond migration in cyclobutadiene. We compute molecular properties of these systems using the ODC-12 and DC-12 variants of DCT and compare these results to multireference configuration interaction and multireference coupled-cluster theories, as well as single-reference coupled-cluster theory with single, double (CCSD), and perturbative triple excitations [CCSD(T)]. For all systems the DCT methods show intermediate performance between that of CCSD and CCSD(T), with significant improvement over the former method. In particular, for the beryllium dimer, m-benzyne, and 2,6-pyridyne, the ODC-12 method along with CCSD(T) correctly predict the global minimum structures, while CCSD predictions fail qualitatively, underestimating the multireference effects. Our results suggest that the DC-12 and ODC-12 methods are capable of describing emerging static correlation effects but should be used cautiously when highly accurate results are required. Conveniently, the appearance of multireference effects in DCT can be diagnosed by analyzing the DCT natural orbital occupations, which are readily available at the end of the energy computation.

  3. Mucinous myoepithelioma, a recently described new myoepithelioma variant.

    PubMed

    Gnepp, Douglas R

    2013-07-01

    Myoepithelial neoplasms are tumors composed almost exclusively of cells with myoepithelial differentiation. They frequently contain spindle, plasmacytoid or epithelioid shaped cells and may have oncocytic or clear cytoplasmic features. They are uncommon, accounting for 1.5 % of all salivary gland tumors and for 2.2-5.7 % of major and minor salivary gland tumors, respectively. Recently this author, together with several colleagues, has described three unusual myoepithelial tumors, two benign and one malignant, that contained abundant intracellular mucin material, which they termed the mucinous variant of myoepithelioma. This represents a unique, previously undescribed subtype that does not fit in the current classification system. A literature review revealed several similar cases reported as "signet ring-cell" adenocarcinomas of salivary gland, which stained for myoepithelial markers, in addition to containing intracellular mucin material, that are more accurately classified as mucinous myoepithelioma. To date, there are 17 reported mucinous myoepitheliomas; four were classified as benign and 13 as malignant. Thirteen arose in minor salivary glands and four in the parotid gland. One patient presented with a lymph node metastasis. With minimal follow-up currently available, this appears to be a benign to low-grade malignancy.

  4. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    SciTech Connect

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify the

  5. Design of aquifer remediation systems: (1) describing hydraulic structure and NAPL architecture using tracers.

    PubMed

    Enfield, Carl G; Wood, A Lynn; Espinoza, Felipe P; Brooks, Michael C; Annable, Michael; Rao, P S C

    2005-12-01

    Aquifer heterogeneity (structure) and NAPL distribution (architecture) are described based on tracer data. An inverse modelling approach estimates the hydraulic structure and NAPL architecture with a Lagrangian stochastic model, in which the hydraulic structure is described by one or more populations of lognormally distributed travel times and the NAPL architecture is selected from eight possible assumed distributions. Optimization of the model parameters for each tested realization is based on the minimization of the sum of the squared residuals between the log of the measured tracer data and model predictions for the same temporal observation. For a given NAPL architecture the error is reduced with each added population. Model selection was based on a fitness measure that penalized models for increasing complexity. The technique is demonstrated under a range of hydrologic and contaminant settings using data from three small field-scale tracer tests: the first implementation at an LNAPL site using a line-drive flow pattern, the second at a DNAPL site with an inverted five-spot flow pattern, and the third at the same DNAPL site using a vertical circulation flow pattern. The Lagrangian model was capable of accurately duplicating experimentally derived tracer breakthrough curves, with a correlation coefficient of 0.97 or better. Furthermore, the model estimate of the NAPL volume is similar to the estimates based on moment analysis of field data.
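A single lognormal travel-time population already reproduces the skewed breakthrough shape such Lagrangian models fit; this sketch just evaluates that density (the parameter values are arbitrary, and the actual model superposes several populations plus NAPL partitioning terms not reproduced here):

```python
import math

def breakthrough(t, mu, sigma):
    """Normalized tracer breakthrough for one lognormal travel-time population:
    the lognormal pdf in time t, with log-mean mu and log-std sigma."""
    if t <= 0:
        return 0.0
    return math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * math.sqrt(2 * math.pi))

# the curve rises quickly, peaks at t = exp(mu - sigma^2), and has a long tail
peak_time = math.exp(1.0 - 0.5 ** 2)
```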

  6. Using graph theory to describe and model chromosome aberrations.

    PubMed

    Sachs, Rainer K; Arsuaga, Javier; Vázquez, Mariel; Hlatky, Lynn; Hahnfeldt, Philip

    2002-11-01

    A comprehensive description of chromosome aberrations is introduced that is suitable for all cytogenetic protocols (e.g. solid staining, banding, FISH, mFISH, SKY, bar coding) and for mathematical analyses. "Aberration multigraphs" systematically characterize and interrelate three basic aberration elements: (1) the initial configuration of chromosome breaks; (2) the exchange process, whose cycle structure helps to describe aberration complexity; and (3) the final configuration of rearranged chromosomes, which determines the observed pattern but may contain cryptic misrejoinings in addition. New aberration classification methods and a far-reaching generalization of mPAINT descriptors, applicable to any protocol, emerge. The difficult problem of trying to infer actual exchange processes from cytogenetically observed final patterns is analyzed using computer algorithms, adaptations of known theorems on cubic graphs, and some new graph-theoretical constructs. Results include the following: (1) For a painting protocol, unambiguously inferring the occurrence of a high-order cycle requires a corresponding number of different colors; (2) cycle structure can be computed by a simple trick directly from mPAINT descriptors if the initial configuration has no more than one break per homologue pair; and (3) higher-order cycles are more frequent than the obligate cycle structure specifies. Aberration multigraphs are a powerful new way to describe, classify and quantitatively analyze radiation-induced chromosome aberrations. They pinpoint (but do not eliminate) the problem that, with present cytogenetic techniques, one observed pattern corresponds to many possible initial configurations and exchange processes. PMID:12385633
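The "cycle structure" of an exchange process can be illustrated with a plain permutation on break-end indices; this toy stand-in is not the authors' multigraph machinery, only a sketch of what cycle decomposition means:

```python
def cycle_structure(exchange):
    """Cycle lengths of an exchange represented as a permutation (dict) of
    break-end indices; longer cycles correspond to more complex aberrations."""
    seen = set()
    lengths = []
    for start in exchange:
        if start in seen:
            continue
        length = 1
        seen.add(start)
        nxt = exchange[start]
        while nxt != start:
            length += 1
            seen.add(nxt)
            nxt = exchange[nxt]
        lengths.append(length)
    return sorted(lengths)

# a simple reciprocal exchange (one 2-cycle) versus a complex three-break
# exchange (one 3-cycle) among numbered break ends
simple = {0: 1, 1: 0}
complex_ = {0: 1, 1: 2, 2: 0}
```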

  7. Accurate masses for dispersion-supported galaxies

    NASA Astrophysics Data System (ADS)

    Wolf, Joe; Martinez, Gregory D.; Bullock, James S.; Kaplinghat, Manoj; Geha, Marla; Muñoz, Ricardo R.; Simon, Joshua D.; Avedo, Frank F.

    2010-08-01

    We derive an accurate mass estimator for dispersion-supported stellar systems and demonstrate its validity by analysing resolved line-of-sight velocity data for globular clusters, dwarf galaxies and elliptical galaxies. Specifically, by manipulating the spherical Jeans equation we show that the mass enclosed within the 3D deprojected half-light radius r_1/2 can be determined with only mild assumptions about the spatial variation of the stellar velocity dispersion anisotropy as long as the projected velocity dispersion profile is fairly flat near the half-light radius, as is typically observed. We find M_1/2 = 3 G^-1 ⟨σ_los^2⟩ r_1/2 ≃ 4 G^-1 ⟨σ_los^2⟩ R_e, where ⟨σ_los^2⟩ is the luminosity-weighted square of the line-of-sight velocity dispersion and R_e is the 2D projected half-light radius. While deceptively familiar in form, this formula is not the virial theorem, which cannot be used to determine accurate masses unless the radial profile of the total mass is known a priori. We utilize this finding to show that all of the Milky Way dwarf spheroidal galaxies (MW dSphs) are consistent with having formed within a halo of a mass of approximately 3 × 10^9 M_sun, assuming a Λ cold dark matter cosmology. The faintest MW dSphs seem to have formed in dark matter haloes that are at least as massive as those of the brightest MW dSphs, despite the almost five orders of magnitude spread in luminosity between them. We expand our analysis to the full range of observed dispersion-supported stellar systems and examine their dynamical I-band mass-to-light ratios Υ^I_1/2. The Υ^I_1/2 versus M_1/2 relation for dispersion-supported galaxies follows a U shape, with a broad minimum near Υ^I_1/2 ≃ 3 that spans dwarf elliptical galaxies to normal ellipticals, a steep rise to Υ^I_1/2 ≃ 3200 for ultra-faint dSphs and a more shallow rise to Υ^I_1/2 ≃ 800 for galaxy cluster spheroids.
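The estimator is simple enough to apply directly. A sketch in astronomer's units (the value of G is standard; the example dispersion and radius are illustrative inputs, not data from the paper):

```python
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def m_half(sigma_los_kms, r_half_kpc):
    """Wolf et al. estimator: mass within the 3D deprojected half-light radius,
    M_1/2 = 3 <sigma_los^2> r_1/2 / G."""
    return 3.0 * sigma_los_kms ** 2 * r_half_kpc / G

# a dwarf spheroidal with sigma_los ~ 11.7 km/s and r_1/2 ~ 0.9 kpc
# encloses roughly 9e7 Msun within r_1/2
m = m_half(11.7, 0.9)
```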

  8. Quantitative real-time PCR for rapid and accurate titration of recombinant baculovirus particles.

    PubMed

    Hitchman, Richard B; Siaterli, Evangelia A; Nixon, Clare P; King, Linda A

    2007-03-01

    We describe the use of quantitative PCR (QPCR) to titer recombinant baculoviruses. Custom primers and probe were designed to gp64 and used to calculate a standard curve of QPCR derived titers from dilutions of a previously titrated baculovirus stock. Each dilution was titrated by both plaque assay and QPCR, producing a consistent and reproducible inverse relationship between C(T) and plaque forming units per milliliter. No significant difference was observed between titers produced by QPCR and plaque assay for 12 recombinant viruses, confirming the validity of this technique as a rapid and accurate method of baculovirus titration.
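Titration against a standard curve reduces to fitting Ct versus log10(titer) for the dilution series and inverting the fitted line. The dilution series and Ct values below are invented for illustration; they are not data from the study:

```python
def fit_standard_curve(log10_titers, cts):
    """Least-squares line Ct = slope * log10(titer) + intercept."""
    n = len(cts)
    mx = sum(log10_titers) / n
    my = sum(cts) / n
    sxx = sum((x - mx) ** 2 for x in log10_titers)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_titers, cts))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def titer_from_ct(ct, slope, intercept):
    """Invert the standard curve to recover a titer from an observed Ct."""
    return 10 ** ((ct - intercept) / slope)

# hypothetical 10-fold dilution series of a stock titred at 1e8 pfu/ml,
# with illustrative Ct values (~3.36 cycles per decade)
logs = [8.0, 7.0, 6.0, 5.0, 4.0]
cts = [12.1, 15.4, 18.8, 22.2, 25.5]
slope, intercept = fit_standard_curve(logs, cts)
```

A slope near -3.32 corresponds to 100% amplification efficiency, which is one routine sanity check on such a curve.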

  9. Simple modification to describe the soil water retention curve between saturation and oven dryness

    NASA Astrophysics Data System (ADS)

    Khlosi, Muhammed; Cornelis, Wim M.; Gabriels, Donald; Sin, Gürkan

    2006-11-01

    Prediction of water and vapor flow in porous media requires an accurate estimation of the soil water retention curve describing the relation between matric potential and the respective soil water content from saturation to oven dryness. In this study, we modified the Kosugi (1999) function to represent soil water retention at all matric potentials. This modification retains the form of the original Kosugi function in the wet range and transforms to an adsorption equation in the dry range. Following a systems identification approach, the extended function was tested against observed data taken from literature that cover the complete range of water contents from saturation to almost oven dryness with textures ranging from sand to silty clay. The uncertainty of parameter estimates (confidence intervals) as well as the correlation between parameters was studied. The predictive capability of the extended model was evaluated under two reduced sets of data that do not contain observations below a matric potential of -1500 and -100 kPa. This evaluation showed that the extended model successfully predicted the water content with acceptable uncertainty. These results add confidence into the proposed modification and suggest that it can be used to better predict the soil water retention curve, particularly under reduced data sets.
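The wet-range behaviour of the original Kosugi function is a lognormal distribution in suction. A minimal sketch of that baseline (the published extension toward oven dryness adds an adsorption branch in the dry range, which is not reproduced here):

```python
import math

def kosugi_se(h, h_m, sigma):
    """Kosugi lognormal effective saturation, Se in [0, 1].
    h and h_m are suctions (positive, e.g. in cm); h_m is the median suction."""
    return 0.5 * math.erfc(math.log(h / h_m) / (math.sqrt(2.0) * sigma))

def theta_kosugi(h, theta_r, theta_s, h_m, sigma):
    """Volumetric water content between residual and saturated values."""
    return theta_r + (theta_s - theta_r) * kosugi_se(h, h_m, sigma)
```

By construction Se = 0.5 at h = h_m, and the water content falls monotonically with increasing suction.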

  10. Accurate determination of segmented X-ray detector geometry

    PubMed Central

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton

    2015-01-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117
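The core of the refinement is comparing observed Bragg peak positions with the positions predicted from indexing, module by module. A translation-only sketch (a real geometry refinement also fits rotation and detector distance, and the coordinates below are invented):

```python
def refine_module_offset(observed, predicted):
    """Estimate a per-module (dx, dy) shift as the mean residual between observed
    peak positions and positions predicted from crystal indexing."""
    n = len(observed)
    dx = sum(o[0] - p[0] for o, p in zip(observed, predicted)) / n
    dy = sum(o[1] - p[1] for o, p in zip(observed, predicted)) / n
    return dx, dy

# hypothetical peak positions (pixels) on one module
obs = [(10.2, 5.1), (20.3, 7.0), (15.1, 9.2)]
pred = [(10.0, 5.0), (20.0, 7.1), (15.0, 9.0)]
dx, dy = refine_module_offset(obs, pred)
```

Subtracting the fitted shift from the module position and re-indexing, iterated over all modules, is the spirit of the procedure the abstract describes.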

  11. Accurate determination of segmented X-ray detector geometry.

    PubMed

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A; Chapman, Henry N; Barty, Anton

    2015-11-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments.

  13. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
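The link between a Doppler-broadened lineshape and k_B is the Gaussian width relation sigma_nu / nu_0 = sqrt(k_B T / (m c^2)). This round-trip sketch uses rounded caesium numbers and is in no way the paper's full lineshape analysis:

```python
import math

C = 299792458.0  # speed of light, m/s

def boltzmann_from_doppler(sigma_nu, nu0, mass_kg, temperature):
    """Invert sigma_nu/nu0 = sqrt(kB*T/(m*c^2)) for kB, given the measured
    Gaussian Doppler width of an absorption line."""
    return mass_kg * C ** 2 * (sigma_nu / nu0) ** 2 / temperature

# illustrative, rounded numbers for the Cs D1 line
m_cs = 132.905 * 1.66054e-27   # atomic mass in kg
nu0 = 335.116e12               # line frequency, Hz
T = 296.0                      # vapour temperature, K
kB_true = 1.380649e-23

# forward: the Doppler width such a vapour would produce; backward: recover kB
sigma_nu = nu0 * math.sqrt(kB_true * T / (m_cs * C ** 2))
kb_recovered = boltzmann_from_doppler(sigma_nu, nu0, m_cs, T)
```

In practice the hard part, as the abstract stresses, is extracting the true thermal width from a profile that is not exactly Voigt.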

  14. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show how calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted in a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
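A MEMS accelerometer at rest measures the gravity vector, from which mount tilt follows directly; the calibration the abstract describes refines exactly this kind of raw attitude estimate. A minimal sketch (the axis conventions and function names are assumptions, not the authors' implementation):

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll (degrees) from a static 3-axis accelerometer reading,
    assuming only gravity acts on the sensor (any common unit for the axes)."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

With the z-axis pointing up, a level sensor reads (0, 0, 1 g) and both angles are zero; tipping the x-axis straight up gives 90 degrees of pitch.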

  15. Ankle Kinematics Described By Means Of Stereophotogrammetry And Mathematical Modelling

    NASA Astrophysics Data System (ADS)

    Allard, Paul; Nagata, Susan D.; Duhaime, Morris; Labelle, Hubert; Murphy, Norman

    1986-07-01

    The ankle is a complex structure allowing foot mobility while providing stability. In an attempt to improve the knowledge of the kinematics of the ankle, an approach incorporating both experimental and analytical techniques was developed. Stereophotogrammetry combined with the Direct Linear Transformation (DLT) technique, was used to quantify the spatial displacements of the foot. Four motorized cameras were fixed on a baseboard 0.62 m from a support frame so as to obtain two stereopairs, one medial and one lateral. For a pair, the cameras were 0.52 m apart and maintained a convergent angle of 21.5°. The support frame was designed to fix the tibia while allowing foot motion. A device comprised of 76 markers, 38 of which were visible to each pair of cameras was used for the calibration. The spatial position of each marker was measured to a precision of 0.05 mm whereas their computed spatial position using the DLT technique was accurate to 0.4 mm. For the experiment, two embalmed cadaver legs and feet, amputated at midshank and of normal appearance were used. After a partial dissection, three pin markers were embedded into each of the medial and lateral sides of the talus permitting the calculation of its center of rotation. Each foot was photographed in 5 positions at 10° intervals, ranging from 30 ° of plantarflexion to 10° of dorsiflexion. An analytical model was developed to spatially describe the rotation of the foot about the ankle. The model calculates the plane of motion and the orientation of the axis of rotation relative to the sagittal, frontal and transverse planes. These were found respectively to be for foot one: 100°, 86°, 15° and for foot two: 91°, 69°, 21°.

  16. Memory conformity affects inaccurate memories more than accurate memories.

    PubMed

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  17. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
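The flavour of the method can be seen on a textbook response: cantilever tip deflection scales as h^-3 through the bending inertia, so the sensitivity relation d(delta)/dh = -3*delta/h can be solved as a differential equation in closed form, whereas a linear Taylor expansion cannot follow the curvature. This is an illustration in the spirit of the paper, not its actual formulation:

```python
def delta_exact(h, h0, d0):
    """Tip deflection of a rectangular cantilever: I ~ h^3, so delta ~ h^-3."""
    return d0 * (h0 / h) ** 3

def delta_taylor(h, h0, d0):
    """Linear Taylor approximation about h0, using d(delta)/dh at h0."""
    return d0 * (1.0 - 3.0 * (h - h0) / h0)

def delta_deb(h, h0, d0):
    """Solving d(delta)/dh = -3*delta/h exactly recovers the closed form,
    which for this response coincides with the exact solution."""
    return d0 * (h0 / h) ** 3
```

For a 20% height increase the closed form gives about 0.58 of the baseline deflection while the linear expansion drops to 0.40, illustrating why the differential-equation-based approximation can outperform the Taylor series.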

  18. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  19. Specific Heat Anomalies in Solids Described by a Multilevel Model

    NASA Astrophysics Data System (ADS)

    Souza, Mariano de; Paupitz, Ricardo; Seridonio, Antonio; Lagos, Roberto E.

    2016-04-01

    In the field of condensed matter physics, specific heat measurements can be considered as a pivotal experimental technique for characterizing the fundamental excitations involved in a certain phase transition. Indeed, phase transitions involving spin (de Souza et al. Phys. B Condens. Matter 404, 494 (2009) and Manna et al. Phys. Rev. Lett. 104, 016403 (2010)), charge (Pregelj et al. Phys. Rev. B 82, 144438 (2010)), lattice (Jesche et al. Phys. Rev. B 81, 134525 (2010)) (phonons) and orbital degrees of freedom, the interplay between ferromagnetism and superconductivity (Jesche et al. Phys. Rev. B 86, 020501 (2012)), Schottky-like anomalies in doped compounds (Lagos et al. Phys. C Supercond. 309, 170 (1998)), electronic levels in finite correlated systems (Macedo and Lagos J. Magn. Magn. Mater. 226, 105 (2001)), among other features, can be captured by means of high-resolution calorimetry. Furthermore, the entropy change associated with a first-order phase transition, no matter its nature, can be directly obtained upon integrating the specific heat over T, i.e., C(T)/T, in the temperature range of interest. Here, we report on a detailed analysis of the two-peak specific heat anomalies observed in several materials. Employing a simple multilevel model, varying the spacing between the energy levels Δ_i = (E_i - E_0) and the degeneracy of each energy level g_i, we derive the required conditions for the appearance of such anomalies. Our findings indicate that a ratio of Δ_2/Δ_1 ≈ 10 between the energy levels and a high degeneracy of one of the energy levels define the two-peaks regime in the specific heat. Our approach accurately matches recent experimental results. Furthermore, using a mean-field approach, we calculate the specific heat of a degenerate Schottky-like system undergoing a ferromagnetic (FM) phase transition. Our results reveal that as the degeneracy is increased the Schottky maximum in the specific heat becomes narrow while the peak
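The stated two-peak condition is easy to reproduce numerically from the partition function, C(T) = (⟨E²⟩ - ⟨E⟩²)/T² in units with k_B = 1. The level spacings and degeneracy below are chosen to satisfy the quoted ratio of about 10, not fitted to any material:

```python
import math

def specific_heat(T, levels):
    """Heat capacity (k_B = 1) of a discrete multilevel system.
    levels: list of (energy, degeneracy) pairs."""
    ws = [g * math.exp(-e / T) for e, g in levels]
    Z = sum(ws)
    e_mean = sum(e * w for (e, _), w in zip(levels, ws)) / Z
    e2_mean = sum(e * e * w for (e, _), w in zip(levels, ws)) / Z
    return (e2_mean - e_mean ** 2) / T ** 2

# three-level system: Delta_2/Delta_1 = 10 with a highly degenerate top level
levels = [(0.0, 1), (1.0, 1), (10.0, 50)]
```

Scanning T for this system shows a Schottky-like maximum near T ≈ 0.4 (from the Δ = 1 gap), a valley near T ≈ 0.8, and a second, degeneracy-enhanced peak near T ≈ 2.5, i.e. the two-peak anomaly the abstract describes.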

  20. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  1. Judgements about the relation between force and trajectory variables in verbally described ballistic projectile motion.

    PubMed

    White, Peter A

    2013-01-01

    How accurate are explicit judgements about familiar forms of object motion, and how are they made? Participants judged the relations between force exerted in kicking a soccer ball and variables that define the trajectory of the ball: launch angle, maximum height attained, and maximum distance reached. Judgements tended to conform to a simple heuristic that judged force tends to increase as maximum height and maximum distance increase, with launch angle not being influential. Support was also found for the converse prediction, that judged maximum height and distance tend to increase as the amount of force described in the kick increases. The observed judgemental tendencies did not resemble the objective relations, in which force is a function of interactions between the trajectory variables. This adds to a body of research indicating that practical knowledge based on experiences of actions on objects is not available to the processes that generate judgements in higher cognition and that such judgements are generated by simple rules that do not capture the objective interactions between the physical variables.

  3. An alternating renewal process describes the buildup of perceptual segregation

    PubMed Central

    Steele, Sara A.; Tranchina, Daniel; Rinzel, John

    2015-01-01

    For some ambiguous scenes perceptual conflict arises between integration and segregation. Initially, all stimulus features seem integrated. Then abruptly, perhaps after a few seconds, a segregated percept emerges. For example, segregation of acoustic features into streams may require several seconds. In behavioral experiments, when a subject's reports of stream segregation are averaged over repeated trials, one obtains a buildup function, a smooth time course for segregation probability. The buildup function has been said to reflect an underlying mechanism of evidence accumulation or adaptation. During long duration stimuli perception may alternate between integration and segregation. We present a statistical model based on an alternating renewal process (ARP) that generates buildup functions without an accumulative process. In our model, perception alternates during a trial between different groupings, as in perceptual bistability, with random and independent dominance durations sampled from different percept-specific probability distributions. Using this theory, we describe the short-term dynamics of buildup observed on short trials in terms of the long-term statistics of percept durations for the two alternating perceptual organizations. Our statistical-dynamics model describes well the buildup functions and alternations in simulations of pseudo-mechanistic neuronal network models with percept-selective populations competing through mutual inhibition. Even though the competition model can show history dependence through slow adaptation, our statistical switching model, that neglects history, predicts well the buildup function. We propose that accumulation is not a necessary feature to produce buildup. Generally, if alternations between two states exhibit independent durations with stationary statistics then the associated buildup function can be described by the statistical dynamics of an ARP. PMID:25620927
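    The claim that buildup needs no accumulative process can be illustrated with a minimal simulation of an alternating renewal process. This is a sketch under assumed exponential dominance-duration distributions (the paper fits percept-specific distributions to data); the mean durations below are invented for illustration.

```python
import random

def simulate_buildup(mu_int, mu_seg, t_grid, n_trials=2000, seed=0):
    """Estimate the buildup function of an alternating renewal process.

    Each trial starts in the integrated percept; dominance durations are
    drawn independently from exponentials with means mu_int and mu_seg.
    Returns the fraction of trials in the segregated state at each t.
    """
    rng = random.Random(seed)
    counts = [0] * len(t_grid)
    for _ in range(n_trials):
        switches, t, seg = [], 0.0, False
        while t <= t_grid[-1]:
            t += rng.expovariate(1.0 / (mu_seg if seg else mu_int))
            switches.append(t)
            seg = not seg
        for i, tq in enumerate(t_grid):
            n_before = sum(1 for s in switches if s <= tq)
            counts[i] += n_before % 2  # odd number of switches -> segregated
    return [c / n_trials for c in counts]

t_grid = [0.5 * k for k in range(41)]  # 0 .. 20 s
buildup = simulate_buildup(mu_int=3.0, mu_seg=6.0, t_grid=t_grid)
```

Although no evidence accumulates anywhere in the model, the averaged curve rises smoothly from zero toward the stationary segregation probability mu_seg/(mu_int + mu_seg), which is the paper's central point.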

  4. Asphere, O asphere, how shall we describe thee?

    NASA Astrophysics Data System (ADS)

    Forbes, G. W.; Brophy, C. P.

    2008-09-01

    Two key criteria govern the characterization of nominal shapes for aspheric optical surfaces. An efficient representation describes the spectrum of relevant shapes to the required accuracy by using the fewest decimal digits in the associated coefficients. Also, a representation is more effective if it can, in some way, facilitate other processes - such as optical design, tolerancing, or direct human interpretation. With the development of better tools for their design, metrology, and fabrication, aspheric optics are becoming ever more pervasive. As part of this trend, aspheric departures of up to a thousand microns or more must be characterized at almost nanometre precision. For all but the simplest of shapes, this is not as easy as it might sound. Efficiency is therefore increasingly important. Further, metrology tools continue to be one of the weaker links in the cost-effective production of aspheric optics. Interferometry particularly struggles to deal with steep slopes in aspheric departure. Such observations motivated the ideas described in what follows for modifying the conventional description of rotationally symmetric aspheres to use orthogonal bases that boost efficiency. The new representations can facilitate surface tolerancing as well as the design of aspheres with cost-effective metrology options. These ideas enable the description of aspheric shapes in terms of decompositions that not only deliver improved efficiency and effectiveness, but that are also shown to admit direct interpretations. While it's neither poetry nor a cure-all, an old blight can be relieved.
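    The efficiency argument can be illustrated without optics-specific machinery. The sketch below uses Chebyshev polynomials, not Forbes's actual Q-polynomial bases, to show the key property of an orthogonal representation: each coefficient is obtained independently by projection, and a smooth profile is captured by a handful of small, directly interpretable coefficients. The sag profile is invented for illustration.

```python
import math

def cheb_fit(f, n_coef, n_samp=200):
    """Project f onto Chebyshev polynomials T_j on [-1, 1] using
    Gauss-Chebyshev nodes; orthogonality yields each coefficient
    independently, with no ill-conditioned simultaneous solve."""
    nodes = [math.cos(math.pi * (k + 0.5) / n_samp) for k in range(n_samp)]
    coefs = []
    for j in range(n_coef):
        s = sum(f(x) * math.cos(j * math.acos(x)) for x in nodes)
        coefs.append((2.0 if j else 1.0) * s / n_samp)
    return coefs

def cheb_eval(coefs, x):
    """Evaluate the Chebyshev series at x in [-1, 1]."""
    return sum(c * math.cos(j * math.acos(x)) for j, c in enumerate(coefs))

# Toy 'aspheric departure' profile (hypothetical, not a real lens shape)
sag = lambda x: 0.05 * x**4 - 0.02 * x**6
coefs = cheb_fit(sag, 8)
err = max(abs(cheb_eval(coefs, x) - sag(x))
          for x in [k / 50 for k in range(-50, 51)])
```

Because the basis is orthogonal, adding higher-order terms never disturbs the coefficients already found, which is one reason such decompositions need fewer decimal digits per coefficient than monomial expansions.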

  5. Quantization method for describing the motion of celestial systems

    NASA Astrophysics Data System (ADS)

    Christianto, Victor; Smarandache, Florentin

    2015-11-01

    Criticism has arisen concerning the use of the quantization method for describing the motion of celestial systems, arguing that the method oversimplifies the problem and cannot explain other phenomena, for instance planetary migration. Using a quantization method as Nottale and Schumacher did, one can expect to predict new exoplanets with remarkable results. The ``conventional'' theories explaining planetary migration normally use fluid theory involving a diffusion process. Gibson has shown that these migration phenomena could be described via a Navier-Stokes approach. Kiehn's argument was based on an exact mapping between the Schrodinger and Navier-Stokes equations, while our method may be interpreted as an oversimplification of the real planetary migration process, which took place sometime in the past, providing a useful tool for prediction (e.g. other planetoids likely to be observed in the near future, around 113.8 AU and 137.7 AU). Therefore, the quantization method could be seen as merely a ``plausible'' theory. We would like to emphasize that the quantization method does not have to be the true description of reality with regard to celestial phenomena. It may explain some phenomena, while perhaps lacking explanation for others.

  6. Expansion of functions describing planetary surface and gravity field

    NASA Astrophysics Data System (ADS)

    Valeyev, S. G.

    1985-02-01

    The problem of description of the surface and gravity field of planets is examined using an expansion in spherical and other functions, with particular consideration of the expansion of lunar relief in spherical functions. The factors exerting an influence on approximating expressions can be divided into two groups. The first group includes errors generated by observational errors. Errors in the second group, generated by the mathematical description itself, are stressed here. The approach used in solving the problem is statistical (regression) modeling. This approach is applied in an expansion of a function describing averaged surface relief by a number of spherical harmonics. The numerical example presented shows that the use of regression modeling makes it possible to obtain expansions with approximately half as many terms as the ordinary approach, with the same or higher descriptive accuracy. Also examined are the problems caused by the great dimensionality of the problems and the diversity of variants of initial data. The described approach gives adequate but economical models of relief and the gravity field.
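    The abstract does not give the regression-modeling algorithm; a generic greedy forward-selection sketch over an orthogonal harmonic basis illustrates how choosing only the most informative terms can halve the expansion length. The one-dimensional cosine basis and the synthetic 'relief profile' below are illustrative stand-ins for spherical harmonics and lunar relief.

```python
import math

def forward_select(basis, xs, y, n_terms):
    """Greedy regression modelling: at each step add the basis function
    that most reduces the residual sum of squares (coefficients by 1-D
    projection onto the residual; exact here because the basis is
    orthogonal on the sample grid)."""
    residual = list(y)
    chosen = []
    for _ in range(n_terms):
        best = None
        used = {j for j, _ in chosen}
        for j, f in enumerate(basis):
            if j in used:
                continue
            col = [f(x) for x in xs]
            denom = sum(v * v for v in col)
            coef = sum(r * v for r, v in zip(residual, col)) / denom
            rss = sum((r - coef * v) ** 2 for r, v in zip(residual, col))
            if best is None or rss < best[0]:
                best = (rss, j, coef, col)
        _, j, coef, col = best
        chosen.append((j, coef))
        residual = [r - coef * v for r, v in zip(residual, col)]
    return chosen, residual

# Synthetic 'relief profile': only two of ten candidate harmonics matter
xs = [2 * math.pi * i / 64 for i in range(64)]
basis = [lambda x, k=k: math.cos(k * x) for k in range(10)]
y = [2.0 * math.cos(x) + 0.5 * math.cos(3 * x) for x in xs]
chosen, residual = forward_select(basis, xs, y, n_terms=2)
```

Two selected terms reproduce the profile exactly here, whereas a fixed truncation to the first n harmonics would waste coefficients on terms that contribute nothing.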

  7. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.

  8. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    NASA Technical Reports Server (NTRS)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  9. Radiometrically accurate thermal imaging in the Landsat program

    NASA Astrophysics Data System (ADS)

    Lansing, Jack C., Jr.

    1988-01-01

    Methods of calibrating Landsat TM thermal IR data have been developed so that the residual error is reduced to 0.9 K (1 standard deviation). Methods for verifying the radiometric performance of TM on orbit and ground calibration methods are discussed. The preliminary design of the enhanced TM for Landsat-6 is considered. A technique for accurately reducing raw data from the Landsat-5 thermal band is described in detail.

  10. Observing Insects.

    ERIC Educational Resources Information Center

    Arbel, Ilil

    1991-01-01

    Describes how to observe and study the fascinating world of insects in public parks, backyards, and gardens. Discusses the activities and habits of several common insects. Includes addresses for sources of beneficial insects, seeds, and plants. (nine references) (JJK)

  11. Describing interactions in dystocia scores with a threshold model.

    PubMed

    Quaas, R L; Zhao, Y; Pollak, E J

    1988-02-01

    Field data on calving difficulty scores provided by the American Simmental Association were subjected to two methods of analysis: ordinary least-squares analysis and maximum likelihood with an assumed threshold model. In each analysis, the model included the interaction of sex of calf × age of dam. This interaction was readily apparent in the data (observed scale): within the youngest dams 58% of the heifer calves and 37% of the bull calves were born unassisted vs 96% and 92%, respectively, in the oldest dams. The objective was to determine if this interaction would be greatly reduced or would disappear on the underlying scale of a threshold model. The least-squares estimate of the sex difference was greatest within the youngest age-of-dam group (18 to 24 mo) and steadily declined with increasing age of dam, approaching zero for dams 6 yr and older. In contrast, the estimates of the sex difference from the threshold analysis were remarkably similar across ages of dam. It was concluded that observed interactions in calving ease data could be adequately described by a threshold model in which the effects of age of dam and sex of calf act additively on the underlying variable.
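    The threshold-model idea can be sketched numerically: proportions on the observed scale are mapped to the underlying (liability) scale with the inverse normal CDF, and the sex difference that shrinks with dam age on the observed scale becomes far more stable on the liability scale. The proportions are those quoted in the abstract; the probit here is a simple bisection on math.erf, not the paper's maximum-likelihood machinery.

```python
import math

def probit(p, lo=-8.0, hi=8.0):
    """Inverse standard normal CDF by bisection on
    Phi(z) = (1 + erf(z / sqrt(2))) / 2, for p in (0, 1)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Proportions born unassisted, from the abstract
young = {"heifer": 0.58, "bull": 0.37}   # youngest dams
old = {"heifer": 0.96, "bull": 0.92}     # oldest dams

# Sex difference on the observed scale shrinks strongly with dam age ...
obs_diff_young = young["heifer"] - young["bull"]   # 0.21
obs_diff_old = old["heifer"] - old["bull"]         # 0.04
# ... but is much more stable on the underlying liability scale
liab_diff_young = probit(young["heifer"]) - probit(young["bull"])
liab_diff_old = probit(old["heifer"]) - probit(old["bull"])
```

The observed-scale difference changes by a factor of about five between age groups, while the liability-scale difference changes by well under a factor of two, mirroring the paper's conclusion that the effects act additively on the underlying variable.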

  12. Accurate energy levels for singly ionized platinum (Pt II)

    NASA Technical Reports Server (NTRS)

    Reader, Joseph; Acquista, Nicolo; Sansonetti, Craig J.; Engleman, Rolf, Jr.

    1988-01-01

    New observations of the spectrum of Pt II have been made with hollow-cathode lamps. The region from 1032 to 4101 A was observed photographically with a 10.7-m normal-incidence spectrograph. The region from 2245 to 5223 A was observed with a Fourier-transform spectrometer. Wavelength measurements were made for 558 lines. The uncertainties vary from 0.0005 to 0.004 A. From these measurements and three parity-forbidden transitions in the infrared, accurate values were determined for 28 even and 72 odd energy levels of Pt II.

  13. On the importance of having accurate data for astrophysical modelling

    NASA Astrophysics Data System (ADS)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from far infrared to sub-millimeter with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on the ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  14. On the Accurate Prediction of CME Arrival At the Earth

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Hess, Phillip

    2016-07-01

    We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out 3-D measurements by superimposing geometries onto the ejecta and sheath separately. These measurements are then used to constrain a Drag-Based Model, which is improved by including a height dependence of the drag coefficient. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours, with an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours with an RMS error within an hour. Through using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitations and implications of our accurate prediction method will be discussed.
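    The basic Drag-Based Model that the authors constrain can be sketched in a few lines. This is the standard constant-gamma form, without the paper's height-dependent refinement; the initial speed, release height, wind speed, and drag parameter below are invented, plausible values, not measurements from the paper.

```python
def drag_based_arrival(r0_km, v0_kms, w_kms, gamma_per_km, dt_s=60.0,
                       r_target_km=1.496e8):
    """Euler integration of the basic drag-based model (DBM):
    dv/dt = -gamma * |v - w| * (v - w), with w the solar-wind speed.
    Returns (transit time in hours, speed at the target distance)."""
    r, v, t = r0_km, v0_kms, 0.0
    while r < r_target_km:
        v += -gamma_per_km * abs(v - w_kms) * (v - w_kms) * dt_s
        r += v * dt_s
        t += dt_s
    return t / 3600.0, v

# Hypothetical event: an 800 km/s CME released at 20 solar radii into a
# 400 km/s wind, with a typical drag parameter gamma ~ 2e-8 per km
t_arr_h, v_1au = drag_based_arrival(r0_km=20 * 6.96e5, v0_kms=800.0,
                                    w_kms=400.0, gamma_per_km=2.0e-8)
```

A fast CME decelerates toward the ambient wind speed, so the predicted 1 AU speed always lies between the wind speed and the launch speed; the arrival time is what gets compared against in-situ shock and ejecta signatures.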

  15. Photoacoustic computed tomography without accurate ultrasonic transducer responses

    NASA Astrophysics Data System (ADS)

    Sheng, Qiwei; Wang, Kun; Xia, Jun; Zhu, Liren; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Consequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems into one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  16. An accurate geometric distance to the compact binary SS Cygni vindicates accretion disc theory.

    PubMed

    Miller-Jones, J C A; Sivakoff, G R; Knigge, C; Körding, E G; Templeton, M; Waagen, E O

    2013-05-24

    Dwarf novae are white dwarfs accreting matter from a nearby red dwarf companion. Their regular outbursts are explained by a thermal-viscous instability in the accretion disc, described by the disc instability model that has since been successfully extended to other accreting systems. However, the prototypical dwarf nova, SS Cygni, presents a major challenge to our understanding of accretion disc theory. At the distance of 159 ± 12 parsecs measured by the Hubble Space Telescope, it is too luminous to be undergoing the observed regular outbursts. Using very long baseline interferometric radio observations, we report an accurate, model-independent distance to SS Cygni that places the source substantially closer at 114 ± 2 parsecs. This reconciles the source behavior with our understanding of accretion disc theory in accreting compact objects.
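    The resolution of the luminosity problem follows from the inverse-square law: at fixed observed flux, the inferred luminosity scales as distance squared, so revising the distance from 159 pc down to 114 pc roughly halves the implied luminosity. A one-line sketch:

```python
def luminosity_ratio(d_new_pc, d_old_pc):
    """At fixed observed flux F, L = 4*pi*d^2*F, so the inferred
    luminosity scales as the square of the assumed distance."""
    return (d_new_pc / d_old_pc) ** 2

# Revised VLBI distance vs the earlier HST distance for SS Cygni
ratio = luminosity_ratio(114.0, 159.0)  # ≈ 0.51
```

The roughly factor-of-two reduction in luminosity is what brings SS Cygni's outburst behavior back into agreement with the disc instability model.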

  18. An articulated statistical shape model for accurate hip joint segmentation.

    PubMed

    Kainmueller, Dagmar; Lamecker, Hans; Zachow, Stefan; Hege, Hans-Christian

    2009-01-01

    In this paper we propose a framework for fully automatic, robust and accurate segmentation of the human pelvis and proximal femur in CT data. We propose a composite statistical shape model of femur and pelvis with a flexible hip joint, for which we extend the common definition of statistical shape models as well as the common strategy for their adaptation. We do not analyze the joint flexibility statistically, but model it explicitly by rotational parameters describing the bend in a ball-and-socket joint. A leave-one-out evaluation on 50 CT volumes shows that image-driven adaptation of our composite shape model robustly produces accurate segmentations of both proximal femur and pelvis. As a second contribution, we evaluate a fine-grained multi-object segmentation method based on graph optimization. It relies on accurate initializations of femur and pelvis, which our composite shape model can generate. Simultaneous optimization of both femur and pelvis yields more accurate results than separate optimizations of each structure. Shape model adaptation and graph based optimization are embedded in a fully automatic framework. PMID:19964159

  19. Observation Station

    ERIC Educational Resources Information Center

    Rutherford, Heather

    2011-01-01

    This article describes how a teacher integrates science observations into the writing center. At the observation station, students explore new items with a science theme and use their notes and questions for class writings every day. Students are exposed to a variety of different topics and motivated to write in different styles all while…

  20. A six-parameter space to describe galaxy diversification

    NASA Astrophysics Data System (ADS)

    Fraix-Burnet, D.; Chattopadhyay, T.; Chattopadhyay, A. K.; Davoust, E.; Thuillard, M.

    2012-09-01

    Context. The diversification of galaxies is caused by transforming events such as accretion, interaction, or mergers. These explain the formation and evolution of galaxies, which can now be described by many observables. Multivariate analyses are the obvious tools to tackle the available datasets and understand the differences between different kinds of objects. However, depending on the method used, redundancies, incompatibilities, or subjective choices of the parameters can diminish the usefulness of these analyses. The behaviour of the available parameters should be analysed before any objective reduction in the dimensionality and any subsequent clustering analyses can be undertaken, especially in an evolutionary context. Aims: We study a sample of 424 early-type galaxies described by 25 parameters, 10 of which are Lick indices, to identify the most discriminant parameters and construct an evolutionary classification of these objects. Methods: Four independent statistical methods are used to investigate the discriminant properties of the observables and the partitioning of the 424 galaxies: principal component analysis, K-means cluster analysis, minimum contradiction analysis, and Cladistics. Results: The methods agree in terms of six parameters: central velocity dispersion, disc-to-bulge ratio, effective surface brightness, metallicity, and the line indices NaD and OIII. The partitioning found using these six parameters, when projected onto the fundamental plane, looks very similar to the partitioning obtained previously for a totally different sample and based only on the parameters of the fundamental plane. Two additional groups are identified here, and we are able to provide some more constraints on the assembly history of galaxies within each group thanks to the larger number of parameters. We also identify another "fundamental plane" with the absolute K magnitude, the linear diameter, and the Lick index Hβ. We confirm that the Mg b vs. velocity dispersion
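    One of the four methods, K-means clustering, can be sketched generically in the six-parameter space the study identifies. The data below are synthetic standardized values for two hypothetical galaxy groups, not the paper's 424-galaxy sample, and a real analysis would standardize features and compare several initializations and methods, as the authors do.

```python
import random

# The six discriminant parameters found by the study
FEATURES = ["central velocity dispersion", "disc-to-bulge ratio",
            "effective surface brightness", "metallicity", "NaD", "OIII"]

def kmeans(points, centroids, n_iter=20):
    """Minimal Lloyd's algorithm: assign each point to its nearest
    centroid, then move each centroid to its cluster mean."""
    k = len(centroids)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl))
                     if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Synthetic standardized 'galaxies': two well-separated groups in 6-D
rng = random.Random(1)
group_a = [tuple(rng.gauss(0.0, 0.5) for _ in FEATURES) for _ in range(20)]
group_b = [tuple(rng.gauss(3.0, 0.5) for _ in FEATURES) for _ in range(20)]
points = group_a + group_b
centroids, clusters = kmeans(points, [points[0], points[-1]])
```

Agreement between such a partition and independent methods (PCA, cladistics, minimum contradiction analysis) is what gives the six-parameter space its credibility in the paper.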

  1. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in chemiluminescent gas analyzer and use of air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In modified analyzer, molybdenum has high tolerance to CO, and air purge substantially quenches NOx destruction. In test, modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  2. Inference of random walk models to describe leukocyte migration

    NASA Astrophysics Data System (ADS)

    Jones, Phoebe J. M.; Sim, Aaron; Taylor, Harriet B.; Bugeon, Laurence; Dallman, Magaret J.; Pereira, Bernard; Stumpf, Michael P. H.; Liepe, Juliane

    2015-12-01

    While the majority of cells in an organism are static and remain relatively immobile in their tissue, migrating cells occur commonly during developmental processes and are crucial for a functioning immune response. The mode of migration has been described in terms of various types of random walks. To understand the details of the migratory behaviour we rely on mathematical models and their calibration to experimental data. Here we propose an approximate Bayesian inference scheme to calibrate a class of random walk models characterized by a specific, parametric particle re-orientation mechanism to observed trajectory data. We elaborate the concept of transition matrices (TMs) to detect random walk patterns and determine a statistic to quantify these TM to make them applicable for inference schemes. We apply the developed pipeline to in vivo trajectory data of macrophages and neutrophils, extracted from zebrafish that had undergone tail transection. We find that macrophage and neutrophils exhibit very distinct biased persistent random walk patterns, where the strengths of the persistence and bias are spatio-temporally regulated. Furthermore, the movement of macrophages is far less persistent than that of neutrophils in response to wounding.
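    The contrast between persistent and weakly persistent migration can be illustrated with a minimal 2-D random walk. The sketch below uses a scalar turning-angle statistic as a stand-in for the paper's transition-matrix machinery; the 'neutrophil-like' and 'macrophage-like' turning-angle spreads are hypothetical values, not fitted parameters from the zebrafish data.

```python
import math
import random

def persistent_walk_angles(n_steps, sigma, seed=0):
    """Headings of a 2-D persistent random walk: each step turns by a
    Gaussian angle with standard deviation sigma (radians). Small sigma
    gives persistent motion; sigma near pi is nearly uncorrelated."""
    rng = random.Random(seed)
    theta = rng.uniform(-math.pi, math.pi)
    headings = []
    for _ in range(n_steps):
        theta += rng.gauss(0.0, sigma)
        headings.append(theta)
    return headings

def persistence_index(headings):
    """Mean cosine of successive turning angles: near 1 for nearly
    straight motion, near 0 for uncorrelated re-orientation."""
    turns = [b - a for a, b in zip(headings, headings[1:])]
    return sum(math.cos(t) for t in turns) / len(turns)

# Hypothetical 'neutrophil-like' (persistent) vs 'macrophage-like' walker
p_neutrophil = persistence_index(persistent_walk_angles(500, sigma=0.3))
p_macrophage = persistence_index(persistent_walk_angles(500, sigma=1.5))
```

Inferring sigma-like re-orientation parameters from observed trajectories, rather than computing a summary index, is where the paper's approximate Bayesian scheme comes in.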

  3. Male powerlifting performance described from the viewpoint of complex systems.

    PubMed

    García-Manso, J M; Martín-González, J M; Da Silva-Grigoletto, M E; Vaamonde, D; Benito, P; Calderón, J

    2008-04-01

    This paper reflects on the factors that condition performance in powerlifting and proposes that the result-generating process is inadequately described by the allometric equations commonly used. We analysed the scores of 1812 lifters belonging to all body mass categories, and analysed the changes in the results achieved in each weight category and by each competitor. Current performance-predicting methods take into account biological variables, paying no heed to other competition features. Performance in male powerlifting (as in other strength sports) behaves as a self-organised system with non-linear interactions between its components. Thus, multiple internal and external elements must condition changes in a competitor's score, the most important being body mass, body size, the number of practitioners, and the concurrency of favourable factors in one individual. Each category was observed to behave in a specific way at the high level, according to the individuals' circumstances that make up the main elements of the competitive system in every category. In powerlifting, official weight categories are generally organised in three different groups: light (<52.0 to <60 kg), medium (<67.5 to <90.0 kg) and heavy (<100 to >125 kg) lifter categories, each of them with specific allometric exponents. The exponent should be revised periodically, especially with regard to the internal dynamics of the category, and adjusted according to possible changes affecting competition.
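    The allometric adjustment under discussion can be shown in two lines. The classical geometric-similarity exponent b = 2/3 is used here for illustration; the paper's point is precisely that a single fixed exponent is inadequate and that each category group needs its own, periodically revised value. The lifter totals below are hypothetical.

```python
def allometric_score(total_kg, body_mass_kg, b=2.0 / 3.0):
    """Allometrically adjusted strength score: total / body_mass**b.
    b = 2/3 is the classical geometric-similarity exponent."""
    return total_kg / body_mass_kg ** b

# Hypothetical lifters: the raw total favours the heavier athlete,
# but the adjusted score can reverse the ranking
light = allometric_score(600.0, 60.0)    # ≈ 39.1
heavy = allometric_score(900.0, 125.0)   # ≈ 36.0
```

Because the adjusted ranking is so sensitive to b, fitting separate exponents for light, medium and heavy categories (as the paper advocates) materially changes who counts as the 'best' lifter.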

  4. The first described joint-associated intraneural ganglion cyst.

    PubMed

    Spinner, Robert J; Wang, Huan

    2011-12-01

    This article describes the identification of the first known specimen in which an articular origin for an intraneural cyst was recognized. Prompted by early citations in the 20th century of a valuable 1904 tibial intraneural ganglion housed at St. Bartholomew's Hospital in London, we traveled there to research it. We fortuitously discovered a citation to an earlier joint-related specimen, one that had not previously been referenced correctly in subsequent publications on intraneural cysts for more than a century. The original anatomic description dating to 1884, summarized in 3 lines in a museum catalog, was attributed to T. Swinford Edwards. This cadaveric specimen affected the deep branch of the ulnar nerve and arose from a carpal joint. Additional information was provided in a Transactions in 1884. An original drawing of the specimen was published in a textbook written in 1889 by Anthony Bowlby, a former curator, both of which credited F. (Frederick) Swinford Edwards, a demonstrator in anatomy and surgery at St. Bartholomew's. Unfortunately, the specimen could not be located and is presumed lost. To establish this specimen as the first known example of a joint-related intraneural cyst, we completed a review of >400 other cases and confirmed this statement. The first observation of an articular origin for an intraneural cyst, made by 2 eminent surgeons, has not been properly acknowledged. Considered with a modern perspective, this historical case solidifies the articular (synovial) origin for these unusual intraneural cysts, a finding that has important treatment implications.

  5. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  6. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  7. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
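    The comparison of measured fringes against the two-beam model can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the 20 µm path difference, the 0.37 nm offset, and the simple grid search are all invented for the demo.

```python
import numpy as np

def channeled_spectrum(wavelengths_nm, opd_nm):
    """Two-beam interference from an unbalanced Michelson:
    I(lambda) ~ 1 + cos(2*pi*OPD/lambda)."""
    return 1.0 + np.cos(2 * np.pi * opd_nm / wavelengths_nm)

# True wavelengths sampled by the CCD pixels (hypothetical 400-700 nm range).
true_wl = np.linspace(400.0, 700.0, 2048)
opd = 20000.0  # 20 um optical path difference (invented)

# The factory calibration reports wavelengths with a small offset.
offset_true = 0.37                            # nm, unknown to the routine
measured = channeled_spectrum(true_wl, opd)   # what the pixels record
reported_wl = true_wl + offset_true           # mislabeled wavelength axis

# Recover the offset by scanning candidate corrections and minimizing the
# mismatch between the predicted fringe pattern and the recorded one.
candidates = np.arange(-1.0, 1.0, 0.001)
errors = [np.sum((channeled_spectrum(reported_wl - c, opd) - measured) ** 2)
          for c in candidates]
best = candidates[int(np.argmin(errors))]
print(round(best, 3))  # → 0.37
```

    A real calibration would fit a per-pixel wavelength polynomial rather than a single offset, but the principle of matching the predicted channeled spectrum is the same.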

  8. Accurate nuclear radii and binding energies from a chiral interaction

    SciTech Connect

    Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold

    2015-05-01

    With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  9. Accurate nuclear radii and binding energies from a chiral interaction

    DOE PAGES

    Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold

    2015-05-01

    With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  10. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  11. Representing the observer in electro-optical target acquisition models.

    PubMed

    Vollmerhausen, Richard H

    2009-09-28

    Electro-optical target acquisition models predict the probability that a human observer recognizes or identifies a target. To accurately model targeting performance, the impact of imager blur and noise on human vision must be quantified. In the most widely used target acquisition models, human vision is treated as a "black box" that is characterized by its signal transfer response and detection thresholds. This paper describes an engineering model of observer vision. Characteristics of the observer model are compared to psychophysical data. This paper also describes how to integrate the observer model into both reflected light and thermal sensor models. PMID:19907512

  12. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  13. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMM) and CNC-controlled gear measuring machines as a variant of a CMM are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
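    The closure idea can be illustrated with a toy simulation (all numbers invented; a sketch of the principle, not either institute's procedure): measuring the artifact in every rotated position lets the rotation-invariant instrument error be averaged out and separated from the gear's own pitch deviations.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12  # number of teeth / angular positions (hypothetical)

artifact = rng.normal(0, 1.0, N)    # true pitch deviations of the gear
artifact -= artifact.mean()         # pitch deviations close around the circle
instrument = rng.normal(0, 0.5, N)  # systematic machine error per position
instrument -= instrument.mean()

# Measure the artifact in every rotated position r: in position r, the
# tooth landing at machine position k is tooth (k - r) mod N.
readings = np.array([[artifact[(k - r) % N] + instrument[k]
                      for k in range(N)] for r in range(N)])

# Averaging over rotations cancels the artifact at each machine position,
# leaving the instrument error; subtracting it recovers the gear deviations.
instrument_est = readings.mean(axis=0)
artifact_est = np.array([np.mean([readings[r, (t + r) % N]
                                  - instrument_est[(t + r) % N]
                                  for r in range(N)]) for t in range(N)])

print(np.allclose(instrument_est, instrument),
      np.allclose(artifact_est, artifact))  # → True True
```

    The real procedures also handle probe drift and thermal effects, but the separation of device and artifact errors rests on exactly this averaging argument.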

  14. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  15. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  16. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
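    The beat cue these subjects exploited is easy to reproduce numerically. A minimal sketch (tone frequencies are hypothetical): two tones 0.8 Hz apart sound as one tone pulsing at |f1 - f2| = 0.8 Hz, and that pulse rate, not the pitch difference, is what the tuner drives toward zero.

```python
import numpy as np

fs, dur = 8000, 10.0
t = np.arange(0, dur, 1 / fs)
f_ref, f_string = 110.0, 110.8  # reference tone vs. slightly sharp string (invented)

# Two simultaneous tones: perceived as one tone beating at |f1 - f2|.
mix = np.sin(2 * np.pi * f_ref * t) + np.sin(2 * np.pi * f_string * t)

# Recover the two component frequencies from the spectrum; their difference
# is the beat rate a listener hears as a slow pulsing of loudness.
spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
top_two = freqs[np.argsort(spectrum)[-2:]]
beat = abs(top_two[1] - top_two[0])
print(beat)  # ~0.8 Hz, up to floating-point resolution
```

    A 0.8 Hz beat is trivially countable by ear even through an implant, whereas discriminating a 0.8 Hz pitch difference at 110 Hz is far beyond typical CI pitch resolution, which is the paper's point.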

  17. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
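    The kind of performance model described can be sketched with a bulk-synchronous cost function (a generic simplification, not the paper's model; all constants invented): per-step time is governed by the most heavily loaded processor, so imbalance directly raises predicted runtime.

```python
def predict_step_time(partition_sizes, t_cell, boundary_cells, t_comm):
    """Bulk-synchronous sketch: per-step time = the slowest processor's
    compute (cells * time-per-cell) plus its boundary-exchange cost."""
    return max(n * t_cell + b * t_comm
               for n, b in zip(partition_sizes, boundary_cells))

# Hypothetical 1-D grid of 1000 cells on 4 processors: balanced vs. skewed.
balanced = predict_step_time([250] * 4, 1e-6, [2] * 4, 5e-5)
skewed = predict_step_time([400, 200, 200, 200], 1e-6, [2] * 4, 5e-5)
print(balanced < skewed)  # → True: imbalance raises the per-step maximum
```

    A remapping scheduler would evaluate such a model for candidate partitions and remap when the predicted saving exceeds the remapping cost.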

  18. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

    Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. For the analysis to be accurate, the composition of the sampled gas must be representative of the whole and related to flow. When it is, measurement and sampling techniques complement each other, gas volumes are accurately accounted for, and adjustments to composition can be made.

  19. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates the optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made to model mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiment required to model them.

  20. A Tensorial Connectivity–Tortuosity Concept to Describe the Unsaturated Hydraulic Properties of Anisotropic Soils

    SciTech Connect

    Zhang, Z. F.; Ward, Anderson L.; Gee, Glendon W.

    2003-08-15

    Natural soils are often anisotropic and the anisotropy in unsaturated hydraulic conductivity is saturation-dependent. A tensorial connectivity-tortuosity (TCT) concept was proposed to describe unsaturated soil hydraulic properties. The TCT concept states that soil pore connectivity and/or tortuosity are anisotropic and can be described using a tensor. The anisotropic hydraulic properties can then be described by extending the existing hydraulic functions, e.g., the Burdine (1953) and the Mualem (1976) models, in such a way that the connectivity-tortuosity coefficient (L) is a tensor. The TCT concept was tested using synthetic Miller-similar soils with four levels of heterogeneity and four levels of anisotropy. The results show that the soil water retention curves were independent of soil anisotropy but dependent on soil heterogeneity. The TCT model can accurately describe the unsaturated hydraulic functions of anisotropic soils. The value of L is a function of both soil heterogeneity and anisotropy.
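    The effect of a tensorial L can be seen with a back-of-the-envelope sketch based on the Mualem-van Genuchten relative conductivity (parameter values are hypothetical, not from the paper): with different exponents in two directions, the anisotropy ratio K_h/K_v grows as the soil dries.

```python
def mualem_kr(se, L, m=0.5):
    """Mualem-van Genuchten relative conductivity with connectivity-
    tortuosity exponent L (all parameter values invented for the demo)."""
    return se ** L * (1 - (1 - se ** (1 / m)) ** m) ** 2

# TCT sketch: a different L in the horizontal vs. vertical direction makes
# the anisotropy ratio K_h/K_v depend on effective saturation Se.
L_h, L_v = 0.5, 1.5
for se in (0.9, 0.5, 0.2):
    ratio = mualem_kr(se, L_h) / mualem_kr(se, L_v)
    print(se, round(ratio, 3))  # ratio = se**(L_h - L_v): 1.111, 2.0, 5.0
```

    Because the bracketed Mualem term cancels in the ratio, the anisotropy here reduces to se**(L_h - L_v), which illustrates why a scalar L cannot capture saturation-dependent anisotropy.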

  1. Towards a Density Functional Theory Exchange-Correlation Functional able to describe localization/delocalization

    NASA Astrophysics Data System (ADS)

    Mattsson, Ann E.; Wills, John M.

    2013-03-01

    The inability to computationally describe the physics governing the properties of actinides and their alloys is the poster child of failure of existing Density Functional Theory exchange-correlation functionals. The intricate competition between localization and delocalization of the electrons, present in these materials, exposes the limitations of functionals designed to properly describe only one or the other situation. We will discuss the manifestation of this competition in real materials and propositions on how to construct a functional able to accurately describe properties of these materials. In addition, we will discuss both the importance of using the Dirac equation to describe the relativistic effects in these materials and the connection to the physics of transition metal oxides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  2. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  3. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-04-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  4. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  5. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  6. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  7. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  8. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. Although it corrects the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.

  9. New Techniques and Metrics for Describing Rivers Using High Resolution Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Bailey, P.; McKean, J. A.; Poulsen, F.; Ochoski, N.; Wheaton, J. M.

    2013-12-01

    Techniques for collecting high resolution digital elevation models (DEMs) of fluvial environments are cheaper and more widely accessible than ever before. These DEMs improve over traditional transect-based approaches because they represent the channel bed as a continuous surface. Advantages beyond the obvious more accurate representations of channel area and volume include the three dimensional representation of geomorphic features that directly influence the behavior of river organisms. It is possible to identify many of these habitats using topography alone, but when combined with the spatial arrangement of these areas within the channel, a more holistic view of biologic existence can be gleaned from the three dimensional representation of the channel. We present a new approach for measuring and describing channels that leverages the continuous nature of digital elevation model surfaces. Delivered via the River Bathymetry Toolkit (RBT), this approach not only reproduces the traditional transect-based metrics but also includes novel techniques for generating stage-independent channel measurements, regardless of the flow that occurred at the time of data capture. The RBT can also measure changes over time, accounting for uncertainty using approaches adopted from the Geomorphic Change Detection (GCD) literature and producing maps and metrics for erosion and deposition. This new approach is available via the River Bathymetry Toolkit, which is structured to enable repeat systematic measurements over an unlimited number of sites. We present how this approach has been applied to over 500 sites in the Pacific Northwest as part of the Columbia Habitat Mapping Program (CHaMP). We demonstrate the new channel metrics for a range of these sites, both at the observed and simulated flows as well as examples of changes in channel morphology over time. We present an analysis comparing these new metrics against traditional transect-based

  10. Oral Reading Observation System Observer's Training Manual.

    ERIC Educational Resources Information Center

    Brady, Mary Ella; And Others

    A self-instructional program for use by teachers of the handicapped, this training manual was developed to teach accurate coding with the Oral Reading Observation System (OROS), an observation system designed to code teacher-pupil verbal interaction during oral reading instruction. The body of the manual is organized to correspond to the nine…

  11. DETAIL OF PLAQUE DESCRIBING LION SCULPTURES BY ROLAND HINTON PERRY, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL OF PLAQUE DESCRIBING LION SCULPTURES BY ROLAND HINTON PERRY, NORTHWEST ABUTMENT - Connecticut Avenue Bridge, Spans Rock Creek & Potomac Parkway at Connecticut Avenue, Washington, District of Columbia, DC

  12. Library preparation for highly accurate population sequencing of RNA viruses

    PubMed Central

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
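    The consensus step at the heart of CirSeq reduces to a per-position majority vote across the tandem repeat copies within a read. A minimal sketch (the read and error positions are invented; real pipelines also handle quality scores and locating the repeat boundary):

```python
from collections import Counter

def consensus_from_tandem_read(read, unit_len):
    """Collapse a read made of tandem repeats of one RNA fragment into a
    consensus by majority vote at each position across the repeat copies."""
    copies = [read[i:i + unit_len] for i in range(0, len(read), unit_len)]
    copies = [c for c in copies if len(c) == unit_len]  # drop partial tail
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))

# Hypothetical read: three copies of the template, each with one independent
# sequencing error; the vote restores the original sequence.
template = "ACGTACGGT"
copies_with_errors = ["ACGAACGGT",   # T→A error at position 3
                      "CCGTACGGT",   # A→C error at position 0
                      "ACGTACGGA"]   # T→A error at position 8
read = "".join(copies_with_errors)
print(consensus_from_tandem_read(read, len(template)))  # → ACGTACGGT
```

    Because each copy's errors are independent, the chance that the same error survives a three-way vote is roughly the per-base error rate squared, which is what pushes CirSeq's effective error rate below typical RNA-virus variant frequencies.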

  13. Calibrating X-ray Imaging Devices for Accurate Intensity Measurement

    SciTech Connect

    Haugh, M. J.

    2011-07-28

    The purpose of this project is to develop methods to accurately calibrate X-ray imaging devices. The approach was to develop X-ray source systems suitable for this endeavor and methods to calibrate solid-state detectors to measure source intensity. NSTec X-ray sources used for the absolute calibration of cameras are described, as well as the method of calibrating the source by calibrating the detectors. The work resulted in calibration measurements for several types of X-ray cameras, quantifying each camera's efficiency and its variation over the sensor. Camera types calibrated include CCD, CID, back-thinned (back-illuminated), and front-illuminated.

  14. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  15. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  16. Fractionating Polymer Microspheres as Highly Accurate Density Standards.

    PubMed

    Bloxham, William H; Hennek, Jonathan W; Kumar, Ashok A; Whitesides, George M

    2015-07-21

    This paper describes a method of isolating small, highly accurate density-standard beads and characterizing their densities using accurate and experimentally traceable techniques. Density standards have a variety of applications, including the characterization of density gradients, which are used to separate objects in a variety of fields. Glass density-standard beads can be very accurate (±0.0001 g cm(-3)) but are too large (3-7 mm in diameter) for many applications. When smaller density standards are needed, commercial polymer microspheres are often used. These microspheres have standard deviations in density ranging from 0.006 to 0.021 g cm(-3); such broad distributions in density make these microspheres impractical for applications demanding small steps in density. In this paper, commercial microspheres are fractionated using aqueous multiphase systems (AMPS), aqueous mixtures of polymers and salts that spontaneously separate into phases having molecularly sharp steps in density, to isolate microspheres having much narrower distributions in density (standard deviations from 0.0003 to 0.0008 g cm(-3)) than the original microspheres. By reducing the heterogeneity in densities, this method reduces the uncertainty in the density of any specific bead and, therefore, improves the accuracy within the limits of the calibration standards used to characterize the distributions in density.

  17. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate and, in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
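    The contrast between the two approximations can be illustrated with a toy example. For a uniform cantilever, the first bending frequency scales as L^-2 with beam length, so the sensitivity relation df/dL = -2f/L can be integrated in closed form, whereas the linear Taylor series degrades for large perturbations. The scaling law, function names, and 30% perturbation below are illustrative, not taken from the paper.

```python
def exact_freq(L, C=1.0):
    # First bending frequency of a uniform cantilever scales as L^-2.
    return C / L**2

def taylor_freq(L, L0, C=1.0):
    # Linear Taylor series about the baseline design L0.
    f0 = exact_freq(L0, C)
    dfdL = -2.0 * f0 / L0          # sensitivity evaluated at L0
    return f0 + dfdL * (L - L0)

def deb_freq(L, L0, C=1.0):
    # Interpret the sensitivity relation df/dL = -2 f / L as a differential
    # equation and integrate it in closed form: f(L) = f0 * (L0 / L)**2.
    f0 = exact_freq(L0, C)
    return f0 * (L0 / L) ** 2

L0, L = 1.0, 1.3                   # a 30% perturbation of beam length
f_exact = exact_freq(L)
f_deb = deb_freq(L, L0)            # matches the exact power law
f_taylor = taylor_freq(L, L0)      # noticeably less accurate here
```

For this power-law response the DEB closed form reproduces the exact answer, while the linear Taylor estimate undershoots by roughly a third of the true value.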

  18. Accurate and Reliable Gait Cycle Detection in Parkinson's Disease.

    PubMed

    Hundza, Sandra R; Hook, William R; Harris, Christopher R; Mahajan, Sunny V; Leslie, Paul A; Spani, Carl A; Spalteholz, Leonhard G; Birch, Benjamin J; Commandeur, Drew T; Livingston, Nigel J

    2014-01-01

    There is a growing interest in the use of Inertial Measurement Unit (IMU)-based systems that employ gyroscopes for gait analysis. We describe an improved IMU-based gait analysis processing method that uses gyroscope angular rate reversal to identify the start of each gait cycle during walking. In validation tests with six subjects with Parkinson's disease (PD), including those with severe shuffling gait patterns, and seven controls, the probabilities of True-Positive and False-Positive event detection were 100% and 0%, respectively. Stride time validation tests using high-speed cameras yielded a standard deviation of 6.6 ms for controls and 11.8 ms for those with PD. These data demonstrate that the use of our angular rate reversal algorithm leads to improvements over previous gyroscope-based gait analysis systems. Highly accurate and reliable stride time measurements enabled us to detect subtle changes in stride time variability following a Parkinson's exercise class. We found unacceptable measurement accuracy for stride length when using the Aminian et al. gyroscope-based biomechanical algorithm, with errors as high as 30% in PD subjects. An alternative method, using synchronized infrared timing gates to measure velocity, combined with accurate mean stride time from our angular rate reversal algorithm, more accurately calculates mean stride length.
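    A minimal sketch of the core idea — marking the start of each gait cycle at a reversal of the angular-rate signal — assuming a one-dimensional gyroscope trace. The negative-to-positive zero-crossing rule and function names below are hypothetical stand-ins, not the authors' implementation.

```python
import math

def cycle_starts(omega):
    """Indices where the angular-rate signal reverses from negative to
    positive -- a simple illustrative event marker for gait-cycle starts."""
    return [i for i in range(1, len(omega))
            if omega[i - 1] < 0.0 <= omega[i]]

def stride_times(starts, fs):
    """Stride times (s) as intervals between successive cycle starts."""
    return [(b - a) / fs for a, b in zip(starts, starts[1:])]

# Demo on a synthetic 1 Hz "shank angular rate" sampled at 100 Hz.
fs = 100.0
omega = [math.sin(2 * math.pi * i / fs) for i in range(300)]
starts = cycle_starts(omega)
strides = stride_times(starts, fs)   # each stride is ~1.0 s here
```

Stride-time variability would then follow as the standard deviation of `strides` across a walking bout.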

  19. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  20. Accurate MTF measurement in digital radiography using noise response

    PubMed Central

    Kuhls-Gilcrist, Andrew; Jain, Amit; Bednarek, Daniel R.; Hoffmann, Kenneth R.; Rudin, Stephen

    2010-01-01

    Purpose: The authors describe a new technique to determine the system presampled modulation transfer function (MTF) in digital radiography using only the detector noise response. Methods: A cascaded linear-systems analysis was used to develop an exact relationship between the two-dimensional noise power spectrum (NPS) and the presampled MTF for a generalized detector system. This relationship was then utilized to determine the two-dimensional presampled MTF. For simplicity, aliasing of the correlated noise component of the NPS was assumed to be negligible. Accuracy of this method was investigated using simulated images from a simple detector model in which the “true” MTF was known exactly. Measurements were also performed on three detector technologies (an x-ray image intensifier, an indirect flat panel detector, and a solid state x-ray image intensifier), and the results were compared using the standard edge-response method. Flat-field and edge images were acquired and analyzed according to guidelines set forth by the International Electrotechnical Commission, using the RQA 5 spectrum. Results: The presampled MTF determined using the noise-response method for the simulated detector system was in close agreement with the true MTF, with an averaged percent difference of 0.3% and a maximum difference of 1.1% observed at the Nyquist frequency (fN). The edge-response method for the simulated detector system also showed very good agreement at lower spatial frequencies (<0.5 fN), with an averaged percent difference of 1.6%, but showed significant discrepancies at higher spatial frequencies (>0.5 fN), with an averaged percent difference of 17%. Discrepancies were in part a result of noise in the edge image and phasing errors. For all three detector systems, the MTFs obtained using the two methods were found to be in good agreement at spatial frequencies <0.5 fN, with an averaged percent difference of 3.4%. Above 0.5 fN, differences increased to
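    The key relationship can be sketched in one dimension: for a quantum-noise-limited detector with negligible noise aliasing, the correlated NPS is proportional to the squared MTF, so the presampled MTF can be recovered from flat-field noise alone. The Gaussian blur model, sizes, and random seed below are illustrative assumptions, not the paper's detector model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 256, 400
sigma = 2.0                                   # illustrative detector blur (pixels)
f = np.fft.rfftfreq(n)                        # cycles/pixel, 0 .. 0.5
H = np.exp(-2.0 * (np.pi * sigma * f) ** 2)   # "true" presampled MTF (Gaussian)

# Accumulate the noise power spectrum of many blurred flat-field realizations.
nps = np.zeros_like(f)
for _ in range(reps):
    white = rng.standard_normal(n)            # uncorrelated quantum noise
    blurred = np.fft.irfft(np.fft.rfft(white) * H, n)
    nps += np.abs(np.fft.rfft(blurred)) ** 2
nps /= reps

# For unit-variance white input noise, E[NPS(f)] = n * MTF(f)^2,
# so the MTF follows directly from the noise response.
mtf_est = np.sqrt(nps / n)
```

With enough flat-field realizations `mtf_est` converges to `H` across the full frequency axis, with no edge phantom or phasing step involved.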

  1. Describing Willow Flycatcher habitats: scale perspectives and gender differences

    USGS Publications Warehouse

    Sedgwick, James A.; Knopf, Fritz L.

    1992-01-01

    We compared habitat characteristics of nest sites (female-selected sites) and song perch sites (male-selected sites) with those of sites unused by Willow Flycatchers (Empidonax traillii) at three different scales of vegetation measurement: (1) microplot (central willow [Salix spp.] bush and four adjacent bushes); (2) mesoplot (0.07 ha); and, (3) macroplot (flycatcher territory size). Willow Flycatchers exhibited vegetation preferences at all three scales. Nest sites were distinguished by high willow density and low variability in willow patch size and bush height. Song perch sites were characterized by large central shrubs, low central shrub vigor, and high variability in shrub size. Unused sites were characterized by greater distances between willows and willow patches, less willow coverage, and a smaller riparian zone width than either nest or song perch sites. At all scales, nest sites were situated farther from unused sites in multivariate habitat space than were song perch sites, suggesting (1) a correspondence among scales in their ability to describe Willow Flycatcher habitat, and (2) females are more discriminating in habitat selection than males. Microhabitat differences between male-selected (song perch) and female-selected (nest) sites were evident at the two smaller scales; at the finest scale, the segregation in habitat space between male-selected and female-selected sites was greater than that between male-selected and unused sites. Differences between song perch and nest sites were not apparent at the scale of flycatcher territory size, possibly due to inclusion of (1) both nest and song perch sites, (2) defended, but unused habitat, and/or (3) habitat outside of the territory, in larger scale analyses. The differences between nest and song perch sites at the finer scales reflect their different functions (e.g., nest concealment and microclimatic requirements vs. advertising and territorial defense, respectively), and suggest that the exclusive use

  2. Accurate vessel segmentation with constrained B-snake.

    PubMed

    Yuanzhi Cheng; Xin Hu; Ji Wang; Yadong Wang; Tamura, Shinichi

    2015-08-01

    We describe an active contour framework with accurate shape and size constraints on the vessel cross-sectional planes to produce the vessel segmentation. It starts with a multiscale vessel axis tracing in 3D computed tomography (CT) data, followed by vessel boundary delineation on the cross-sectional planes derived from the extracted axis. The vessel boundary surface is deformed under constrained movements on the cross sections and is voxelized to produce the final vascular segmentation. The novelty of this paper lies in the accurate contour point detection of thin vessels based on the CT scanning model, in the efficient handling of missing contour points in problematic regions, and in the active contour model with accurate shape and size constraints. The main advantage of our framework is that it avoids disconnected and incomplete segmentation of the vessels in problematic regions that contain touching vessels (vessels in close proximity to each other), diseased portions (pathologic structures attached to a vessel), and thin vessels. It is particularly suitable for accurate segmentation of thin and low-contrast vessels. Our method is evaluated and demonstrated on CT data sets from our partner site, and its results are compared with three related methods. Our method is also tested on two publicly available databases and its results are compared with a recently published method. The applicability of the proposed method to challenging clinical problems, namely the segmentation of vessels in problematic regions, is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate vessel boundaries with a level of variability similar to that obtained manually.

  3. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than measurements using industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperatures is possible thanks to the thermometer's low inertia and the fast space marching method used to solve the inverse heat conduction problem.
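    The first-order inertia correction mentioned above can be sketched directly: modeling the thermometer as tau * dT/dt + T = T_fluid, the fluid temperature is recovered from the indicated temperature as T_fluid = T + tau * dT/dt. The time constant and step scenario below are illustrative, not values from the paper.

```python
import math

def correct_first_order(T_ind, dt, tau):
    """Recover fluid temperature from an indicated-temperature series using
    the first-order inertia model  T_fluid = T_ind + tau * dT_ind/dt."""
    corrected = []
    for i in range(1, len(T_ind) - 1):
        dTdt = (T_ind[i + 1] - T_ind[i - 1]) / (2.0 * dt)  # central difference
        corrected.append(T_ind[i] + tau * dTdt)
    return corrected

# Demo: a tau = 3 s thermometer plunged from 20 C air into 100 C water
# indicates T_ind(t) = 100 - 80*exp(-t/tau), far below 100 C at first.
tau, dt = 3.0, 0.01
T_ind = [100.0 - 80.0 * math.exp(-i * dt / tau) for i in range(1000)]
T_fluid = correct_first_order(T_ind, dt, tau)  # ~100 C from the outset
```

The corrected series reaches the true fluid temperature immediately, while the raw indication would need several time constants to settle; in practice the derivative must also be smoothed against measurement noise.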

  4. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  5. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  6. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔHf(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal mol(-1)).
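    The bond-separation bookkeeping works as follows: for propane, the isodesmic reaction C3H8 + CH4 → 2 C2H6 preserves bond types, so systematic functional errors largely cancel in the computed reaction energy, and Hess's law then converts it into a heat of formation using experimental values for the small reference molecules. The reference values below are approximate experimental numbers and the reaction energy is an illustrative placeholder, not a result from the paper.

```python
# Approximate experimental heats of formation at 298 K (kcal/mol).
dHf_CH4 = -17.9
dHf_C2H6 = -20.0

# Hypothetical computed energy for the bond-separation reaction
#   C3H8 + CH4 -> 2 C2H6   (illustrative value only)
dH_rxn = 2.9

# Hess's law: dH_rxn = 2*dHf(C2H6) - dHf(C3H8) - dHf(CH4),
# solved for the target molecule's heat of formation.
dHf_C3H8 = 2.0 * dHf_C2H6 - dHf_CH4 - dH_rxn
```

With these inputs the scheme returns about -25 kcal/mol for propane; the accuracy of the result rests on the error cancellation in the isodesmic reaction rather than on the absolute accuracy of the functional.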

  8. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
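    The "linear fitting" step can be sketched as a log-log least-squares fit of the susceptibility against the distance from criticality, chi ≈ A (βc − β)^(−γ). The amplitude, sampling grid, and exact-power-law data below are illustrative, not the talk's numerical data.

```python
import math

beta_c, gamma_true, A = 1.0, 1.2991, 0.7
betas = [beta_c - 10.0 ** (-k / 4.0) for k in range(4, 24)]
chi = [A * (beta_c - b) ** (-gamma_true) for b in betas]

# Least-squares line through log(chi) vs log(beta_c - beta);
# for a pure power law the slope is exactly -gamma.
xs = [math.log(beta_c - b) for b in betas]
ys = [math.log(c) for c in chi]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
gamma_est = -slope
```

On real susceptibility data the fit is restricted to a window close to βc, and curvature of the log-log plot signals the subleading correction governed by Δ.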

  9. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented both in terms of absorbed dose and biological dose calculations, describing the various available features. PMID:27242956

  10. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented both in terms of absorbed dose and biological dose calculations, describing the various available features. PMID:27242956

  11. Interculture: Some Concepts for Describing the Situation of Immigrants.

    ERIC Educational Resources Information Center

    Ekstrand, Lars Henric; And Others

    1981-01-01

    Attempts to find new ways of describing and analyzing dynamic interactions in country of origin, host country, and immigrant community caused by migration. Analyzes linguistic models, concept of culture, emigration psychology, and identity formation. (Author/BK)

  12. Zulma Ageitos de Castellanos: Publications and status of described taxa.

    PubMed

    Signorelli, Javier H; Urteaga, Diego; Teso, Valeria

    2015-10-28

    Zulma Ageitos de Castellanos was an Argentinian malacologist working in the "Facultad de Ciencias Naturales y Museo" at La Plata University where she taught invertebrate zoology between 1947 and 1990. Her scientific publications are listed in chronological order. Described genus-group and species-group taxa are listed. Information about the type locality and type material, and taxonomic remarks are also provided. Finally, type material of all described taxa was requested and, when located, illustrated.

  13. Describing behavior with ratios of count and time

    PubMed Central

    Johnston, J. M.; Hodge, Clyde W.

    1989-01-01

    Describing behavior with ratios of count and time is a popular measurement tactic in the field of behavior analysis. The paper examines some count and time ratios in order to determine what about behavior each describes and why one ratio may sometimes be more useful than another. In addition, the paper briefly considers some terminological issues, derived quantities, dimensional analysis, some advantages and disadvantages of ratios, and selection of useful quantities for measurement. PMID:22478031

  14. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  15. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  16. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  17. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  18. Enhanced ocean observational capability

    SciTech Connect

    Volpe, A M; Esser, B K

    2000-01-10

    Coastal oceans are vital to world health and sustenance. Technology that enables new observations has always been the driver of discovery in ocean sciences. In this context, we describe the first at sea deployment and operation of an inductively coupled plasma mass spectrometer (ICPMS) for continuous measurement of trace elements in seawater. The purpose of these experiments was to demonstrate that an ICPMS could be operated in a corrosive and high vibration environment with no degradation in performance. Significant advances occurred this past year due to ship time provided by Scripps Institution of Oceanography (UCSD), as well as that funded through this project. Evaluation at sea involved performance testing and characterization of several real-time seawater analysis modes. We show that mass spectrometers can rapidly, precisely and accurately determine ultratrace metal concentrations in seawater, thus allowing high-resolution mapping of large areas of surface seawater. This analytical capability represents a significant advance toward real-time observation and understanding of water mass chemistry in dynamic coastal environments. In addition, a joint LLNL-SIO workshop was convened to define and design new technologies for ocean observation. Finally, collaborative efforts were initiated with atmospheric scientists at LLNL to identify realistic coastal ocean and river simulation models to support real-time analysis and modeling of hazardous material releases in coastal waterways.

  19. A universal and accurate replica technique for scanning electron microscope study in clinical dentistry.

    PubMed

    Lambrechts, P; Vanherle, G; Davidson, C

    1981-09-01

    One of the main concerns of dental research is the observation of the oral tissues and the materials applied to the dentition. The changes in composition and structure of the outer surfaces and the materials deposited on these surfaces are of special interest. In the literature, a variety of replica techniques for these purposes is described (Grundy in 1971 [12]; Saxton in 1973 [25]). The use of these techniques is limited because of artifacts in the samples and a restricted resolution power resulting from useful magnifications on the order of 800x. An accurate and universal replica technique for the examination of specimens to be viewed under the SEM has been developed. The first impression is made with a light-body silicone elastomer (President Coltene). The positive replica is made by electrodeposition of copper in an electroplating bath (Acru plat 5 electronic, Dr. Th. Wieland, D-7530 Pforzheim). The reliability and accuracy of this replica technique were verified by a scanning electron microscopic comparison of the replicas and the actual structures of etched enamel. To illustrate the applicability of the replica technique to structures of much lower hardness, high-resolution images of dental plaque were also produced. The copper surface offers a perfect, original, and properly electroconductive medium that withstands the bombardment of electrons and the relatively severe conditions in the scanning electron microscope. Reproducibility was accurate as judged by the duplication in position, size, and shape of fine detail at magnifications of 7500x, offering a resolution of 25 nm.

  20. The Calculation of Accurate Harmonic Frequencies of Large Molecules: The Polycyclic Aromatic Hydrocarbons, a Case Study

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  1. Accurate calculation of the K photoionization around the minimum near threshold

    NASA Astrophysics Data System (ADS)

    Theodosiou, Constantine

    2003-05-01

    The accurate prediction of the location of Cooper minima in the photoionization cross sections of alkali metal atoms has been used in the past as a refined test for theoretical calculations. The older measurements of Hudson and Carter (JOSA 57, 1471 (1967)) for potassium were drastically improved near the minimum by Sandner et al. (Phys. Rev. A 23, 2732 (1981)). The latter work found good overall agreement with the most accurate calculations, but the observed minimum had an overall shift and was clearly narrower than the calculated one. We have revisited the theoretical treatment within the Coulomb approximation with a central potential core approach (CACP) (Phys. Rev. A 30, 2881 (1984)), treating the relativistic effects carefully. We find excellent agreement with the measurements of Sandner et al. Our study indicates that the improvement stems from the separate treatment of the ɛp_3/2 and ɛp_1/2 partial photoionization cross sections, in addition to the inclusion of a realistic central potential to describe the ion core.

  2. Accurate Transposable Element Annotation Is Vital When Analyzing New Genome Assemblies

    PubMed Central

    Platt, Roy N.; Blanco-Berdugo, Laura; Ray, David A.

    2016-01-01

    Transposable elements (TEs) are mobile genetic elements with the ability to replicate themselves throughout the host genome. In some taxa TEs reach copy numbers in the hundreds of thousands and can occupy more than half of the genome. The increasing number of reference genomes from nonmodel species has begun to outpace efforts to identify and annotate TE content, and the methods used vary significantly between projects. Here, we demonstrate the variation that arises in TE annotations when less than optimal methods are used. We found that across a variety of taxa, the ability to accurately identify TEs based solely on homology decreased as the phylogenetic distance between the queried genome and a reference increased. Next, we annotated repeats using homology alone, as is often the case in new genome analyses, and using a combination of homology and de novo methods as well as an additional manual curation step. Reannotation using these methods identified a substantial number of new TE subfamilies in previously characterized genomes, recognized a higher proportion of the genome as repetitive, and decreased the average genetic distance within TE families, implying recent TE accumulation. Finally, these findings (increased recognition of younger TEs) were confirmed via an analysis of the postman butterfly (Heliconius melpomene). These observations imply that complete TE annotation relies on a combination of homology- and de novo-based repeat identification, manual curation, and classification, and that relying on simple, homology-based methods is insufficient to accurately describe the TE landscape of a newly sequenced genome. PMID:26802115

  3. Arab observations

    NASA Astrophysics Data System (ADS)

    Fatoohi, L. J.

    There are two main medieval Arab sources of astronomical observations: chronicles and astronomical treatises. Medieval Arabs produced numerous chronicles, many of which reported astronomical events that the chroniclers themselves observed or that were witnessed by others. Astronomical phenomena recorded by chroniclers include solar and lunar eclipses, cometary apparitions, meteors, and meteor showers. Muslim astronomers produced many astronomical treatises known as zijes. Zijes include records of mainly predictable phenomena, such as eclipses of the Sun and Moon. Unlike chronicles, zijes usually ignore irregular phenomena such as the apparitions of comets and meteors, and meteor showers. Some zijes include astronomical observations, especially of eclipses. Not unexpectedly, records in zijes are in general more accurate than their counterparts in chronicles. However, research has shown that medieval Arab chronicles and zijes both contain valuable astronomical observational data. Unfortunately, much of the heritage of medieval Arab chroniclers and astronomers is still in manuscript form. Moreover, most of the huge number of Arabic manuscripts that exist in various libraries, especially in Arab countries, are still uncatalogued. To date there is only one catalogue of zijes, compiled in the 1950s, which includes brief comments on 200 zijes. There is a real need for a systematic investigation of the medieval Arab historical and astronomical manuscripts held in libraries all over the world.

  4. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
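    The coordinate processing the patent describes can be sketched for a single revolute joint: the encoder reading fixes the rotation angle, and the probe-arm length fixes the radius. The counts-per-revolution and arm-length values below are assumed for illustration.

```python
import math

# Convert an encoder reading on one revolute joint into cylindrical
# (r, theta, z) and Cartesian (x, y, z) coordinates of the probe tip.
# counts_per_rev and arm_length are hypothetical illustration values.

def probe_tip(encoder_counts, counts_per_rev=36000, arm_length=0.25, z=0.0):
    theta = 2.0 * math.pi * (encoder_counts % counts_per_rev) / counts_per_rev
    x = arm_length * math.cos(theta)
    y = arm_length * math.sin(theta)
    return (arm_length, theta, z), (x, y, z)

cyl, cart = probe_tip(9000)  # a quarter turn of the encoder wheel
```

A real articulated machine chains several such joint transforms; this shows only the single-joint conversion from encoder output to coordinates.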

  5. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  6. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
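    One abundance statistic that does estimate a meaningful community parameter is genome-length-normalized relative abundance, which approximates relative taxon (cell) abundance rather than raw read share. A minimal sketch with invented read counts and genome lengths:

```python
# Length-normalized relative abundance: mapped read counts divided by
# genome length, then normalized to sum to one. Counts and lengths
# below are invented illustration values.

def relative_abundance(read_counts, genome_lengths):
    coverage = {t: read_counts[t] / genome_lengths[t] for t in read_counts}
    total = sum(coverage.values())
    return {t: c / total for t, c in coverage.items()}

abundance = relative_abundance({"taxon_a": 100, "taxon_b": 100},
                               {"taxon_a": 1_000_000, "taxon_b": 2_000_000})
```

Here equal read counts do not imply equal abundance: the taxon with the smaller genome receives the larger share, because it takes fewer reads to cover it.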

  7. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  8. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  9. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
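    A crude way to quantify metacognition from such trial data is the accuracy gap between high- and low-confidence trials; the study itself uses a more principled type-2 measure, so this sketch (with invented data) only illustrates the idea.

```python
# Accuracy on high-confidence trials minus accuracy on low-confidence
# trials: a rough metacognitive-sensitivity index. Data are invented;
# the study's actual measure is a formal type-2 statistic.

def metacognition_gap(correct, confidence, threshold=3):
    hi = [c for c, f in zip(correct, confidence) if f >= threshold]
    lo = [c for c, f in zip(correct, confidence) if f < threshold]
    accuracy = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return accuracy(hi) - accuracy(lo)

gap = metacognition_gap(correct=[1, 1, 1, 0, 0, 1],
                        confidence=[4, 4, 3, 1, 2, 1])
```

A positive gap means confidence tracks correctness, i.e., the observer has some knowledge of the quality of their own decisions.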

  10. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  11. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  12. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  13. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  14. Accurate ab initio Quartic Force Fields of Cyclic and Bent HC2N Isomers

    NASA Technical Reports Server (NTRS)

    Inostroza, Natalia; Huang, Xinchuan; Lee, Timothy J.

    2012-01-01

    Highly correlated ab initio quartic force fields (QFFs) are used to calculate the equilibrium structures and predict the spectroscopic parameters of three HC2N isomers. Specifically, the ground state quasilinear triplet and the lowest cyclic and bent singlet isomers are included in the present study. Extensive treatment of correlation effects was included using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, denoted CCSD(T). Dunning's correlation-consistent basis sets cc-pVXZ, X=3,4,5, were used, and a three-point formula for extrapolation to the one-particle basis set limit was used. Core-correlation and scalar relativistic corrections were also included to yield highly accurate QFFs. The QFFs were used together with second-order perturbation theory (with proper treatment of Fermi resonances) and variational methods to solve the nuclear Schrödinger equation. The quasilinear nature of the triplet isomer is problematic, and it is concluded that a QFF is not adequate to describe properly all of the fundamental vibrational frequencies and spectroscopic constants (though some constants not dependent on the bending motion are well reproduced by perturbation theory). On the other hand, this procedure (a QFF together with either perturbation theory or variational methods) leads to highly accurate fundamental vibrational frequencies and spectroscopic constants for the cyclic and bent singlet isomers of HC2N. All three isomers possess significant dipole moments, 3.05 D, 3.06 D, and 1.71 D, for the quasilinear triplet, the cyclic singlet, and the bent singlet isomers, respectively. It is concluded that the spectroscopic constants determined for the cyclic and bent singlet isomers are the most accurate available, and it is hoped that these will be useful in the interpretation of high-resolution astronomical observations or laboratory experiments.

  15. Accurate and efficient loop selections by the DFIRE-based all-atom statistical potential.

    PubMed

    Zhang, Chi; Liu, Song; Zhou, Yaoqi

    2004-02-01

    The conformations of loops are determined by the water-mediated interactions between amino acid residues. Energy functions that describe the interactions can be derived either from physical principles (physical-based energy functions) or from statistical analysis of known protein structures (knowledge-based statistical potentials). It is commonly believed that statistical potentials are appropriate for coarse-grained representation of proteins but are not as accurate as physical-based potentials when atomic resolution is required. Several recent applications of physical-based energy functions to loop selections appear to support this view. In this article, we apply a recently developed DFIRE-based statistical potential to three different loop decoy sets (RAPPER, Jacobson, and Forrest-Woolf sets). Together with a rotamer library for side-chain optimization, the performance of the DFIRE-based potential in the RAPPER decoy set (385 loop targets) is comparable to that of AMBER/GBSA for short loops (two to eight residues). The DFIRE is more accurate for longer loops (9 to 12 residues). A similar trend is observed when comparing DFIRE with another physical-based energy function, OPLS/SGB-NP, in the large Jacobson decoy set (788 loop targets). In the Forrest-Woolf decoy set for the loops of membrane proteins, the DFIRE potential performs substantially better than the combination of the CHARMM force field with several solvation models. The results suggest that a single-term DFIRE statistical energy function can provide accurate loop prediction at a fraction of the computing cost required for more complicated physical-based energy functions. A Web server for academic users is established for loop selection at the softwares/services section of the Web site http://theory.med.buffalo.edu/.
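    Operationally, loop selection reduces to scoring every candidate conformation with the energy function and keeping the minimum. A minimal sketch, with invented decoy energies standing in for actual DFIRE scores:

```python
# Rank loop decoys by energy and keep the lowest-scoring one.
# The energies below are invented placeholders for DFIRE scores.

def select_loop(decoy_energies):
    """Return the name of the minimum-energy decoy."""
    return min(decoy_energies, key=decoy_energies.get)

energies = {"decoy_1": -120.3, "decoy_2": -150.8, "near_native": -151.2}
best = select_loop(energies)
```

A good potential is one for which the minimum-energy decoy is consistently the one closest to the native loop conformation.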

  16. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy, and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric, and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.
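    The sensitivity described above can be illustrated with the Beer-Lambert law: transmission depends exponentially on optical depth, so a modest error in the assumed atmospheric state produces a large transmission error. The optical-depth values below are illustrative only, not outputs of MODTRAN®.

```python
import math

# Transmission error caused by assuming a default optical depth
# instead of the measured one. tau values are illustrative only.

def transmission(tau):
    """Beer-Lambert transmission for optical depth tau."""
    return math.exp(-tau)

def percent_error(tau_true, tau_assumed):
    t_true = transmission(tau_true)
    return 100.0 * abs(transmission(tau_assumed) - t_true) / t_true

err = percent_error(tau_true=2.0, tau_assumed=1.0)
```

Assuming half the true optical depth here overestimates transmission by well over 100 percent, which is why locally measured conditions matter.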

  17. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  18. Using Neural Networks to Describe Complex Phase Transformation Behavior

    SciTech Connect

    Vitek, J.M.; David, S.A.

    1999-05-24

    Final microstructures can often be the end result of a complex sequence of phase transformations. Fundamental analyses may be used to model various stages of the overall behavior, but they are often impractical or cumbersome when considering multicomponent systems covering a wide range of compositions. Neural network analysis may be a useful alternative method of identifying and describing phase transformation behavior. A neural network model for ferrite prediction in stainless steel welds is described. It is shown that the neural network analysis provides valuable information that accounts for alloying element interactions. It is suggested that neural network analysis may be extremely useful for analysis when more fundamental approaches are unavailable or overly burdensome.
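    The input-output structure of such a model can be sketched as a tiny feed-forward pass mapping composition variables to a ferrite number. The architecture, weights, and inputs below are arbitrary placeholders, not the trained model from the paper.

```python
import math

# Forward pass of a toy two-layer network: composition in, ferrite
# number out. Weights, biases, and inputs are arbitrary placeholders.

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# Hypothetical inputs standing in for chromium and nickel equivalents.
ferrite_number = mlp_forward(x=[20.0, 12.0],
                             w_hidden=[[0.3, -0.2], [-0.1, 0.4]],
                             b_hidden=[0.0, 0.0],
                             w_out=[4.0, 2.0],
                             b_out=5.0)
```

The appeal of this form is that the hidden-layer nonlinearity can capture alloying-element interactions that a linear constitution diagram cannot.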

  19. Describing baseball pitch movement with right-hand rules.

    PubMed

    Bahill, A Terry; Baldwin, David G

    2007-07-01

    The right-hand rules show the direction of the spin-induced deflection of baseball pitches; thus, they explain the movement of the fastball, curveball, slider, and screwball. The direction of deflection is described by a pair of right-hand rules commonly used in science and engineering. Our new model for the magnitude of the lateral spin-induced deflection of the ball considers the orientation of the axis of rotation of the ball relative to the direction in which the ball is moving. This paper also describes how models based on somatic metaphors might provide variability in a pitcher's repertoire.
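    What the right-hand rule encodes is the cross product of spin and velocity: the Magnus deflection acts along ω × v. A minimal sketch with assumed axes (x toward home plate, y to the pitcher's left, z up) and invented spin and velocity magnitudes:

```python
# Spin-induced (Magnus) deflection direction as omega x v.
# Axes assumed: x toward home plate, y to the pitcher's left, z up.
# Spin and velocity magnitudes are invented illustration values.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

velocity = (40.0, 0.0, 0.0)    # m/s, toward the plate
backspin = (0.0, -200.0, 0.0)  # rad/s, pure backspin about -y
deflection_dir = cross(backspin, velocity)
```

For this backspin the result points in +z, i.e., the lift that makes a four-seam fastball drop less than gravity alone would dictate.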

  20. Motivating operations and terms to describe them: some further refinements.

    PubMed Central

    Laraway, Sean; Snycerski, Susan; Michael, Jack; Poling, Alan

    2003-01-01

    Over the past decade, behavior analysts have increasingly used the term establishing operation (EO) to refer to environmental events that influence the behavioral effects of operant consequences. Nonetheless, some elements of current terminology regarding EOs may interfere with applied behavior analysts' efforts to predict, control, describe, and understand behavior. The present paper (a) describes how the current conceptualization of the EO is in need of revision, (b) suggests alternative terms, including the generic term motivating operation (MO), and (c) provides examples of MOs and their behavioral effects using articles from the applied behavior analysis literature. PMID:14596584

  1. Recursive analytical solution describing artificial satellite motion perturbed by an arbitrary number of zonal terms

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An analytical first order solution has been developed which describes the motion of an artificial satellite perturbed by an arbitrary number of zonal harmonics of the geopotential. A set of recursive relations for the solution, deduced from the recursive relations of the geopotential, was derived. The method of solution is based on von Zeipel's technique applied to a canonical set of two-body elements in the extended phase space, which incorporates the true anomaly as a canonical element. The elements are of Poincare type, that is, they are regular for vanishing eccentricities and inclinations. Numerical results show that this solution is accurate to within a few meters after 500 revolutions.
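    The recursive structure of the zonal terms ultimately rests on the Bonnet recursion for Legendre polynomials, (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x), with x = sin(latitude). A minimal sketch of that recursion alone (the solution's actual recursive relations also involve the orbital elements):

```python
# Evaluate Legendre polynomials P_0..P_n_max at x via the Bonnet
# recursion, the building block of the zonal geopotential terms.

def legendre(n_max, x):
    """Return [P_0(x), ..., P_{n_max}(x)]."""
    p = [1.0, x]
    for n in range(1, n_max):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[:n_max + 1]
```

Evaluating the whole family in one sweep like this is what makes handling an arbitrary number of zonal harmonics cheap.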

  2. Accurate assessment and identification of naturally occurring cellular cobalamins

    PubMed Central

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V.; Moreira, Edward S.; Brasch, Nicola E.; Jacobsen, Donald W.

    2009-01-01

Background Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo β-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Methods Experiments were designed to: 1) assess β-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) to establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable β-axial ligands. Results The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., “cold trapping”), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. Conclusions This demonstrates that β-ligand exchange occurs with non-covalently bound β-ligands. The exception to this observation is cyanocobalamin with a non-covalent but non-exchangeable –CN group. It is now possible to obtain accurate profiles of cellular cobalamins. PMID:18973458

  3. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies, however, provide accurate information to travelers, and our simulation results show that accurate information can have negative effects, especially when it is delayed. Because travelers prefer the route reported to be in the best condition, and delayed information reflects past rather than current traffic conditions, travelers make poor routing decisions; capacity decreases, oscillations grow, and the system deviates from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful in improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
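The boundedly rational threshold described above lends itself to a toy simulation. The sketch below is our own illustration, not the paper's model: the linear congestion function, the delay length, and all parameter values are assumptions. It shows how a threshold BR damps the flow oscillations that delayed feedback would otherwise cause.

```python
import random

def simulate(steps=200, n=100, br=0.0, delay=3, seed=1):
    """Toy two-route system with delayed travel-time feedback.

    Each step, every traveler picks the route whose *reported*
    (delayed) travel time is lower; if the reported difference is
    below the bounded-rationality threshold `br`, the choice is
    random. Returns the per-step flow on route A.
    """
    rng = random.Random(seed)
    history = [(50.0, 50.0)] * delay          # reported (timeA, timeB)
    flows = []
    for _ in range(steps):
        t_a, t_b = history[0]                 # oldest entry = delayed info
        count_a = 0
        for _ in range(n):
            if abs(t_a - t_b) < br:
                count_a += rng.random() < 0.5  # indifferent: coin flip
            else:
                count_a += t_a < t_b           # pick the reported-faster route
        flows.append(count_a)
        # toy congestion model: travel time grows linearly with load
        history = history[1:] + [(10.0 + count_a, 10.0 + (n - count_a))]
    return flows

def oscillation(flows):
    """Mean absolute step-to-step change in flow on route A."""
    return sum(abs(a - b) for a, b in zip(flows, flows[1:])) / (len(flows) - 1)
```

With `br=0.0` (accurate but delayed information) the whole population flips between routes and the flow oscillates strongly; with a sizable threshold (e.g. `br=30.0`) the flows stay near the even split and the oscillation measure drops, in line with the abstract's claim.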

  4. 25. VIEW LOOKING EAST THROUGH 'TUNNEL' DESCRIBED ABOVE. RAILCAR LOADING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. VIEW LOOKING EAST THROUGH 'TUNNEL' DESCRIBED ABOVE. RAILCAR LOADING TUBES AT TOP FOREGROUND, SPERRY CORN ELEVATOR COMPLEX AT RIGHT AND ADJOINING WAREHOUSE AT LEFT - Sperry Corn Elevator Complex, Weber Avenue (North side), West of Edison Street, Stockton, San Joaquin County, CA

  5. Describing Acupuncture: A New Challenge for Technical Communicators.

    ERIC Educational Resources Information Center

    Karanikas, Marianthe

    1997-01-01

    Considers acupuncture as an increasingly popular alternative medical therapy, but difficult to describe in technical communication. Notes that traditional Chinese medical explanations of acupuncture are unscientific, and that scientific explanations of acupuncture are inconclusive. Finds that technical communicators must translate acupuncture for…

  6. 23. FISH CONVEYOR Conveyor described in Photo No. 21. A ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    23. FISH CONVEYOR Conveyor described in Photo No. 21. A portion of a second conveyor is seen on the left. Vertical post knocked askew and cracked cement base of the conveyor, attest to the condition of the building. - Hovden Cannery, 886 Cannery Row, Monterey, Monterey County, CA

  7. Superintendents Describe Their Leadership Styles: Implications for Practice

    ERIC Educational Resources Information Center

    Bird, James J.; Wang, Chuang

    2013-01-01

    Superintendents from eight southeastern United States school districts self-described their leadership styles across the choices of autocratic, laissez-faire, democratic, situational, servant, or transformational. When faced with this array of choices, the superintendents chose with arguable equitableness, indicating that successful leaders can…

  8. Comparing Theoretical Perspectives in Describing Mathematics Departments: Complexity and Activity

    ERIC Educational Resources Information Center

    Beswick, Kim; Watson, Anne; De Geest, Els

    2010-01-01

    We draw on two studies of mathematics departments in 11-18 comprehensive maintained schools in England to compare and contrast the insights provided by differing theoretical perspectives. In one study, activity theory was used to describe common features of the work of three departments. In the other, a mathematics department was viewed and…

  9. Describing an "Effective" Principal: Perceptions of the Central Office Leaders

    ERIC Educational Resources Information Center

    Parylo, Oksana; Zepeda, Sally J.

    2014-01-01

    The purpose of this qualitative study was to examine how district leaders of two school systems in the USA describe an effective principal. Membership categorisation analysis revealed that district leaders believed an effective principal had four major categories of characteristics: (1) documented characteristics (having a track record and being a…

  10. Describing NAEP Achievement Levels with Multiple Domain Scores.

    ERIC Educational Resources Information Center

    Schulz, E. Matthew; Lee, Won-Chan

    This study was conducted to demonstrate the potential for using multiple domains to describe achievement levels in the National Assessment of Educational Progress (NAEP) mathematics test. Mathematics items from the NAEP grade 8 assessment for the year 2000 were used. Curriculum experts provided ratings of when the skills required to answer the…

  11. Judgments about Forces in Described Interactions between Objects

    ERIC Educational Resources Information Center

    White, Peter A.

    2011-01-01

    In 4 experiments, participants made judgments about forces exerted and resistances put up by objects involved in described interactions. Two competing hypotheses were tested: (1) that judgments are derived from the same knowledge base that is thought to be the source of perceptual impressions of forces that occur with visual stimuli, and (2) that…

  12. Describing Soils: Calibration Tool for Teaching Soil Rupture Resistance

    ERIC Educational Resources Information Center

    Seybold, C. A.; Harms, D. S.; Grossman, R. B.

    2009-01-01

    Rupture resistance is a measure of the strength of a soil to withstand an applied stress or resist deformation. In soil survey, during routine soil descriptions, rupture resistance is described for each horizon or layer in the soil profile. The lower portion of the rupture resistance classes are assigned based on rupture between thumb and…

  13. Learning Communities and Community Development: Describing the Process.

    ERIC Educational Resources Information Center

    Moore, Allen B.; Brooks, Rusty

    2000-01-01

    Describes features of learning communities: they transform themselves, share wisdom and recognition, bring others in, and share results. Provides the case example of the Upper Savannah River Economic Coalition. Discusses actions of learning communities, barriers to their development, and future potential. (SK)

  14. Method for describing fractures in subterranean earth formations

    DOEpatents

    Shuck, Lowell Z.

    1977-01-01

    The configuration and directional orientation of natural or induced fractures in subterranean earth formations are described by introducing a liquid explosive into the fracture, detonating the explosive, and then monitoring the resulting acoustic emissions with strategically placed acoustic sensors as the explosion propagates through the fracture at a known rate.

  15. College Students' Judgment of Others Based on Described Eating Pattern

    ERIC Educational Resources Information Center

    Pearson, Rebecca; Young, Michael

    2008-01-01

    Background: The literature available on attitudes toward eating patterns and people choosing various foods suggests the possible importance of "moral" judgments and desirable personality characteristics associated with the described eating patterns. Purpose: This study was designed to replicate and extend a 1993 study of college students'…

  16. An Evolving Framework for Describing Student Engagement in Classroom Activities

    ERIC Educational Resources Information Center

    Azevedo, Flavio S.; diSessa, Andrea A.; Sherin, Bruce L.

    2012-01-01

    Student engagement in classroom activities is usually described as a function of factors such as human needs, affect, intention, motivation, interests, identity, and others. We take a different approach and develop a framework that models classroom engagement as a function of students' "conceptual competence" in the "specific content" (e.g., the…

  17. Describing Elementary Teachers' Operative Systems: A Case Study

    ERIC Educational Resources Information Center

    Dotger, Sharon; McQuitty, Vicki

    2014-01-01

    This case study introduces the notion of an operative system to describe elementary teachers' knowledge and practice. Drawing from complex systems theory, the operative system is defined as the network of knowledge and practices that constituted teachers' work within a lesson study cycle. Data were gathered throughout a lesson study…

  18. Describing temperament in an ungulate: a multidimensional approach.

    PubMed

    Graunke, Katharina L; Nürnberg, Gerd; Repsilber, Dirk; Puppe, Birger; Langbein, Jan

    2013-01-01

Studies on animal temperament have often described temperament using a one-dimensional scale, whereas recent theoretical frameworks suggest two or more dimensions, using terms like "valence" or "arousal" to describe these dimensions. Yet, the valence or assessment of a situation is highly individual. The aim of this study was to provide support for the multidimensional framework with experimental data originating from an economically important species (Bos taurus). We tested 361 calves at 90 days post natum (dpn) in a novel-object test. Using a principal component analysis (PCA), we condensed numerous behaviours into fewer variables to describe temperament and correlated these variables with simultaneously measured heart rate variability (HRV) data. The PCA resulted in two behavioural dimensions (principal components, PC): novel-object-related (PC 1) and exploration-activity-related (PC 2). These PCs explained 58% of the variability in our data. The animals were distributed evenly within the two behavioural dimensions independent of their sex. Calves with different scores in these PCs differed significantly in HRV, and thus in the autonomic nervous system's activity. Based on these combined behavioural and physiological data we described four distinct temperament types resulting from two behavioural dimensions: "neophobic/fearful--alert", "interested--stressed", "subdued/uninterested--calm", and "neophilic/outgoing--alert". Additionally, 38 calves were tested at 90 and 197 dpn. Using the same PCA-model, they correlated significantly in PC 1 and tended to correlate in PC 2 between the two test ages. Of these calves, 42% expressed a similar behaviour pattern in both dimensions and 47% in one. No differences in temperament scores were found between sexes or breeds. In conclusion, we described distinct temperament types in calves based on behavioural and physiological measures emphasising the benefits of a multidimensional approach.

  19. How to accurately detect autobiographical events.

    PubMed

    Sartori, Giuseppe; Agosta, Sara; Zogmaister, Cristina; Ferrara, Santo Davide; Castiello, Umberto

    2008-08-01

    We describe a new method, based on indirect measures of implicit autobiographical memory, that allows evaluation of which of two contrasting autobiographical events (e.g., crimes) is true for a given individual. Participants were requested to classify sentences describing possible autobiographical events by pressing one of two response keys. Responses were faster when sentences related to truly autobiographical events shared the same response key with other sentences reporting true events and slower when sentences related to truly autobiographical events shared the same response key with sentences reporting false events. This method has possible application in forensic settings and as a lie-detection technique.

  20. A Visual Metaphor Describing Neural Dynamics in Schizophrenia

    PubMed Central

    van Beveren, Nico J. M.; de Haan, Lieuwe

    2008-01-01

Background In many scientific disciplines the use of a metaphor as a heuristic aid is not uncommon. A well known example in somatic medicine is the ‘defense army metaphor’ used to characterize the immune system. In fact, probably a large part of the everyday work of doctors consists of ‘translating’ scientific and clinical information (i.e. causes of disease, percentage of success versus risk of side-effects) into information tailored to the needs and capacities of the individual patient. The ability to do so in an effective way is at least partly what makes a clinician a good communicator. Schizophrenia is a severe psychiatric disorder which affects approximately 1% of the population. Over the last two decades a large amount of molecular-biological, imaging and genetic data have been accumulated regarding the biological underpinnings of schizophrenia. However, it remains difficult to understand how the characteristic symptoms of schizophrenia such as hallucinations and delusions are related to disturbances on the molecular-biological level. In general, psychiatry seems to lack a conceptual framework with sufficient explanatory power to link the mental- and molecular-biological domains. Methodology/Principal Findings Here, we present an essay-like study in which we propose to use visualized concepts stemming from the theory on dynamical complex systems as a ‘visual metaphor’ to bridge the mental- and molecular-biological domains in schizophrenia. We first describe a computer model of neural information processing; we show how the information processing in this model can be visualized, using concepts from the theory on complex systems. We then describe two computer models which have been used to investigate the primary theory on schizophrenia, the neurodevelopmental model, and show how disturbed information processing in these two computer models can be presented in terms of the visual metaphor previously described. Finally, we describe the effects of

  1. How Accurate Can Enrollment Forecasting Be?

    ERIC Educational Resources Information Center

    Shaw, Robert C.

    1980-01-01

    After briefly describing several methods of projecting enrollments, cites research indicating that the cohort survival method is best used as a relatively short-range forecast where in-migration and out-migration ratios are expected to remain fairly stable or to change at the same rate as they have in the recent past. (Author/IRT)

  2. Whipple Observations

    NASA Astrophysics Data System (ADS)

    Trangsrud, A.

    2015-12-01

The solar system that we know today was shaped dramatically by events in its dynamic formative years. These events left their signatures at the distant frontier of the solar system, in the small planetesimal relics that populate the vast Oort Cloud, the Scattered Disk, and the Kuiper Belt. To peer into the history and evolution of our solar system, the Whipple mission will survey small bodies in the large volume that begins beyond the orbit of Neptune and extends out to thousands of AU. Whipple detects these objects when they occult distant stars. The distance and size of the occulting object is reconstructed from well-understood diffraction effects in the object's shadow. Whipple will observe tens of thousands of stars simultaneously with high observing efficiency, accumulating roughly a billion "star-hours" of observations over its mission life. Here we describe the Whipple observing strategy, including target selection and scheduling.

  3. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
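The core idea behind such direct solvers, truncating the state space and solving the dCME as a linear system, can be illustrated on a single birth-death process. The sketch below is a minimal illustration of that idea, not the multi-finite-buffer ACME algorithm itself; the rates and truncation bound are our own illustrative choices.

```python
import numpy as np

def birth_death_steady_state(birth, death, n_max):
    """Steady state of a truncated birth-death chemical master equation.

    States 0..n_max; birth(n) and death(n) give the propensities.
    The generator A satisfies dp/dt = A p, so the steady state is the
    eigenvector of A with eigenvalue (numerically) zero, normalized
    to sum to 1.
    """
    size = n_max + 1
    A = np.zeros((size, size))
    for n in range(size):
        if n < n_max:
            A[n + 1, n] += birth(n)   # transition n -> n+1
            A[n, n] -= birth(n)
        if n > 0:
            A[n - 1, n] += death(n)   # transition n -> n-1
            A[n, n] -= death(n)
    w, v = np.linalg.eig(A)
    p = np.abs(np.real(v[:, np.argmin(np.abs(w))]))
    return p / p.sum()

# simple synthesis/degradation: birth rate k = 5, death rate n per molecule;
# the exact steady state is Poisson with mean k = 5, so n_max = 40 makes the
# truncation error negligible
p = birth_death_steady_state(lambda n: 5.0, lambda n: 1.0 * n, n_max=40)
```

For multi-scale networks the state space explodes combinatorially, which is precisely the problem the buffer-based truncation and error bounds in the abstract address; the dense eigen-solve here only works for tiny systems.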

  4. A model describing vestibular detection of body sway motion.

    NASA Technical Reports Server (NTRS)

    Nashner, L. M.

    1971-01-01

    An experimental technique was developed which facilitated the formulation of a quantitative model describing vestibular detection of body sway motion in a postural response mode. All cues, except vestibular ones, which gave a subject an indication that he was beginning to sway, were eliminated using a specially designed two-degree-of-freedom platform; body sway was then induced and resulting compensatory responses at the ankle joints measured. Hybrid simulation compared the experimental results with models of the semicircular canals and utricular otolith receptors. Dynamic characteristics of the resulting canal model compared closely with characteristics of models which describe eye movement and subjective responses to body rotational motions. The average threshold level, in the postural response mode, however, was considerably lower. Analysis indicated that the otoliths probably play no role in the initial detection of body sway motion.

  5. Plasma Approach to Describing the Electric Dynamics of a Neuron

    SciTech Connect

    Berezin, A. A.

    2002-07-15

    The electric excitation of a neuron is interpreted as the formation of a nonlinear solitary ion acoustic wave of the charge density of sodium and hydrogen ions in an electrolytic intracellular fluid, which is treated as a dense plasma. It is shown that such a wave can be described by the coupled sine-Gordon and Korteweg-de Vries equations, having a solution in the form of a soliton whose internal vibrational structure is described by the Fermi-Pasta-Ulam spectrum. It is concluded that a nerve impulse can be interpreted as a low-frequency solitary wave of the charge density of sodium ions with a trapped high-frequency charge density wave of protons.

  6. A gene feature enumeration approach for describing HLA allele polymorphism.

    PubMed

    Mack, Steven J

    2015-12-01

    HLA genotyping via next generation sequencing (NGS) poses challenges for the use of HLA allele names to analyze and discuss sequence polymorphism. NGS will identify many new synonymous and non-coding HLA sequence variants. Allele names identify the types of nucleotide polymorphism that define an allele (non-synonymous, synonymous and non-coding changes), but do not describe how polymorphism is distributed among the individual features (the flanking untranslated regions, exons and introns) of a gene. Further, HLA alleles cannot be named in the absence of antigen-recognition domain (ARD) encoding exons. Here, a system for describing HLA polymorphism in terms of HLA gene features (GFs) is proposed. This system enumerates the unique nucleotide sequences for each GF in an HLA gene, and records these in a GF enumeration notation that allows both more granular dissection of allele-level HLA polymorphism and the discussion and analysis of GFs in the absence of ARD-encoding exon sequences.

  7. Psathyloma, a new genus in Hymenogastraceae described from New Zealand.

    PubMed

    Soop, Karl; Dima, Bálint; Szarkándi, János Gergő; Cooper, Jerry; Papp, Tamás; Vágvölgyi, Csaba; Nagy, László G

    2016-01-01

A new genus Psathyloma is described based on collections of agarics from New Zealand. We describe two new species in the genus, Ps. leucocarpum and Ps. catervatim, both of which have been known and tentatively named for a long time awaiting a formal description. Morphological traits and phylogenetic analyses reveal that Psathyloma forms a strongly supported sister clade to Hebeloma, Naucoria and Hymenogaster. Morphologically, Psathyloma resembles Hebeloma, from which it differs mainly by producing smooth basidiospores with a germ pore. The geographical range of the genus has been demonstrated to include several regions in the southern hemisphere. A survey of published environmental sequences reveals that Psathyloma spp. were isolated from ectomycorrhizal root tips from Tasmania and Argentina, indicating an ectomycorrhizal association with southern beech.

  8. Curie law for systems described by kappa distributions

    NASA Astrophysics Data System (ADS)

    Livadiotis, George

    2016-01-01

We derive the magnetization of a system, Pierre Curie's law, for paramagnetic particles out of thermal equilibrium described by kappa distributions. The analysis uses the theory and formulation of the kappa distributions that describe particle systems with a non-zero potential energy. Among other results, emphasis is placed on the effect of kappa distribution on the phenomenon of having strong magnetization at high temperatures. At thermal equilibrium, high temperature leads to weak magnetization. Out of thermal equilibrium, however, strong magnetization at high temperatures is rather possible, if the paramagnetic particle systems reside far from thermal equilibrium, i.e., at small values of kappa. The application of the theory to the space plasma at the outer boundaries of our heliosphere, the inner heliosheath, leads to an estimation of the ion magnetic moment for this space plasma, that is, μ ≈ 138 ± 7 eV/nT.

  9. Describing sport grounds: an investigation of 'functional' and 'acquaintance' familiarity.

    PubMed

    Peron, E M; Baroni, M R; Falchero, S

    1991-10-01

    The present research was designed to investigate the concept of familiarity and how different kinds of familiarity could affect the coding and memory of places having specific and strong functional significance, i.e., sport courts. Tennis and basketball were selected. Users and nonusers of such sport courts had first to describe a sport court taking the necessary information from their stored schematic knowledge and then to describe a sport court previously seen in a photograph. Subjects' verbal reports showed a certain superiority of users' performance, a commonly found place effect, and the presence of errors only on the second task and mainly by the users group. The results are discussed in terms of the environmental schemata theory and of the different kinds of familiarity considered. PMID:1766791

  10. A geostatistical approach for describing spatial pattern in stream networks

    USGS Publications Warehouse

    Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.

    2005-01-01

    The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.
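Once a distance matrix is in hand (whether Euclidean or derived from network pathways), the empirical semivariogram estimator itself is straightforward. The sketch below is our own illustration, with our own function and variable names; for brevity it demonstrates the estimator on a plain 1-D transect rather than a true stream-network metric.

```python
import numpy as np

def empirical_variogram(dist, values, bin_edges):
    """Empirical semivariogram from an arbitrary distance matrix.

    `dist[i, j]` may be a network (non-Euclidean) distance. The
    estimator is gamma(h) = mean of 0.5 * (z_i - z_j)^2 over all
    pairs whose separation falls in each distance bin.
    """
    n = len(values)
    iu, ju = np.triu_indices(n, k=1)          # all unordered pairs
    d = dist[iu, ju]
    sq = 0.5 * (values[iu] - values[ju]) ** 2
    gammas = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gammas.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)

# illustrative transect: positions 0..9 with a linear trend in the values,
# so the semivariance grows with lag distance
x = np.arange(10.0)
dist = np.abs(x[:, None] - x[None, :])
g = empirical_variogram(dist, x, np.array([0.5, 1.5, 2.5, 3.5]))
```

Substituting a matrix of along-network hydrologic distances for `dist` is what incorporates the pathway structure the abstract describes; the estimator is unchanged.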

  11. Describing spatial pattern in stream networks: A practical approach

    USGS Publications Warehouse

    Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.

    2005-01-01

    The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.

  12. Describing sport grounds: an investigation of 'functional' and 'acquaintance' familiarity.

    PubMed

    Peron, E M; Baroni, M R; Falchero, S

    1991-10-01

    The present research was designed to investigate the concept of familiarity and how different kinds of familiarity could affect the coding and memory of places having specific and strong functional significance, i.e., sport courts. Tennis and basketball were selected. Users and nonusers of such sport courts had first to describe a sport court taking the necessary information from their stored schematic knowledge and then to describe a sport court previously seen in a photograph. Subjects' verbal reports showed a certain superiority of users' performance, a commonly found place effect, and the presence of errors only on the second task and mainly by the users group. The results are discussed in terms of the environmental schemata theory and of the different kinds of familiarity considered.

  13. An alternative to soil taxonomy for describing key soil characteristics

    USGS Publications Warehouse

    Duniway, Michael C.; Miller, Mark E.; Brown, Joel R.; Toevs, Gordon

    2013-01-01

    is not a simple task. Furthermore, because the US system of soil taxonomy is not applied universally, its utility as a means for effectively describing soil characteristics to readers in other countries is limited. Finally, and most importantly, even at the finest level of soil classification there are often large within-taxa variations in critical properties that can determine ecosystem responses to drivers such as climate and land-use change.

  14. Polychaete species (Annelida) described from the Philippine and China Seas.

    PubMed

    Salazar-Vallejo, Sergio I; Carrera-Parra, Luis F; Muir, Alexander I; De León-González, Jesús Angel; Piotrowski, Christina; Sato, Masanori

    2014-07-30

    The South China and Philippine Seas are among the most diverse regions in the Western Pacific. Although there are several local polychaete checklists available, there is none comprising the whole of this region. Presented herein is a comprehensive list of the original names of all polychaete species described from the region. The list contains 1037 species, 345 genera and 60 families; the type locality, type depository, and information regarding synonymy are presented for each species. 

  15. Describing depression: Congruence between patient experiences and clinical assessments

    PubMed Central

    Kelly, Morgen A. R.; Morse, Jennifer Q.; Stover, Angela; Hofkens, Tara; Huisman, Emily; Shulman, Stuart; Eisen, Susan V.; Becker, Sara J.; Weinfurt, Kevin; Boland, Elaine; Pilkonis, Paul A.

    2011-01-01

    Objectives Efforts to describe depression have relied on top-down methods in which theory and clinical experience define depression but may not reflect the individuals’ experiences with depression. We assessed the degree of overlap between academic descriptions of depression and patient-reported symptoms as conceptualized in the Patient-Reported Outcomes Measurement Information System® (PROMIS®). By extension, this work assesses the degree of overlap between current clinical descriptions of depression and patient-reported symptoms. Design In this content analysis study, four focus groups were conducted across two sites to elicit symptoms and the experience of depression from depressed and medically ill patients. Methods Depressed and medically ill patients were asked to describe symptoms that characterize depression. Data were transcribed and then coded using an a priori list of 43 facets of depression derived from extant depression measures. Results Participants described 93% of the symptoms from the a priori list, supporting the validity of current depression measures. Interpersonal difficulties were underscored as was anger. In general, results from the focus groups did not require the generation of new items for depression and supported the content validity of the PROMIS hierarchical framework and item pool created originally. Conclusions This work supports the validity of current depression assessment, but suggests further investigation of interpersonal functioning and anger may add to the depth and breadth of depression assessment. PMID:21332520

  16. Describing relevant indices from the resting state electrophysiological networks.

    PubMed

    Toppi, J; Petti, M; De Vico Fallani, F; Vecchiato, G; Maglione, A G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L

    2012-01-01

    The "Default Mode Network" concept was defined, in fMRI field, as a consistent pattern, involving some regions of the brain, which is active during resting state activity and deactivates during attention demanding or goal-directed tasks. Several fMRI studies described its features also correlating the deactivations with the attentive load required for the task execution. Despite the efforts in EEG field, aiming at correlating the spectral features of EEG signals with DMN, an electrophysiological correlate of the DMN hasn't yet been found. In this study we used advanced techniques for functional connectivity estimation for describing the neuroelectrical properties of DMN. We analyzed the connectivity patterns elicited during the rest condition by 55 healthy subjects by means of Partial Directed Coherence. We extracted some graph indexes in order to describe the properties of the resting network in terms of local and global efficiencies, symmetries and influences between different regions of the scalp. Results highlighted the presence of a consistent network, elicited by more than 70% of analyzed population, involving mainly frontal and parietal regions. The properties of the resting network are uniform among the population and could be used for the construction of a normative database for the identification of pathological conditions.

  17. Accurate free energy calculation along optimized paths.

    PubMed

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory but difficult in practice, because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen-atom dihedrals in the initial state and set the equilibrium angles of the potentials to those in the final state. Through a series of geometrical optimization steps, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate the free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of the beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method for accurate free energy calculation.
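    Thermodynamic integration, the first path-based method the abstract names, can be sketched in a few lines. The toy system below (a 1-D harmonic oscillator whose stiffness is switched from k0 to k1) is an invented example chosen because the analytic answer is known; it is not the peptide calculation described above.

```python
import numpy as np

# Thermodynamic integration: Delta-F = integral over lambda of <dU/dlambda>.
# Toy system: U(x; lam) = 0.5 * k(lam) * x^2 with k(lam) = k0 + lam*(k1 - k0).
# For a harmonic oscillator at temperature T, <x^2> = kT / k(lam), so
# <dU/dlam> = 0.5 * (k1 - k0) * kT / k(lam), and the exact free energy
# difference is 0.5 * kT * ln(k1 / k0).
kT = 1.0
k0, k1 = 1.0, 4.0

def mean_dU_dlam(lam):
    k = k0 + lam * (k1 - k0)
    return 0.5 * (k1 - k0) * kT / k  # exact ensemble average for this toy

# Discretize the path in lambda and integrate with the trapezoidal rule.
lams = np.linspace(0.0, 1.0, 101)
vals = np.array([mean_dU_dlam(l) for l in lams])
dF = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(lams)))

exact = 0.5 * kT * np.log(k1 / k0)
print(dF, exact)
```

    In a real simulation the ensemble average would come from sampling at each fixed lambda; the quadrature step is the same.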

  18. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
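    The pseudo-free energy change term can be illustrated as follows. The logarithmic form and the slope/intercept values (2.6 and -0.8 kcal/mol) are the commonly cited defaults for SHAPE-directed folding and are used here as assumptions:

```python
import math

# Pseudo-free energy change applied per nucleotide during folding,
# following the functional form Delta-G_SHAPE(i) = m * ln(S_i + 1) + b.
# The slope and intercept below (kcal/mol) are commonly cited defaults
# for this approach; treat them as assumptions, not certified values.
M_SLOPE = 2.6
B_INTERCEPT = -0.8

def shape_pseudo_energy(reactivity):
    """Map a SHAPE reactivity to a pseudo-free energy term (kcal/mol).

    High reactivity (flexible, likely unpaired) -> positive penalty for
    base pairing; low reactivity -> small pairing bonus.
    """
    return M_SLOPE * math.log(reactivity + 1.0) + B_INTERCEPT

# An unreactive nucleotide gets a pairing bonus, a reactive one a penalty.
print(shape_pseudo_energy(0.0))      # -0.8
print(shape_pseudo_energy(2.0) > 0)  # True
```

    These per-nucleotide terms are added to the nearest-neighbor free energies during minimization, biasing the fold toward structures consistent with the SHAPE data.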

  19. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  20. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
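    For contrast with the O(1) algorithm, the brute-force bilateral filter (the O(S)-per-pixel baseline that such methods approximate) can be written directly. This sketch assumes a grayscale float image and Gaussian spatial and range kernels:

```python
import numpy as np

# Direct (brute-force) bilateral filter for a grayscale float image:
# each output pixel is a weighted average over a window, with weights
# given by a Gaussian spatial kernel times a Gaussian range kernel.
def bilateral_direct(img, sigma_s=2.0, sigma_r=0.2, radius=4):
    H, W = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_range = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = w_spatial * w_range
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# A noisy step edge: noise is smoothed on both sides, the edge survives.
rng = np.random.default_rng(0)
img = np.where(np.arange(32)[None, :] < 16, 0.0, 1.0)
img = img + rng.normal(0.0, 0.05, (32, 32))
sm = bilateral_direct(img)
```

    The double loop makes the O(S) cost per pixel explicit; the paper's contribution is removing that dependence on the window size S.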

  1. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS) with a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep-lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled; temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither, which limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141

  2. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  3. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  4. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography.

    PubMed

    Haley, William E; Ibrahim, El-Sayed H; Qu, Mingliang; Cernigliaro, Joseph G; Goldfarb, David S; McCollough, Cynthia H

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT.

  5. A fast and accurate decoder for underwater acoustic telemetry

    NASA Astrophysics Data System (ADS)

    Ingraham, J. M.; Deng, Z. D.; Li, X.; Fu, T.; McMichael, G. A.; Trumbo, B. A.

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.
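    The time-of-arrival tracking step can be illustrated with a generic linearized least-squares TDOA solver. The hydrophone layout, sound speed, and source position below are invented for illustration and are not JSATS parameters:

```python
import numpy as np

# Illustrative time-difference-of-arrival (TDOA) localization: given the
# times a ping reaches several hydrophones, recover the 3-D source
# position. Subtracting the range equation of a reference hydrophone
# turns the problem into a linear system in (position, range-to-ref).
C = 1482.0  # assumed speed of sound in water, m/s

def locate_tdoa(hyd, toas):
    """Estimate source position from arrival times at >= 5 hydrophones."""
    r = C * (toas[1:] - toas[0])                      # d_i - d_0
    A = np.hstack([2.0 * (hyd[1:] - hyd[0]), 2.0 * r[:, None]])
    b = np.sum(hyd[1:]**2, axis=1) - np.sum(hyd[0]**2) - r**2
    z, *_ = np.linalg.lstsq(A, b, rcond=None)         # z = [px, py, pz, d0]
    return z[:3]

hyd = np.array([[0.0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 20], [30, 30, 10]])
src = np.array([12.0, 7.0, 5.0])
toas = 0.2 + np.linalg.norm(hyd - src, axis=1) / C    # synthetic, noiseless
est = locate_tdoa(hyd, toas)
print(est)  # recovers src up to numerical precision
```

    With noisy arrival times and more hydrophones the same least-squares solve gives a best-fit position, which is where decoder timing accuracy directly drives tracking accuracy.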

  6. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system. PMID:25085162

  7. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs compared with non-ordinal classification architectures. We also propose a new single-value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4 when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses for which we find improvement are gravitational lensing cluster mass estimates, modelling of angular correlation functions, and modelling of cosmic shear correlation functions.
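    One common way to realize ordinal classification (in the spirit of Frank and Hall's cumulative approach) is to train one binary model per redshift threshold and difference the resulting cumulative probabilities into a per-bin PDF. The thresholds and model outputs below are stand-in numbers, not values from the paper:

```python
import numpy as np

# Ordinal-classification sketch: instead of treating redshift bins as
# unordered classes, one binary model per threshold estimates P(z > z_k);
# per-bin probabilities follow by differencing the cumulative curve.
edges = np.array([0.25, 0.50, 0.75, 1.00])   # assumed bin thresholds
p_greater = np.array([0.9, 0.6, 0.2, 0.05])  # stand-in binary-model outputs

# Independent binary models need not be mutually consistent, so enforce
# monotonicity first, then difference the cumulative probabilities.
p_greater = np.minimum.accumulate(p_greater)
cdf = np.concatenate(([1.0], p_greater, [0.0]))  # P(z > -inf)=1, P(z > inf)=0
pdf = cdf[:-1] - cdf[1:]                         # one probability per bin
print(pdf, pdf.sum())
```

    The ordering constraint is what non-ordinal classifiers ignore; exploiting it is the source of the improved PDFs reported above.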

  8. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even for low-resolution facial images (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.

  9. HERMES: A Model to Describe Deformation, Burning, Explosion, and Detonation

    SciTech Connect

    Reaugh, J E

    2011-11-22

    HERMES (High Explosive Response to MEchanical Stimulus) was developed to fill the need for a model to describe an explosive response of the type described as BVR (Burn to Violent Response) or HEVR (High Explosive Violent Response). Characteristically this response leaves a substantial amount of explosive unconsumed, the time to reaction is long, and the peak pressure developed is low. In contrast, detonations characteristically consume all explosive present, the time to reaction is short, and peak pressures are high. However, most of the previous models to describe explosive response were models for detonation. The earliest models to describe the response of explosives to mechanical stimulus in computer simulations were applied to intentional detonation (performance) of nearly ideal explosives. In this case, an ideal explosive is one with a vanishingly small reaction zone. A detonation is supersonic with respect to the undetonated explosive (reactant). The reactant cannot respond to the pressure of the detonation before the detonation front arrives, so the precise compressibility of the reactant does not matter. Further, the mesh sizes that were practical for the computer resources then available were large with respect to the reaction zone. As a result, methods then used to model detonations, known as β-burn or program burn, were not intended to resolve the structure of the reaction zone. Instead, these methods spread the detonation front over a few finite-difference zones, in the same spirit that artificial viscosity is used to spread the shock front in inert materials over a few finite-difference zones. These methods are still widely used when the structure of the reaction zone and the build-up to detonation are unimportant. Later detonation models resolved the reaction zone. These models were applied both to performance, particularly as it is affected by the size of the charge, and to situations in which the stimulus was less than that needed for reliable

  10. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about an explosive growth in the number of pharmaceuticals currently on the market. In daily life, pharmaceuticals can confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly based on the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method achieves an accuracy of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2,000 categories.

  11. Accurate Determination of Conformational Transitions in Oligomeric Membrane Proteins

    PubMed Central

    Sanz-Hernández, Máximo; Vostrikov, Vitaly V.; Veglia, Gianluigi; De Simone, Alfonso

    2016-01-01

    The structural dynamics governing collective motions in oligomeric membrane proteins play key roles in vital biomolecular processes at cellular membranes. In this study, we present a structural refinement approach that combines solid-state NMR experiments and molecular simulations to accurately describe concerted conformational transitions identifying the overall structural, dynamical, and topological states of oligomeric membrane proteins. The accuracy of the structural ensembles generated with this method is shown to reach the statistical error limit, and is further demonstrated by correctly reproducing orthogonal NMR data. We demonstrate the accuracy of this approach by characterising the pentameric state of phospholamban, a key player in the regulation of calcium uptake in the sarcoplasmic reticulum, and by probing its dynamical activation upon phosphorylation. Our results underline the importance of using an ensemble approach to characterise the conformational transitions that are often responsible for the biological function of oligomeric membrane protein states. PMID:26975211

  12. Accurate multiplex gene synthesis from programmable DNA microchips

    NASA Astrophysics Data System (ADS)

    Tian, Jingdong; Gong, Hui; Sheng, Nijing; Zhou, Xiaochuan; Gulari, Erdogan; Gao, Xiaolian; Church, George

    2004-12-01

    Testing the many hypotheses from genomics and systems biology experiments demands accurate and cost-effective gene and genome synthesis. Here we describe a microchip-based technology for multiplex gene synthesis. Pools of thousands of `construction' oligonucleotides and tagged complementary `selection' oligonucleotides are synthesized on photo-programmable microfluidic chips, released, amplified and selected by hybridization to reduce synthesis errors ninefold. A one-step polymerase assembly multiplexing reaction assembles these into multiple genes. This technology enabled us to synthesize all 21 genes that encode the proteins of the Escherichia coli 30S ribosomal subunit, and to optimize their translation efficiency in vitro through alteration of codon bias. This is a significant step towards the synthesis of ribosomes in vitro and should have utility for synthetic biology in general.

  13. Experimental verification of a model describing the intensity distribution from a single mode optical fiber

    SciTech Connect

    Moro, Erik A; Puckett, Anthony D; Todd, Michael D

    2011-01-24

    The intensity distribution of a transmission from a single mode optical fiber is often approximated using a Gaussian-shaped curve. While this approximation is useful for some applications such as fiber alignment, it does not accurately describe transmission behavior off the axis of propagation. In this paper, another model is presented, which describes the intensity distribution of the transmission from a single mode optical fiber. A simple experimental setup is used to verify the model's accuracy, and agreement between model and experiment is established both on and off the axis of propagation. Displacement sensor designs based on the extrinsic optical lever architecture are presented. The behavior of the transmission off the axis of propagation dictates the performance of sensor architectures where large lateral offsets (25-1500 µm) exist between transmitting and receiving fibers. The practical implications of modeling accuracy over this lateral offset region are discussed as they relate to the development of high-performance intensity-modulated optical displacement sensors. In particular, the sensitivity, linearity, resolution, and displacement range of a sensor are functions of the relative positioning of the sensor's transmitting and receiving fibers. Sensor architectures with high combinations of sensitivity and displacement range are discussed. It is concluded that the utility of the accurate model is in its predictive capability and that this research could lead to an improved methodology for high-performance sensor design.

  14. A proposal to describe a phenomenon of expanding language

    NASA Astrophysics Data System (ADS)

    Swietorzecka, Kordula

    Changes of knowledge, convictions, or beliefs are a subject of interest within so-called epistemic logic. Various descriptions have been proposed of the process (or its results) by which a so-called agent introduces changes into a set of sentences he has already adopted as the content of his knowledge, convictions, or beliefs (the many-agent case is also considered). In the present paper we are interested in the changeability of the agent's language, which is in itself independent of the changes already mentioned. Modern epistemic formalizations assume that the agent uses a fixed (one could say static) language in which he expresses his various opinions, which may change. Our interest is to simulate the situation in which a language is extended by adding new expressions that were previously unknown to the agent, so that he could not even consider them as subjects of his opinions. Such a phenomenon actually occurs in both natural and scientific languages; consider the expansion of a language in the process of learning, or as new data about some described domain are acquired. We propose a simple idealization of extending a sentential language used by one agent. The language is treated as a family of so-called n-languages, which receive an epistemic interpretation. The proposed semantics enables us to distinguish between two types of change: those which occur because the agent's convictions about the logical values of some n-sentences change (described by the one-place operator C, read "it changes that"), and changes that consist in raising the level of the n-language by adding new expressions to it. The second type of change, symbolized by the variable G, may also be considered independently of the first. The logical framework of our considerations was originally used to describe the Aristotelian theory of substantial changes; this time we apply it in epistemology.

  15. A Physiology-Based Model Describing Heterogeneity in Glucose Metabolism

    PubMed Central

    Maas, Anne H.; Rozendaal, Yvonne J. W.; van Pul, Carola; Hilbers, Peter A. J.; Cottaar, Ward J.; Haak, Harm R.; van Riel, Natal A. W.

    2014-01-01

    Background: Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES shall provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. Method: The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. Results: All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters allows us to describe heterogeneity in the data and shows the capability of this model for individualization. Conclusion: We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. PMID:25526760
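    A minimal sketch of the same modeling style (coupled nonlinear glucose-insulin ODEs integrated numerically) is given below using a Bergman-type minimal model. These are not the E-DES equations, and the parameter values are order-of-magnitude guesses for illustration only:

```python
# Bergman-style minimal model of glucose kinetics with remote insulin
# action, integrated with forward Euler. Illustrative only: NOT the
# E-DES equations; all parameter values are rough placeholder guesses.
Gb, Ib = 5.0, 10.0          # basal glucose (mmol/L) and insulin (mU/L)
p1, p2, p3 = 0.03, 0.02, 1e-5

def simulate(G0, I_of_t, T=300.0, dt=0.1):
    """Integrate glucose G and remote insulin action X for T minutes."""
    G, X = G0, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        dG = -p1 * (G - Gb) - X * G            # glucose kinetics
        dX = -p2 * X + p3 * (I_of_t(t) - Ib)   # remote insulin action
        G, X = G + dt * dG, X + dt * dX
    return G

# With insulin held at basal, elevated glucose decays back toward Gb.
G_end = simulate(G0=12.0, I_of_t=lambda t: Ib)
print(G_end)
```

    The full model adds meal absorption and further compartments, but the structure (a handful of coupled nonlinear ODEs with patient-specific parameters) is the same idea.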

  16. Effect of Display Color on Pilot Performance and Describing Functions

    NASA Technical Reports Server (NTRS)

    Chase, Wendell D.

    1997-01-01

    A study has been conducted with the full-spectrum, calligraphic, computer-generated display system to determine the effect of the chromatic content of the visual display upon pilot performance during the landing approach maneuver. This study utilizes a new digital chromatic display system, which has previously been shown to improve the perceived fidelity of out-the-window display scenes, and presents the results of an experiment designed to determine the effects of display color content by the measurement of both vertical approach performance and pilot describing functions. This method was selected to more fully explore the visual color cues used by the pilot. Two types of landing approaches were made, dynamic and frozen range, with either a landing approach scene or a perspective array display. The landing approach scene was presented with either red runway lights and blue taxiway lights or with the colors reversed, and the perspective array with red lights, blue lights, or red and blue lights combined. The vertical performance measures obtained in this experiment indicated that the pilots performed best with the blue and red/blue displays and worst with the red displays. The describing-function system analysis showed more variation with the red displays. The crossover frequencies were lowest with the red displays and highest with the combined red/blue displays, which provided the best overall tracking performance. Describing-function performance measures, vertical performance measures, and pilot opinion support the hypothesis that specific colors in displays can influence the pilots' control characteristics during the final approach.

  17. [Health consequences of smoking electronic cigarettes are poorly described].

    PubMed

    Tøttenborg, Sandra Søgaard; Holm, Astrid Ledgaard; Wibholm, Niels Christoffer; Lange, Peter

    2014-09-01

    Despite their increasing popularity, the health consequences of vaping (smoking electronic cigarettes, e-cigarettes) are poorly described. A few studies suggest that vaping has less deleterious effects on lung function than smoking conventional cigarettes. One large study found that e-cigarettes were as efficient as nicotine patches in smoking cessation. The long-term consequences of vaping are, however, unknown, and while some experts are open towards e-cigarettes as a safer way of satisfying nicotine addiction, others worry that vaping, in addition to presenting a health hazard, may lead to an increased number of smokers of conventional cigarettes.

  18. Feshbach resonance described by boson-fermion coupling

    SciTech Connect

    Domanski, T.

    2003-07-01

    We consider the possibility of describing the Feshbach resonance in terms of the boson-fermion (BF) model. Using such a model, we show that after a gradual disentangling of the boson subsystem from the fermion subsystem, resonant-type scattering between fermions is indeed generated. We decouple the subsystems via (a) a single-step and (b) a continuous canonical transformation. With the second, we investigate the feedback effects that effectively lead to a finite amplitude of the scattering strength. We study them in detail in the normal (T > Tc) and superconducting (T ≤ Tc) states.

  19. Can CA describe collective effects of polluting agents?

    NASA Astrophysics Data System (ADS)

    Troisi, A.

    2015-03-01

    Pollution represents one of the most relevant issues of our time. Several studies are on stage but, generally, they do not consider competitive effects, paying attention only to specific agents and their impact. In this paper, a different scheme is suggested. First, a formal model of competitive noxious effects is proposed. Second, by generalizing a previous algorithm capable of describing urban growth, a cellular automata (CA) model is developed that provides the effective impact of a variety of pollutants. The final achievement is a simulation tool that can model the combined effects of pollutants and their dynamical evolution in relation to anthropized environments.
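The CA scheme outlined here can be sketched minimally: a lattice of cells, each carrying concentrations of several pollutants that diffuse locally and interact pairwise. The update rule, coefficients, and two-pollutant setup below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def step(grid, diffusion, interaction, decay=0.01):
    """One CA update: each pollutant diffuses to its von Neumann
    neighbours, decays, and gains a pairwise (competitive) term."""
    new = np.empty_like(grid)
    for k in range(grid.shape[0]):
        c = grid[k]
        # von Neumann neighbourhood average via periodic shifts
        neigh = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                 np.roll(c, 1, 1) + np.roll(c, -1, 1)) / 4.0
        new[k] = (1 - diffusion[k]) * c + diffusion[k] * neigh
    # combined[i] = sum_j interaction[i, j] * new[j]: cross-pollutant coupling
    combined = np.tensordot(interaction, new, axes=1)
    return np.clip((1 - decay) * new + combined, 0.0, None)

# two hypothetical pollutants on a 32x32 lattice
rng = np.random.default_rng(0)
grid = rng.random((2, 32, 32)) * 0.1
grid[0, 16, 16] = 1.0                      # point source of pollutant 0
diffusion = np.array([0.2, 0.4])
interaction = np.array([[0.0, 0.01],       # pollutant 1 mildly boosts 0
                        [0.02, 0.0]])      # and pollutant 0 boosts 1
for _ in range(50):
    grid = step(grid, diffusion, interaction)
```

Swapping the interaction matrix or the local rule changes the collective behaviour; the paper's point is precisely that such coupled rules capture effects single-agent models miss.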

  20. Precise and accurate isotopic measurements using multiple-collector ICPMS

    NASA Astrophysics Data System (ADS)

    Albarède, F.; Telouk, Philippe; Blichert-Toft, Janne; Boyet, Maud; Agranier, Arnaud; Nelson, Bruce

    2004-06-01

    measurements are shown to be part of a single population. Second-order corrections seem to be able to improve the precision on 143Nd/ 144Nd measurements. Finally, after discussing a number of potential pitfalls, such as the consequence of peak shape, correlations introduced by counting statistics, and the effect of memory on double-spike methods, we describe an optimal strategy for high-precision and accurate measurements by MC-ICPMS, which involves the repetitive calibration of cup efficiencies and rigorous assessment of mass bias combined with standard-sample bracketing. We suggest that, when these simple guidelines are followed, MC-ICPMS is capable of producing isotopic data precise and accurate to better than 15 ppm.
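In its simplest form, the standard-sample bracketing strategy mentioned above divides each measured sample ratio by a mass-bias factor interpolated from the standards run immediately before and after it. A minimal sketch with illustrative 143Nd/144Nd numbers (the drift factors and sample value are invented):

```python
def bracket_correct(r_sample, r_std_before, r_std_after, r_std_true):
    """Standard-sample bracketing: divide the measured sample ratio by
    the mass-bias factor interpolated from the bracketing standards."""
    bias = 0.5 * (r_std_before + r_std_after) / r_std_true
    return r_sample / bias

# hypothetical 143Nd/144Nd run: standards drift around the accepted value
corrected = bracket_correct(0.512630 * 1.0020,  # sample, biased ~+0.2%
                            0.512115 * 1.0019,  # standard before
                            0.512115 * 1.0021,  # standard after
                            0.512115)           # accepted standard ratio
```

A real MC-ICPMS protocol would combine this with an exponential mass-bias law and the cup-efficiency calibration the abstract describes; the sketch shows only the bracketing arithmetic.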

  1. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  2. Accurate Thermal Conductivities from First Principles

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian

    2015-03-01

    In spite of significant research efforts, a first-principles determination of the thermal conductivity at high temperatures has remained elusive. On the one hand, Boltzmann transport techniques that include anharmonic effects in the nuclear dynamics only perturbatively become inaccurate or inapplicable under such conditions. On the other hand, non-equilibrium molecular dynamics (MD) methods suffer from enormous finite-size artifacts in the computationally feasible supercells, which prevent an accurate extrapolation to the bulk limit of the thermal conductivity. In this work, we overcome this limitation by performing ab initio MD simulations in thermodynamic equilibrium that account for all orders of anharmonicity. The thermal conductivity is then assessed from the auto-correlation function of the heat flux using the Green-Kubo formalism. Foremost, we discuss the fundamental theory underlying a first-principles definition of the heat flux using the virial theorem. We validate our approach and in particular the techniques developed to overcome finite time and size effects, e.g., by inspecting silicon, the thermal conductivity of which is particularly challenging to converge. Furthermore, we use this framework to investigate the thermal conductivity of ZrO2, which is known for its high degree of anharmonicity. Our calculations shed light on the heat resistance mechanism active in this material, which eventually allows us to discuss how the thermal conductivity can be controlled by doping and co-doping. This work has been performed in collaboration with R. Ramprasad (University of Connecticut), C. G. Levi and C. G. Van de Walle (University of California Santa Barbara).
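The Green-Kubo step described above, in outline: integrate the time-autocorrelation of the heat flux and scale by V/(kB T²). A minimal sketch on a synthetic, exponentially correlated flux series (all numbers are illustrative; a real calculation would use the ab initio flux and a careful choice of the truncation time):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def green_kubo_kappa(flux, dt, volume, temperature, n_corr):
    """Green-Kubo estimate: kappa = V / (kB T^2) * integral <J(0).J(t)> dt,
    with the autocorrelation averaged over all time origins.
    flux: (n_steps, 3) heat-flux time series in SI units."""
    n = len(flux)
    acf = np.array([np.mean(np.sum(flux[:n - lag] * flux[lag:], axis=1))
                    for lag in range(n_corr)])
    integral = dt * np.sum(acf)              # rectangle rule, truncated at n_corr
    return volume / (KB * temperature**2) * integral

# synthetic example: exponentially correlated flux (Ornstein-Uhlenbeck-like)
rng = np.random.default_rng(1)
dt, tau = 1e-15, 5e-14                       # 1 fs step, 50 fs correlation time
a = np.exp(-dt / tau)
noise = rng.normal(size=(20000, 3))
flux = np.empty_like(noise)
flux[0] = noise[0]
for i in range(1, len(noise)):
    flux[i] = a * flux[i - 1] + np.sqrt(1 - a * a) * noise[i]
kappa = green_kubo_kappa(flux * 1e9, dt, volume=1e-26, temperature=300.0,
                         n_corr=2000)
```

The finite-time and finite-size convergence issues the abstract emphasizes show up here as the choice of `n_corr` and the statistical noise in the tail of the autocorrelation.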

  3. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to the film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, based on determination of the LSE per color channel and per dose delivered to the film.
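The correction the authors call for, determining the LSE per color channel and dose and then dividing it out, might be sketched as a lookup-and-interpolate step. The table shape, values, and normalisation below are hypothetical, not the published calibration:

```python
import numpy as np

def correct_lse(pixel, lateral_pos, dose_estimate, lse_table, positions, doses):
    """Correct one channel's pixel value using a measured LSE table:
    lse_table[i, j] = relative readout change at doses[i], positions[j]."""
    # bilinear interpolation of the relative error, then divide it out
    row = np.array([np.interp(lateral_pos, positions, lse_table[i])
                    for i in range(len(doses))])
    err = np.interp(dose_estimate, doses, row)
    return pixel / (1.0 + err)

# hypothetical red-channel table: no error at the axis, 14% at the edges
positions = np.array([-1.0, 0.0, 1.0])       # normalised lateral coordinate
doses = np.array([0.0, 9.0])                 # Gy
lse_table = np.array([[0.00, 0.00, 0.00],
                      [0.14, 0.00, 0.14]])
corrected = correct_lse(114.0, -1.0, 9.0, lse_table, positions, doses)
```

In practice the table would be measured per scanner, per channel, and per film batch, which is exactly the workload the abstract warns about.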

  4. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, based on determination of the LSE per color channel and per dose delivered to the film.

  5. Macro parameters describing the mechanical behavior of classical guitars.

    PubMed

    Elie, Benjamin; Gautier, François; David, Bertrand

    2012-12-01

    Since the 1960s and 1970s, researchers have proposed simplified models using only a few parameters to describe the vibro-acoustical behavior of string instruments in the low-frequency range. This paper presents a method for deriving and estimating a few important parameters or features describing the mechanical behavior of classical guitars over a broader frequency range. These features are selected under the constraint that the measurements may readily be made in the workshop of an instrument maker. The computations of these features use estimates of the modal parameters over a large frequency range, made with the high-resolution subspace ESPRIT algorithm (Estimation of Signal Parameters via Rotational Invariance Techniques) and the signal enumeration technique ESTER (ESTimation of ERror). The methods are applied to experiments on real metal and wood plates and numerical simulations of them. The results on guitars show a nearly constant mode density in the mid- and high-frequency ranges, as is found for a flat panel. Four features are chosen as characteristic parameters of this equivalent plate: mass, rigidity, characteristic admittance, and mobility deviation. Application to a set of 12 guitars indicates that these features are good candidates to discriminate different classes of classical guitars.
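The ESPRIT step used here estimates modal frequencies (and, via the pole magnitudes, dampings) from the signal subspace of a Hankel data matrix. Below is a compact sketch of the generic frequency-estimation core, tested on a synthetic two-mode decay; it is not the authors' implementation and omits the ESTER model-order selection:

```python
import numpy as np

def esprit_freqs(x, n_modes, fs, m=None):
    """Estimate frequencies of damped sinusoids in x with ESPRIT:
    signal subspace of a Hankel matrix plus rotational invariance.
    n_modes counts complex exponentials (2 per real sinusoid)."""
    n = len(x)
    m = m or n // 2
    # Hankel data matrix: column j is the window x[j:j+m]
    H = np.lib.stride_tricks.sliding_window_view(x, m).T
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :n_modes]                      # signal subspace
    # shift invariance: Us[1:] ~ Us[:-1] @ Phi, poles = eigenvalues of Phi
    phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    poles = np.linalg.eigvals(phi)
    return np.sort(np.angle(poles) * fs / (2 * np.pi))

# synthetic test: two decaying modes at 110 Hz and 440 Hz
fs = 8000.0
t = np.arange(2048) / fs
x = np.exp(-3 * t) * np.sin(2 * np.pi * 110 * t) + \
    0.5 * np.exp(-5 * t) * np.sin(2 * np.pi * 440 * t)
freqs = esprit_freqs(x, n_modes=4, fs=fs, m=200)
```

On noiseless data the subspace is exact and both modes are recovered (as conjugate pairs); on measured bridge admittances, the ESTER criterion the paper cites is what picks `n_modes`.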

  6. Identifying, describing, and expressing emotions after critical incidents in paramedics.

    PubMed

    Halpern, Janice; Maunder, Robert G; Schwartz, Brian; Gurevich, Maria

    2012-02-01

    For paramedics, critical incidents evoke intense emotions and may result in later psychological difficulties. We examined 2 ways to deal with emotions after critical incidents: (a) identifying emotions, and (b) describing and expressing emotions, and their association with recovery from acute stress and psychological symptoms. We surveyed 190 paramedics, examining how impaired capacity to identify and describe emotions (alexithymia) and voluntary expression of emotions during contacts with others in the first 24 hours after the incident were associated with recovery from acute stress and current symptoms of PTSD, depression, burnout, and somatization. Overall alexithymia was not associated with recovery, but the component of difficulty identifying feelings was associated with prolonged physical arousal (χ² = 10.1, p = .007). Overall alexithymia and all its components were associated with virtually all current symptoms (correlation coefficients .23-.38, p < .05). Voluntary emotional expression was unrelated to current symptoms. Greater emotional expression was related to greater perceived helpfulness of contacts (χ² = 56.8, p < .001). This suggests that identifying emotions may be important in managing occupational stress in paramedics. In contrast, voluntary emotional expression, although perceived as helpful, may not prevent symptoms. These findings may inform education for paramedics in dealing with stress.

  7. Matrix Formalism to Describe Functional States of Transcriptional Regulatory Systems

    PubMed Central

    Price, Nathan D; Joyce, Andrew R; Palsson, Bernhard O

    2006-01-01

    Complex regulatory networks control the transcription state of a genome. These transcriptional regulatory networks (TRNs) have been mathematically described using a Boolean formalism, in which the state of a gene is represented as either transcribed or not transcribed in response to regulatory signals. The Boolean formalism results in a series of regulatory rules for the individual genes of a TRN that in turn can be used to link environmental cues to the transcription state of a genome, thereby forming a complete transcriptional regulatory system (TRS). Herein, we develop a formalism that represents such a set of regulatory rules in a matrix form. Matrix formalism allows for the systemic characterization of the properties of a TRS and facilitates the computation of the transcriptional state of the genome under any given set of environmental conditions. Additionally, it provides a means to incorporate mechanistic detail of a TRS as it becomes available. In this study, the regulatory network matrix, R, for a prototypic TRS is characterized and the fundamental subspaces of this matrix are described. We illustrate how the matrix representation of a TRS coupled with its environment (R*) allows for a sampling of all possible expression states of a given network, and furthermore, how the fundamental subspaces of the matrix provide a way to study key TRS features and may assist in experimental design. PMID:16895435
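The matrix representation can be illustrated with a toy network: stack the environmental cues and current gene states into one input vector, multiply by a regulatory matrix R, and threshold to obtain the next Boolean transcription state. The three-gene network below is invented for illustration and is not the paper's prototypic TRS:

```python
import numpy as np

# toy 3-gene network: rows = genes, columns = regulatory inputs
# (two environmental cues e1, e2, then the genes' own products g1..g3);
# positive entries activate, negative entries repress -- illustrative only
R = np.array([[ 1,  0,  0,  0, 0],    # g1 ON iff cue e1 is present
              [ 0,  1, -1,  0, 0],    # g2 ON iff e2 present and g1 OFF
              [ 0,  0,  1,  1, 0]])   # g3 ON if g1 or g2 is ON

def transcription_state(env, genes):
    """Apply the regulatory matrix to the current inputs and threshold,
    mimicking one Boolean update of the genome's transcription state."""
    x = np.concatenate([env, genes])
    return (R @ x > 0).astype(int)

# iterate to a steady state under a fixed environment (e1 off, e2 on)
genes = np.zeros(3, dtype=int)
for _ in range(5):
    genes = transcription_state(np.array([0, 1]), genes)
```

Sampling all 2² environments through the same matrix enumerates every reachable expression state, which is the kind of systemic analysis the matrix form makes convenient.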

  8. In their own words: describing Canadian physician leadership.

    PubMed

    Snell, Anita J; Dickson, Graham; Wirtzfeld, Debrah; Van Aerde, John

    2016-07-01

    Purpose This is the first study to compile statistical data to describe the functions and responsibilities of physicians in formal and informal leadership roles in the Canadian health system. This mixed-methods research study offers baseline data relative to this purpose, and also describes physician leaders' views on fundamental aspects of their leadership responsibility. Design/methodology/approach A survey with both quantitative and qualitative fields yielded 689 valid responses from physician leaders. Data from the survey were utilized in the development of a semi-structured interview guide; 15 physician leaders were interviewed. Findings A profile of Canadian physician leadership has been compiled, including demographics; an outline of roles, responsibilities, time commitments and related compensation; and personal factors that support, engage and deter physicians when considering taking on leadership roles. The role of health-care organizations in encouraging and supporting physician leadership is explicated. Practical implications The baseline data on Canadian physician leaders create the opportunity to determine potential steps for improving the state of physician leadership in Canada; and health-care organizations are provided with a wealth of information on how to encourage and support physician leaders. Using the data as a benchmark, comparisons can also be made with physician leadership as practiced in other nations. Originality/value There are no other research studies available that provide the depth and breadth of detail on Canadian physician leadership, and the embedded recommendations to health-care organizations are informed by this in-depth knowledge.

  9. Colour in flux: describing and printing colour in art

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna

    2008-01-01

    This presentation will describe artists, practitioners and scientists who, working with wavelength, paint and other materials, were interested in developing a deeper psychological, emotional and practical understanding of the human visual system. Drawing on a selection of prints at the Prints and Drawings Department at Tate London, the presentation will refer to artists who were motivated by issues relating to how colour pigment was mixed and printed, will interrogate and explain colour perception and colour science, and will consider how, in art, artists have used colour to challenge the viewer and how a viewer might describe their experience of colour. The title Colour in Flux refers not only to the perceptual effect of the juxtaposition of one colour pigment with another, but also to the changes and challenges facing the print industry. In the light of screenprinted examples from the 60s and 70s, the presentation will discuss 21st-century ideas on colour and how these notions have informed the Centre for Fine Print Research's (CFPR) practical research in colour printing. The latter part of the presentation will discuss the implications of the need to change ink-mixing methods: moving away from existing colour spaces and non-intuitive colour mixing towards bespoke ink sets, and towards colour mixing approaches and methods that are not reliant on RGB or CMYK.

  10. Macro parameters describing the mechanical behavior of classical guitars.

    PubMed

    Elie, Benjamin; Gautier, François; David, Bertrand

    2012-12-01

    Since the 1960s and 1970s, researchers have proposed simplified models using only a few parameters to describe the vibro-acoustical behavior of string instruments in the low-frequency range. This paper presents a method for deriving and estimating a few important parameters or features describing the mechanical behavior of classical guitars over a broader frequency range. These features are selected under the constraint that the measurements may readily be made in the workshop of an instrument maker. The computations of these features use estimates of the modal parameters over a large frequency range, made with the high-resolution subspace ESPRIT algorithm (Estimation of Signal Parameters via Rotational Invariance Techniques) and the signal enumeration technique ESTER (ESTimation of ERror). The methods are applied to experiments on real metal and wood plates and numerical simulations of them. The results on guitars show a nearly constant mode density in the mid- and high-frequency ranges, as is found for a flat panel. Four features are chosen as characteristic parameters of this equivalent plate: mass, rigidity, characteristic admittance, and mobility deviation. Application to a set of 12 guitars indicates that these features are good candidates to discriminate different classes of classical guitars. PMID:23231130

  11. In their own words: describing Canadian physician leadership.

    PubMed

    Snell, Anita J; Dickson, Graham; Wirtzfeld, Debrah; Van Aerde, John

    2016-07-01

    Purpose This is the first study to compile statistical data to describe the functions and responsibilities of physicians in formal and informal leadership roles in the Canadian health system. This mixed-methods research study offers baseline data relative to this purpose, and also describes physician leaders' views on fundamental aspects of their leadership responsibility. Design/methodology/approach A survey with both quantitative and qualitative fields yielded 689 valid responses from physician leaders. Data from the survey were utilized in the development of a semi-structured interview guide; 15 physician leaders were interviewed. Findings A profile of Canadian physician leadership has been compiled, including demographics; an outline of roles, responsibilities, time commitments and related compensation; and personal factors that support, engage and deter physicians when considering taking on leadership roles. The role of health-care organizations in encouraging and supporting physician leadership is explicated. Practical implications The baseline data on Canadian physician leaders create the opportunity to determine potential steps for improving the state of physician leadership in Canada; and health-care organizations are provided with a wealth of information on how to encourage and support physician leaders. Using the data as a benchmark, comparisons can also be made with physician leadership as practiced in other nations. Originality/value There are no other research studies available that provide the depth and breadth of detail on Canadian physician leadership, and the embedded recommendations to health-care organizations are informed by this in-depth knowledge. PMID:27397749

  12. A simple, sensitive, and accurate alcohol electrode

    SciTech Connect

    Verduyn, C.; Scheffers, W.A.; Van Dijken, J.P.

    1983-04-01

    The construction and performance of an enzyme electrode is described which specifically detects lower primary aliphatic alcohols in aqueous solutions. The electrode consists of a commercial Clark-type oxygen electrode on which alcohol oxidase (E.C. 1.1.3.13) and catalase were immobilized. The decrease in electrode current is linearly proportional to ethanol concentrations between 1 and 25 ppm. The response of the electrode remains constant during 400 assays over a period of two weeks. The response time is between 1 and 2 min. Assembly of the electrode takes less than 1 h.
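Because the current decrease is linear in ethanol concentration over the 1-25 ppm working range, readout reduces to a linear calibration. A sketch with made-up calibration readings (the real slope would come from the specific electrode):

```python
import numpy as np

# hypothetical calibration: current decrease (nA) vs ethanol (ppm),
# linear over the electrode's stated 1-25 ppm working range
ppm = np.array([1.0, 5.0, 10.0, 15.0, 20.0, 25.0])
delta_i = np.array([4.1, 20.4, 40.9, 61.2, 81.6, 102.0])  # made-up readings
slope, intercept = np.polyfit(ppm, delta_i, 1)

def ethanol_ppm(current_drop):
    """Invert the linear calibration to read a concentration."""
    return (current_drop - intercept) / slope
```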

  13. The Global Geodetic Infrastructure for Accurate Monitoring of Earth Systems

    NASA Astrophysics Data System (ADS)

    Weston, Neil; Blackwell, Juliana; Wang, Yan; Willis, Zdenka

    2014-05-01

    The National Geodetic Survey (NGS) and the Integrated Ocean Observing System (IOOS), two Program Offices within the National Ocean Service, NOAA, routinely collect, analyze and disseminate observations and products from several of the 17 critical systems identified by the U.S. Group on Earth Observations. Gravity, sea level monitoring, coastal zone and ecosystem management, geo-hazards and deformation monitoring and ocean surface vector winds are the primary Earth systems that have active research and operational programs in NGS and IOOS. These Earth systems collect terrestrial data but most rely heavily on satellite-based sensors for analyzing impacts and monitoring global change. One fundamental component necessary for monitoring via satellites is having a stable, global geodetic infrastructure where an accurate reference frame is essential for consistent data collection and geo-referencing. This contribution will focus primarily on system monitoring, coastal zone management and global reference frames and how the scientific contributions from NGS and IOOS continue to advance our understanding of the Earth and the Global Geodetic Observing System.

  14. Describing the impact of health research: a Research Impact Framework

    PubMed Central

    Kuruvilla, Shyama; Mays, Nicholas; Pleasant, Andrew; Walt, Gill

    2006-01-01

    Background Researchers are increasingly required to describe the impact of their work, e.g. in grant proposals, project reports, press releases and research assessment exercises. Specialised impact assessment studies can be difficult to replicate and may require resources and skills not available to individual researchers. Researchers are often hard-pressed to identify and describe research impacts and ad hoc accounts do not facilitate comparison across time or projects. Methods The Research Impact Framework was developed by identifying potential areas of health research impact from the research impact assessment literature and based on research assessment criteria, for example, as set out by the UK Research Assessment Exercise panels. A prototype of the framework was used to guide an analysis of the impact of selected research projects at the London School of Hygiene and Tropical Medicine. Additional areas of impact were identified in the process and researchers also provided feedback on which descriptive categories they thought were useful and valid vis-à-vis the nature and impact of their work. Results We identified four broad areas of impact: I. Research-related impacts; II. Policy impacts; III. Service impacts: health and intersectoral and IV. Societal impacts. Within each of these areas, further descriptive categories were identified. For example, the nature of research impact on policy can be described using the following categorisation, put forward by Weiss: Instrumental use where research findings drive policy-making; Mobilisation of support where research provides support for policy proposals; Conceptual use where research influences the concepts and language of policy deliberations and Redefining/wider influence where research leads to rethinking and changing established practices and beliefs. Conclusion Researchers, while initially sceptical, found that the Research Impact Framework provided prompts and descriptive categories that helped them

  15. Strength in Numbers: Describing the Flooded Area of Isolated Wetlands

    USGS Publications Warehouse

    Lee, Terrie M.; Haag, Kim H.

    2006-01-01

    Thousands of isolated, freshwater wetlands are scattered across the karst landscape of central Florida. Most are small (less than 15 acres), shallow, marsh and cypress wetlands that flood and dry seasonally. Wetland health is threatened when wetland flooding patterns are altered either by human activities, such as land-use change and ground-water pumping, or by changes in climate. Yet the small sizes and vast numbers of isolated wetlands in Florida challenge our efforts to characterize them collectively as a statewide water resource. In the northern Tampa Bay area of west-central Florida alone, water levels are measured monthly in more than 400 wetlands by the Southwest Florida Water Management District (SWFWMD). Many wetlands have over a decade of measurements. The usefulness of long-term monitoring of wetland water levels would greatly increase if it described not just the depth of water at a point in the wetland, but also the amount of the total wetland area that was flooded. Water levels can be used to estimate the flooded area of a wetland if the elevation contours of the wetland bottom are determined by bathymetric mapping. Despite the recognized importance of the flooded area to wetland vegetation, bathymetric maps are not available to describe the flooded areas of even a representative number of Florida's isolated wetlands. Information on the bathymetry of isolated wetlands is rare because it is labor intensive to collect the land-surface elevation data needed to create the maps. Five marshes and five cypress wetlands were studied by the U.S. Geological Survey (USGS) during 2000 to 2004 as part of a large interdisciplinary study of isolated wetlands in central Florida. The wetlands are located either in municipal well fields or on publicly owned lands (fig. 1). The 10 wetlands share similar geology and climate, but differ in their ground-water settings. All have historical water-level data and multiple vegetation surveys.
A comprehensive report by Haag and
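The stage-to-flooded-area conversion described above amounts to interpolating a stage-area curve derived from the bathymetric map. A sketch with a hypothetical table (units and values are placeholders):

```python
import numpy as np

# hypothetical stage-area table from a bathymetric survey: cumulative
# flooded area (acres) at each bottom-elevation contour (ft above datum)
stage_ft = np.array([36.0, 37.0, 38.0, 39.0, 40.0])
area_ac  = np.array([ 0.0,  1.2,  3.5,  7.8, 12.0])

def flooded_area(water_level_ft):
    """Interpolate the stage-area curve at a measured water level."""
    return float(np.interp(water_level_ft, stage_ft, area_ac))
```

With such a curve in hand, each monthly water-level reading converts directly into a flooded-area estimate, which is the payoff of the bathymetric mapping the study describes.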

  16. Plant diversity accurately predicts insect diversity in two tropical landscapes.

    PubMed

    Zhang, Kai; Lin, Siliang; Ji, Yinqiu; Yang, Chenxue; Wang, Xiaoyang; Yang, Chunyan; Wang, Hesheng; Jiang, Haisheng; Harrison, Rhett D; Yu, Douglas W

    2016-09-01

    Plant diversity surely determines arthropod diversity, but only moderate correlations between arthropod and plant species richness had been observed until Basset et al. (2012, Science, 338, 1481) finally undertook an unprecedentedly comprehensive sampling of a tropical forest and demonstrated that plant species richness could indeed accurately predict arthropod species richness. We now require a high-throughput pipeline to operationalize this result so that we can (i) test competing explanations for tropical arthropod megadiversity, (ii) improve estimates of global eukaryotic species diversity, and (iii) use plant and arthropod communities as efficient proxies for each other, thus improving the efficiency of conservation planning and of detecting forest degradation and recovery. We therefore applied metabarcoding to Malaise-trap samples across two tropical landscapes in China. We demonstrate that plant species richness can accurately predict arthropod (mostly insect) species richness and that plant and insect community compositions are highly correlated, even in landscapes that are large, heterogeneous and anthropogenically modified. Finally, we review how metabarcoding makes feasible highly replicated tests of the major competing explanations for tropical megadiversity. PMID:27474399

  17. A novel automated image analysis method for accurate adipocyte quantification

    PubMed Central

    Osman, Osman S; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J; O’Dowd, Jacqueline F; Cawthorne, Michael A; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-sectional area of adipocytes in histological sections, allowing rapid high-throughput quantification of fat cell size and number. Photomicrographs of H&E-stained paraffin sections of murine gonadal adipose were transformed using standard image processing/analysis algorithms to reduce background and enhance edge-detection. This allowed the isolation of individual adipocytes from which their area could be calculated. Performance was compared with manual measurements made from the same images, in which adipocyte area was calculated from estimates of the major and minor axes of individual adipocytes. Both methods identified an increase in mean adipocyte size in a murine model of obesity, with good concordance, although the calculation used to identify cell area from manual measurements was found to consistently over-estimate cell size. Here we report an accurate method to determine adipocyte area in histological sections that provides a considerable time saving over manual methods. PMID:23991362
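The automated pipeline described (background reduction, region isolation, per-cell area) can be caricatured with standard image-processing primitives. The thresholds and the synthetic image below are illustrative, not the published method:

```python
import numpy as np
from scipy import ndimage

def adipocyte_areas(image, threshold, min_area=50):
    """Sketch of the pipeline: threshold the grayscale section so cell
    interiors become foreground, clean up speckle, label connected
    components, and return each cell's cross-sectional area in pixels."""
    mask = image > threshold
    mask = ndimage.binary_opening(mask)       # suppress small speckle
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return areas[areas >= min_area]

# synthetic "section": two bright circular cells on a dark background
yy, xx = np.mgrid[:128, :128]
img = np.zeros((128, 128))
img[(yy - 40)**2 + (xx - 40)**2 < 15**2] = 1.0
img[(yy - 90)**2 + (xx - 90)**2 < 10**2] = 1.0
areas = adipocyte_areas(img, threshold=0.5)
```

Real H&E sections additionally need edge enhancement and separation of touching cells (e.g. a watershed step), which is where the published method's accuracy gains over naive thresholding would come from.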

  18. Accurate measurements of dynamics and reproducibility in small genetic networks

    PubMed Central

    Dubuis, Julien O; Samanta, Reba; Gregor, Thomas

    2013-01-01

    Quantification of gene expression has become a central tool for understanding genetic networks. In many systems, the only viable way to measure protein levels is by immunofluorescence, which is notorious for its limited accuracy. Using the early Drosophila embryo as an example, we show that careful identification and control of experimental error allows for highly accurate gene expression measurements. We generated antibodies in different host species, allowing for simultaneous staining of four Drosophila gap genes in individual embryos. Careful error analysis of hundreds of expression profiles reveals that less than ∼20% of the observed embryo-to-embryo fluctuations stem from experimental error. These measurements make it possible to extract not only very accurate mean gene expression profiles but also their naturally occurring fluctuations of biological origin and corresponding cross-correlations. We use this analysis to extract gap gene profile dynamics with ∼1 min accuracy. The combination of these new measurements and analysis techniques reveals a twofold increase in profile reproducibility owing to a collective network dynamics that relays positional accuracy from the maternal gradients to the pair-rule genes. PMID:23340845

  19. Automatic and Accurate Shadow Detection Using Near-Infrared Information.

    PubMed

    Rüfenacht, Dominic; Fredembach, Clément; Süsstrunk, Sabine

    2014-08-01

    We present a method to automatically detect shadows in a fast and accurate manner by taking advantage of the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. Dark objects, which confound many shadow detection algorithms, often have much higher reflectance in the NIR. We can thus build an accurate shadow candidate map based on image pixels that are dark both in the visible and NIR representations. We further refine the shadow map by incorporating ratios of the visible to the NIR image, based on the observation that commonly encountered light sources have very distinct spectra in the NIR band. The results are validated on a new database, which contains visible/NIR images for a large variety of real-world shadow creating illuminant conditions, as well as manually labeled shadow ground truth. Both quantitative and qualitative evaluations show that our method outperforms current state-of-the-art shadow detection algorithms in terms of accuracy and computational efficiency.
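    The core candidate test, dark in the visible but not brightened in the NIR, reduces to a per-pixel conjunction. A toy sketch with hypothetical thresholds (the paper's actual maps are continuous and further refined with visible/NIR ratios):

```python
# One image row: a dark object (dark visible, bright NIR), a true
# shadow (dark in both bands), and a lit surface (bright in both).
VIS = [[0.10, 0.10, 0.80]]
NIR = [[0.90, 0.10, 0.80]]

def shadow_candidates(vis, nir, t_vis=0.3, t_nir=0.3):
    """Binary candidate map: flag pixels dark in BOTH bands.
    Threshold values here are illustrative, not from the paper."""
    return [[v < t_vis and n < t_nir for v, n in zip(vr, nr)]
            for vr, nr in zip(vis, nir)]

print(shadow_candidates(VIS, NIR))  # only the middle (shadow) pixel survives
```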

  20. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    PubMed

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed.

  1. Concepts and methods for describing critical phenomena in fluids

    NASA Technical Reports Server (NTRS)

    Sengers, J. V.; Sengers, J. M. H. L.

    1977-01-01

    The predictions of theoretical models for a critical-point phase transition in fluids, namely the classical equation with third-degree critical isotherm, that with fifth-degree critical isotherm, and the lattice gas, are reviewed. The renormalization group theory of critical phenomena and the hypothesis of universality of critical behavior supported by this theory are discussed as well as the nature of gravity effects and how they affect critical-region experimentation in fluids. The behavior of the thermodynamic properties and the correlation function is formulated in terms of scaling laws. The predictions of these scaling laws and of the hypothesis of universality of critical behavior are compared with experimental data for one-component fluids and it is indicated how the methods can be extended to describe critical phenomena in fluid mixtures.

  2. Dynamics of rotating fluids described by scalar potentials

    NASA Astrophysics Data System (ADS)

    Seyed-Mahmoud, Behnam; Rochester, Michael

    2006-06-01

    The oscillatory dynamics of a rotating, self-gravitating, stratified, compressible, inviscid fluid body is simplified by an exact description in terms of three scalar fields constructed from the dilatation and the perturbations in pressure and gravitational potential [Seyed-Mahmoud, B., 1994. Wobble/nutation of a rotating ellipsoidal Earth with liquid core: implementation of a new set of equations describing dynamics of rotating fluids. M.Sc. Thesis, Memorial University of Newfoundland]. We test the method by applying it to compressible, but neutrally-stratified, models of the Earth's liquid core, including a solid inner core, and compute the frequencies of some of the inertial modes. We conclude the method should be further exploited for astrophysical and geophysical normal mode computations.

  3. A broadly applicable function for describing luminescence dose response

    SciTech Connect

    Burbidge, C. I.

    2015-07-28

    The basic form of luminescence dose response is investigated, with the aim of developing a single function to account for the appearance of linear, superlinear, sublinear, and supralinear behaviors and variations in saturation signal level and rate. A function is assembled based on the assumption of first order behavior in different major factors contributing to measured luminescence-dosimetric signals. Different versions of the function are developed for standardized and non-dose-normalized responses. Data generated using a two trap two recombination center model and experimental data for natural quartz are analyzed to compare results obtained using different signals, measurement protocols, pretreatment conditions, and radiation qualities. The function well describes a range of dose dependent behavior, including sublinear, superlinear, supralinear, and non-monotonic responses and relative response to α and β radiation, based on change in relative recombination and trapping probability affecting signals sourced from a single electron trap.

  4. Construction of Virtual Psychology Laboratory Describing Exploratory Experimental Behavior

    NASA Astrophysics Data System (ADS)

    Nakaike, Ryuichi; Miwa, Kazuhisa

    In the present study, we present a simulated experiment environment, VPL (Virtual Psychology Laboratory), for visualizing users' exploratory experimental behavior, and describe its two main modules: (1) a cognitive simulator and (2) a system for automatically describing the experimenter's behavior based on the EBS (Exploratory Behavior Schema) proposed by the author. Users take the role of an experimental psychologist investigating human collaborative discovery. They conduct many experimental trials in the simulated environment and analyze their own experimental processes based on the EBS description of their behavior. By repeating these experimental activities, learners are expected to notice errors in their experimental planning and to refine various types of knowledge related to experimental skills.

  5. Parameter uncertainty in biochemical models described by ordinary differential equations.

    PubMed

    Vanlier, J; Tiemann, C A; Hilbers, P A J; van Riel, N A W

    2013-12-01

    Improved mechanistic understanding of biochemical networks is one of the driving ambitions of Systems Biology. Computational modeling allows the integration of various sources of experimental data in order to put this conceptual understanding to the test in a quantitative manner. The aim of computational modeling is to obtain both predictive and explanatory models for complex phenomena, thereby providing useful approximations of reality with varying levels of detail. As the complexity required to describe different systems increases, so does the need for determining how well such predictions can be made. Despite efforts to make tools for uncertainty analysis available to the field, these methods have not yet found widespread use in the field of Systems Biology. Additionally, the suitability of the different methods strongly depends on the problem and system under investigation. This review provides an introduction to some of the available techniques as well as an overview of the state-of-the-art methods for parameter uncertainty analysis.

  6. A framework for describing health care delivery organizations and systems.

    PubMed

    Piña, Ileana L; Cohen, Perry D; Larson, David B; Marion, Lucy N; Sills, Marion R; Solberg, Leif I; Zerzan, Judy

    2015-04-01

    Describing, evaluating, and conducting research on the questions raised by comparative effectiveness research and characterizing care delivery organizations of all kinds, from independent individual provider units to large integrated health systems, has become imperative. Recognizing this challenge, the Delivery Systems Committee, a subgroup of the Agency for Healthcare Research and Quality's Effective Health Care Stakeholders Group, which represents a wide diversity of perspectives on health care, created a draft framework with domains and elements that may be useful in characterizing various sizes and types of care delivery organizations and may contribute to key outcomes of interest. The framework may serve as the door to further studies in areas in which clear definitions and descriptions are lacking.

  7. A new way of describing the Dirac bands in graphene

    NASA Astrophysics Data System (ADS)

    Kissinger, Gregory; Satpathy, Sashi

    We develop a new way of describing the electronic structure of graphene, by treating the honeycomb lattice as a network of one-dimensional quantum wires. The electrons travel as free particles along these quantum wires and interfere at the three-way junctions formed by the carbon atoms. The model generates the linearly dispersive Dirac cone band structure as well as the chiral nature of the pseudo-spin sublattice wave functions. When vacancies are incorporated, we find that it also reproduces the well known zero mode states. This simple approach might have advantages over other methods for some applications, such as in analyzing electronic transport through graphene nanoribbons. In addition, this finding suggests new ways of constructing Dirac band materials in the laboratory by nano-patterning for investigating Dirac fermions.

  8. Diffraction described by virtual particle momentum exchange: the "diffraction force"

    NASA Astrophysics Data System (ADS)

    Mobley, Michael J.

    2011-09-01

    Particle diffraction can be described by an ensemble of particle paths determined through a Fourier analysis of a scattering lattice where the momentum exchange probabilities are defined at the location of scattering, not the point of detection. This description is compatible with optical wave theories and quantum particle models and provides deeper insights to the nature of quantum uncertainty. In this paper the Rayleigh-Sommerfeld and Fresnel-Kirchhoff theories are analyzed for diffraction by a narrow slit and a straight edge to demonstrate the dependence of particle scattering on the distance of virtual particle exchange. The quantized momentum exchange is defined by the Heisenberg uncertainty principle and is consistent with the formalism of QED. This exchange of momentum manifests the "diffraction force" that appears to be a universal construct as it applies to neutral and charged particles. This analysis indicates virtual particles might form an exchange channel that bridges the space of momentum exchange.

  9. Method to describe stochastic dynamics using an optimal coordinate.

    PubMed

    Krivov, Sergei V

    2013-12-01

    A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function. PMID:24483410

  10. Angular momentum and torque described with the complex octonion

    SciTech Connect

    Weng, Zi-Hua

    2014-08-15

    The paper adopts the complex octonion to formulate the angular momentum, torque, and force in the electromagnetic and gravitational fields. The octonionic representation enables a single definition of angular momentum (or torque, or force) to combine physical quantities that were previously considered independent of each other. J. C. Maxwell used two methods simultaneously, vector terminology and quaternion analysis, to depict electromagnetic theory. This motivates the paper to introduce the quaternion space into field theory to describe the physical features of the electromagnetic and gravitational fields. The spaces of the electromagnetic and gravitational fields can be chosen as quaternion spaces, whose coordinate components may be complex numbers. The quaternion space of the electromagnetic field is independent of that of the gravitational field, and these two quaternion spaces may compose one octonion space. Conversely, one octonion space can be separated into two subspaces, the quaternion space and the S-quaternion space. In the quaternion space, one can infer the field potential, field strength, field source, angular momentum, torque, and force of the gravitational field. In the S-quaternion space, one can deduce the field potential, field strength, field source, current continuity equation, and electric (or magnetic) dipolar moment of the electromagnetic field. The results reveal that the quaternion space is appropriate for describing gravitational features, including the torque, force, and mass continuity equation, while the S-quaternion space is suited to electromagnetic features, including the dipolar moment and current continuity equation. When the field strength is weak enough, the force and the continuity equations reduce to those of classical field theory.

  12. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  13. A potential model for methane in water describing correctly the solubility of the gas and the properties of the methane hydrate.

    PubMed

    Docherty, H; Galindo, A; Vega, C; Sanz, E

    2006-08-21

    We have obtained the excess chemical potential of methane in water, over a broad range of temperatures, from computer simulation. The methane molecules are described as simple Lennard-Jones interaction sites, while water is modeled by the recently proposed TIP4P/2005 model. We have observed that the experimental values of the chemical potential are not reproduced when using the Lorentz-Berthelot combining rules. However, we also noticed that the deviation is systematic, suggesting that this may be corrected. In fact, by introducing positive deviations from the energetic Lorentz-Berthelot rule to account indirectly for the polarization methane-water energy, we are able to describe accurately the excess chemical potential of methane in water. Thus, by using a model capable of describing accurately the density of pure water in a wide range of temperatures and by deviating from the Lorentz-Berthelot combining rules, it is possible to reproduce the properties of methane in water at infinite dilution. In addition, we have applied this methane-water potential to the study of the solid methane hydrate structure, commonly denoted as sI, and find that the model describes the experimental value of the unit cell of the hydrate with an error of about 0.2%. Moreover, we have considered the effect of the amount of methane contained in the hydrate. In doing so, we determine that the presence of methane increases slightly the value of the unit cell and decreases slightly the compressibility of the structure. We also note that the presence of methane increases greatly the range of pressures where the sI hydrate is mechanically stable. PMID:16942354
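    The energetic deviation the authors introduce amounts to scaling the geometric-mean rule for the cross interaction energy. A sketch with illustrative Lennard-Jones parameters (χ = 1 recovers the standard Lorentz-Berthelot rules; the paper's fitted deviation is not reproduced here):

```python
import math

def cross_parameters(eps_i, eps_j, sig_i, sig_j, chi=1.0):
    """Lorentz-Berthelot combining rules with an energetic deviation
    factor chi; chi > 1 strengthens the unlike-pair attraction, here
    standing in for the missing methane-water polarization energy."""
    eps_ij = chi * math.sqrt(eps_i * eps_j)  # Berthelot rule (scaled)
    sig_ij = 0.5 * (sig_i + sig_j)           # Lorentz rule
    return eps_ij, sig_ij

# Illustrative well depths and diameters, not the actual
# methane/TIP4P-2005 parameters.
eps_std, sig = cross_parameters(1.0, 4.0, 3.0, 3.5)
eps_dev, _ = cross_parameters(1.0, 4.0, 3.0, 3.5, chi=1.07)
```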

  15. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-07-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments and their interior points are derived using the contour equation. The locations of contour boundaries and the interior points on contour surfaces are as accurate as the interpolation results obtained by hexahedral elements and thus there are no discrepancies between the analysis and visualization results.
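    Restricted to a single hexahedral edge, the trilinear interpolant is linear, so the contour boundary point on an edge follows from straight inverse interpolation between the nodal values. A sketch of that one step (assuming the contour level lies between the two nodal values):

```python
def edge_crossing(p0, p1, f0, f1, c):
    """Point on edge p0->p1 where the interpolated field equals contour
    level c. Along an edge the trilinear interpolant is linear, so this
    matches the element's own interpolation accuracy. Assumes f0 != f1
    and min(f0, f1) <= c <= max(f0, f1)."""
    t = (c - f0) / (f1 - f0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Field goes 0 -> 2 along a unit edge; the c = 0.5 contour crosses a
# quarter of the way along.
print(edge_crossing((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.0, 2.0, 0.5))
```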

  16. Beyond Rainfall Multipliers: Describing Input Uncertainty as an Autocorrelated Stochastic Process Improves Inference in Hydrology

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Albert, C.; Reichert, P.; Rieckermann, J.

    2015-12-01

    Rainfall is the main driver of hydrological systems. Unfortunately, it is highly variable in space and time and therefore difficult to observe accurately. This poses a serious challenge to correctly estimate the catchment-averaged precipitation, a key factor for hydrological models. As biased precipitation leads to biased parameter estimation and thus to biased runoff predictions, it is very important to have a realistic description of precipitation uncertainty. Rainfall multipliers (RM), which correct each observed storm with a random factor, provide a first step into this direction. Nevertheless, they often fail when the estimated input has a different temporal pattern from the true one or when a storm is not detected by the raingauge. In this study we propose a more realistic input error model, which is able to overcome these challenges and increase our certainty by better estimating model input and parameters. We formulate the average precipitation over the watershed as a stochastic input process (SIP). We suggest a transformed Gauss-Markov process, which is estimated in a Bayesian framework by using input (rainfall) and output (runoff) data. We tested the methodology in a 28.6 ha urban catchment represented by an accurate conceptual model. Specifically, we perform calibration and predictions with SIP and RM using accurate data from nearby raingauges (R1) and inaccurate data from a distant gauge (R2). Results show that using SIP, the estimated model parameters are "protected" from the corrupting impact of inaccurate rainfall. Additionally, SIP can correct input biases during calibration (Figure) and reliably quantify rainfall and runoff uncertainties during both calibration (Figure) and validation. In our real-world application with non-trivial rainfall errors, this was not the case with RM. We therefore recommend SIP in all cases where the input is the predominant source of uncertainty. Furthermore, the high-resolution rainfall intensities obtained with this
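    A first-order Gauss-Markov process (a discretized Ornstein-Uhlenbeck process) is the standard way to generate the kind of autocorrelated input error SIP posits. A sketch with invented parameters (the paper additionally transforms the process and infers its parameters in a Bayesian framework):

```python
import math
import random

def gauss_markov(n, dt=1.0, tau=10.0, sigma=1.0, seed=0):
    """Stationary first-order Gauss-Markov sequence with correlation
    time tau and stationary standard deviation sigma (all parameter
    values here are illustrative, not from the paper)."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)                  # lag-1 correlation
    innov = sigma * math.sqrt(1.0 - phi * phi)  # keeps variance at sigma^2
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + innov * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# Unlike independent storm multipliers, successive errors are strongly
# correlated, mimicking a raingauge that mis-times or misses a storm.
errors = gauss_markov(1000)
```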

  17. New Claus catalyst tests accurately reflect process conditions

    SciTech Connect

    Maglio, A.; Schubert, P.F.

    1988-09-12

    Methods for testing Claus catalysts are developed that more accurately represent the actual operating conditions in commercial sulfur recovery units. For measuring catalyst activity, an aging method has been developed that results in more meaningful activity data after the catalyst has been aged, because all catalysts undergo rapid initial deactivation in commercial units. An activity test method has been developed where catalysts can be compared at less than equilibrium conversion. A test has also been developed to characterize abrasion loss of Claus catalysts, in contrast to the traditional method of determining physical properties by measuring crush strengths. Test results from a wide range of materials correlated well with actual pneumatic conveyance attrition. Substantial differences in Claus catalyst properties were observed as a result of using these tests.

  18. CLOMP: Accurately Characterizing OpenMP Application Overheads

    SciTech Connect

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B

    2008-02-11

    Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy to understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. Further, we show that CLOMP also captures limitations for OpenMP parallelization on NUMA systems.

  19. Accurate and reproducible determination of lignin molar mass by acetobromination.

    PubMed

    Asikkala, Janne; Tamminen, Tarja; Argyropoulos, Dimitris S

    2012-09-12

    The accurate and reproducible determination of lignin molar mass by using size exclusion chromatography (SEC) is challenging. The lignin association effects, known to dominate underivatized lignins, have been thoroughly addressed by reaction with acetyl bromide in an excess of glacial acetic acid. The combination of a concerted acetylation with the introduction of bromine within the lignin alkyl side chains is thought to be responsible for the observed excellent solubilization characteristics acetobromination imparts to a variety of lignin samples. The proposed methodology was compared and contrasted to traditional lignin derivatization methods. In addition, side reactions that could possibly be induced under the acetobromination conditions were explored with native softwood (milled wood lignin, MWL) and technical (kraft) lignin. These efforts support the use of room-temperature acetobromination as a facile, effective, and universal lignin derivatization method to be employed prior to SEC measurements. PMID:22870925

  20. A simple polymeric model describes cell nuclear mechanical response

    NASA Astrophysics Data System (ADS)

    Banigan, Edward; Stephens, Andrew; Marko, John

    The cell nucleus must continually resist inter- and intracellular mechanical forces, and proper mechanical response is essential to basic cell biological functions as diverse as migration, differentiation, and gene regulation. Experiments probing nuclear mechanics reveal that the nucleus stiffens under strain, leading to two characteristic regimes of force response. This behavior depends sensitively on the intermediate filament protein lamin A, which comprises the outer layer of the nucleus, and the properties of the chromatin interior. To understand these mechanics, we study a simulation model of a polymeric shell encapsulating a semiflexible polymer. This minimalistic model qualitatively captures the typical experimental nuclear force-extension relation and observed nuclear morphologies. Using a Flory-like theory, we explain the simulation results and mathematically estimate the force-extension relation. The model and experiments suggest that chromatin organization is a dominant contributor to nuclear mechanics, while the lamina protects cell nuclei from large deformations.

  1. Describing the Breakbone Fever: IDODEN, an Ontology for Dengue Fever

    PubMed Central

    Mitraka, Elvira; Topalis, Pantelis; Dritsou, Vicky; Dialynas, Emmanuel; Louis, Christos

    2015-01-01

    Background Ontologies represent powerful tools in information technology because they enhance interoperability and facilitate, among other things, the construction of optimized search engines. To address the need to expand the toolbox available for the control and prevention of vector-borne diseases we embarked on the construction of specific ontologies. We present here IDODEN, an ontology that describes dengue fever, one of the globally most important diseases that are transmitted by mosquitoes. Methodology/Principal Findings We constructed IDODEN using open source software, and modeled it on IDOMAL, the malaria ontology developed previously. IDODEN covers all aspects of dengue fever, such as disease biology, epidemiology and clinical features. Moreover, it covers all facets of dengue entomology. IDODEN, which is freely available, can now be used for the annotation of dengue-related data and, in addition to its use for modeling, it can be utilized for the construction of other dedicated IT tools such as decision support systems. Conclusions/Significance The availability of the dengue ontology will enable databases hosting dengue-associated data and decision-support systems for that disease to perform most efficiently and to link their own data to those stored in other independent repositories, in an architecture- and software-independent manner. PMID:25646954

  2. Describing functional diversity of brain regions and brain networks

    PubMed Central

    Anderson, Michael L.; Kinnison, Josh; Pessoa, Luiz

    2013-01-01

    Despite the general acceptance that functional specialization plays an important role in brain function, there is little consensus about its extent in the brain. We sought to advance the understanding of this question by employing a data-driven approach that capitalizes on the existence of large databases of neuroimaging data. We quantified the diversity of activation in brain regions as a way to characterize the degree of functional specialization. To do so, brain activations were classified in terms of task domains, such as vision, attention, and language, which determined a region’s functional fingerprint. We found that the degree of diversity varied considerably across the brain. We also quantified novel properties of regions and of networks that inform our understanding of several task-positive and task-negative networks described in the literature, including defining functional fingerprints for entire networks and measuring their functional assortativity, namely the degree to which they are composed of regions with similar functional fingerprints. Our results demonstrate that some brain networks exhibit strong assortativity, whereas other networks consist of relatively heterogeneous parts. In sum, rather than characterizing the contributions of individual brain regions using task-based functional attributions, we instead quantified their dispositional tendencies, and related those to each region’s affiliative properties in both task-positive and task-negative contexts. PMID:23396162
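    The notion of a functional fingerprint and its diversity can be made concrete with a small sketch. A common way to quantify how "diverse" a region's activation profile is across task domains is a normalized Shannon entropy: 0 for a region engaged by a single domain, 1 for a region engaged equally by all. The fingerprint vectors below are invented for illustration and are not taken from the study's database.

```python
# Hedged illustration: entropy-based diversity of a functional fingerprint.
# A fingerprint is a vector of activation counts per task domain
# (e.g. vision, attention, language, ...); values here are made up.
import math

def diversity(fingerprint):
    """Normalized Shannon entropy of a nonnegative activation profile."""
    total = sum(fingerprint)
    ps = [x / total for x in fingerprint if x > 0]
    h = -sum(p * math.log(p) for p in ps)
    return h / math.log(len(fingerprint))   # divide by maximum entropy

specialized = [9, 1, 0, 0, 0]   # mostly one task domain -> low diversity
uniform = [2, 2, 2, 2, 2]       # engaged by every domain -> diversity 1.0
```

A network-level fingerprint could be formed the same way by summing the fingerprints of its member regions before computing the entropy.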

  3. Describing current and potential markets for alternative-fuel vehicles

    SciTech Connect

    1996-03-26

    Motor vehicles are a major source of greenhouse gases, and the rising numbers of motor vehicles and miles driven could lead to more harmful emissions that may ultimately affect the world's climate. One approach to curtailing such emissions is to replace gasoline with alternative fuels: LPG, compressed natural gas, or alcohol fuels. Beyond greenhouse gases, motor vehicles also emit pollutants harmful to human health, such as ozone and CO, which the Clean Air Act Amendments of 1990 authorized EPA to control by setting National Ambient Air Quality Standards. The Energy Policy Act of 1992 (EPACT) was the first new law to emphasize strengthened energy security and decreased reliance on foreign oil since the oil shortages of the 1970s. EPACT emphasized increasing the number of alternative-fuel vehicles (AFVs) by mandating their incremental increase in use by Federal, state, and alternative-fuel-provider fleets over the next few years. Its goals are far from being met; alternative fuels' share remains trivial, about 0.3%, despite gains. This report describes current and potential markets for AFVs; it begins by assessing the total vehicle stock, then focuses on the current use of AFVs in alternative-fuel-provider fleets and the potential for use of AFVs in US households.

  4. Describing Changes in Undergraduate Students' Preconceptions of Research Activities

    NASA Astrophysics Data System (ADS)

    Cartrette, David P.; Melroe-Lehrman, Bethany M.

    2012-12-01

    Research has shown that students bring naïve scientific conceptions to learning situations, conceptions which are often incongruous with accepted scientific explanations. These preconceptions are frequently determined to be misconceptions; consequently, instructors spend time remedying these beliefs and bringing students' understanding of scientific concepts to acceptable levels. It is reasonable to assume that students also hold preconceptions about the processes of authentic scientific research and its associated activities. This study describes the most commonly held preconceptions of authentic research activities among students with little or no previous research experience. Seventeen undergraduate science majors who participated in a ten-week research program discussed, at various times during the program, their preconceptions of research and how these ideas changed as a result of direct participation in authentic research activities. The preconceptions included the belief that authentic research is a solitary activity which most closely resembles the type of activity associated with laboratory courses in the undergraduate curriculum. Participants' views matured slightly over the research program; they came to understand that authentic research is a detail-oriented activity that is rarely completed successfully alone. These findings and their implications for the teaching and research communities are discussed in the article.

  5. Describing the geographic spread of dengue disease by traveling waves.

    PubMed

    Maidana, Norberto Aníbal; Yang, Hyun Mo

    2008-09-01

    Dengue is a human disease transmitted by the mosquito Aedes aegypti. For this reason, geographical regions infested by this mosquito species are at risk of dengue outbreaks. In this work, we propose a mathematical model to study the spatial dissemination of dengue using a system of partial differential reaction-diffusion equations. With respect to the human and mosquito populations, we take into account their respective subclasses of infected and uninfected individuals. The dynamics of the mosquito population considers only two subpopulations: the winged form (mature female mosquitoes) and an aquatic population (comprising eggs, larvae and pupae). We disregard long-distance movement by transportation facilities, for which reason diffusion is restricted to the winged form. The human population is considered homogeneously distributed in space, in order to describe localized dengue dissemination during a short period of epidemics. The cross-infection is modeled by the law of mass action. A threshold value as a function of the model's parameters is obtained, which determines the rate of dengue dissemination and the risk of dengue outbreaks. Assuming that an area was previously colonized by the mosquitoes, the rate of disease dissemination is determined as a function of the model's parameters by applying traveling wave solutions to the corresponding system of partial differential equations.
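    The traveling-wave mechanism behind such models can be sketched with the simplest reaction-diffusion caricature. The code below is not the authors' multi-compartment system; it integrates a single Fisher-KPP equation, du/dt = D d²u/dx² + r u(1 - u), by explicit finite differences and measures the numerical front speed, which theory predicts approaches 2·sqrt(D·r). All parameter values are illustrative.

```python
# Minimal sketch (not the authors' full dengue model): a Fisher-KPP wave,
# the simplest caricature of an invading infection front.
D, r = 1.0, 1.0               # diffusion coefficient and growth rate (assumed)
nx, dx, dt = 200, 0.5, 0.05   # grid and time step (dt*D/dx**2 = 0.2, stable)
u = [0.0] * nx
for i in range(5):            # seed the "infection" at the left boundary
    u[i] = 1.0

def step(u):
    new = u[:]
    for i in range(1, nx - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
        new[i] = u[i] + dt * (D * lap + r * u[i] * (1 - u[i]))
    new[0], new[-1] = new[1], new[-2]    # no-flux boundaries
    return new

def front_position(u, level=0.5):
    """Leftmost point where u drops below `level` (the wave front)."""
    return next((i * dx for i in range(nx) if u[i] < level), nx * dx)

x_mid = 0.0
for n in range(1, 401):
    u = step(u)
    if n == 200:
        x_mid = front_position(u)
speed = (front_position(u) - x_mid) / (200 * dt)   # approaches 2*sqrt(D*r)
```

The measured speed slightly undershoots the theoretical value of 2 because fronts started from localized initial data converge to the asymptotic speed slowly.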

  6. Describing the essential elements of a professional practice structure.

    PubMed

    Mathews, Sue; Lankshear, Sara

    2003-01-01

    The proliferation of program management, coupled with the introduction of the Regulated Health Professions Act, prompted many healthcare organizations in Ontario to introduce professional practice models. In addition, the Magnet Hospitals research (Kramer and Schmalenberg 1988) identified the existence of a professional practice model as a key element for the recruitment and retention of professional staff. Professional practice models were introduced to address issues of accountability, identity, and overlapping scopes of practice as experienced by healthcare professionals and organizations across the continuum of care. The authors of this paper describe exploratory work done through the Professional Practice Network of Ontario to identify the essential elements of the "ideal" professional practice structure, key areas of challenge, and strategies for adapting these elements into an organization. The paper presents a list of 16 essential elements of an ideal professional practice structure, with further discussion of four key areas consistently identified as areas of challenge. This paper reports not the findings of a formal research study but rather the result of facilitated dialogue among professional practice leaders in Ontario. The information will be of interest to healthcare organizations across the continuum of care and to professional associations and academic institutions, as we all address the challenges of creating a quality work environment that supports and fosters excellence in professional practice.

  7. Jan Evangelista Purkynje (1787-1869): first to describe fingerprints.

    PubMed

    Grzybowski, Andrzej; Pietrzak, Krzysztof

    2015-01-01

    Fingerprints have been used for years as the accepted tool in criminology and for identification. The first system of classification of fingerprints was introduced by Jan Evangelista Purkynje (1787-1869), a Czech physiologist, in 1823. He divided the papillary lines into nine types, based on their geometric arrangement. This work, however, was not recognized internationally for many years. In 1858, Sir William Herschel (1833-1917) registered fingerprints for those signing documents at the Indian magistrate's office in Jungipoor. Henry Faulds (1843-1930) in 1880 proposed using ink for fingerprint determination and people identification, and Francis Galton (1822-1911) collected 8000 fingerprints and developed their classification based on the spirals, loops, and arches. In 1892, Juan Vucetich (1858-1925) created his own fingerprint identification system and proved that a woman was responsible for killing two of her sons. In 1896, a London police officer Edward Henry (1850-1931) expanded on earlier systems of classification and used papillary lines to identify criminals; it was his system that was adopted by the forensic world. The work of Jan Evangelista Purkynje (1787-1869) (Figure 1), who in 1823 was the first to describe in detail fingerprints, is almost forgotten. He also established their classification. The year 2013 marked the 190th anniversary of the publication of his work on this topic. Our contribution is an attempt to introduce the reader to this scientist and his discoveries in the field of fingerprint identification. PMID:25530005

  8. Conceptual hierarchical modeling to describe wetland plant community organization

    USGS Publications Warehouse

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed the procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble the hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined that 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper-level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization and useful management models. © Society of Wetland Scientists 2009.

  9. Folding superfunnel to describe cooperative folding of interacting proteins.

    PubMed

    Smeller, László

    2016-07-01

    This paper proposes a generalization of the well-known folding funnel concept of proteins. In the funnel model the polypeptide chain is treated as an individual object that does not interact with other proteins. Since biological systems are considerably crowded, however, protein-protein interaction is a fundamental feature during the life cycle of proteins. The folding superfunnel proposed here describes the folding process of interacting proteins in various situations. The first example discussed is the folding of a freshly synthesized protein with the aid of chaperones. Another important aspect of protein-protein interactions is the folding of the recently characterized intrinsically disordered proteins, where binding to target proteins plays a crucial role in the completion of the folding process. The third scenario where the folding superfunnel is used is the formation of aggregates from destabilized proteins, an important factor in several conformational diseases. The folding superfunnel constructed here, with minimal assumptions about the interaction potential, explains all three cases mentioned above. Proteins 2016; 84:1009-1016. © 2016 Wiley Periodicals, Inc.

  11. INCAS: an analytical model to describe displacement cascades

    NASA Astrophysics Data System (ADS)

    Jumel, Stéphanie; Claude Van-Duysen, Jean

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised for pure elements; however, it can also be used on dilute alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it determines the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results conform with those obtained from other approaches. As a first application, INCAS was used to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  12. A hybrid model describing ion induced kinetic electron emission

    NASA Astrophysics Data System (ADS)

    Hanke, S.; Duvenbeck, A.; Heuser, C.; Weidtmann, B.; Wucher, A.

    2015-06-01

    We present a model describing kinetic internal and external electron emission from an ion-bombarded metal target. The model is based upon a molecular dynamics treatment of the nuclear degrees of freedom; the electronic system is treated as a quasi-free electron gas characterized by its Fermi energy, electron temperature, and a characteristic attenuation length. In a series of previous works we employed this model, which includes local kinetic excitation as well as the rapid spread of the generated excitation energy, to calculate internal and external electron emission yields within the framework of a Richardson-Dushman-like thermionic emission model. However, this treatment turned out to fail in realistically predicting experimentally measured internal electron yields, mainly because it restricts electronic transport to a diffusive mechanism. Here, we propose a slightly modified approach that additionally incorporates the contribution of hot electrons which are generated in the bulk material and undergo ballistic transport towards the emitting interface.

  13. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurement by improving the determination of meteor velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between measured meteoroid orbits and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore largely depends on the computation of pre-atmospheric velocities. It is thus imperative to find ways to increase the precision of velocity measurements. In this work, we analyze different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting-planes method developed by Ceplecha (1987), the least-squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performance of these techniques, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the geometry of the trajectory on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations suggests that, although the MPF is by far the best method for solving the trajectory and velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy data.
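    Why a simple velocity estimate is biased can be shown in a few lines. The sketch below (not CABERNET code, all numbers invented) fits position along a synthetic meteor trail against time by ordinary least squares: because the meteor decelerates, the straight-line slope systematically underestimates the pre-atmospheric velocity, which is one motivation for fitting a full propagation model as in the MPF approach.

```python
# Hedged illustration: least-squares velocity from synthetic, noise-free
# observations of a decelerating meteor. v0 and decel are invented values.
ts = [i * 0.02 for i in range(20)]                  # 50 fps camera (assumed)
true_v0, decel = 30.0, 5.0                          # km/s and km/s^2
xs = [true_v0 * t - 0.5 * decel * t**2 for t in ts] # position along the trail

n = len(ts)
mt, mx = sum(ts) / n, sum(xs) / n
slope = (sum((t - mt) * (x - mx) for t, x in zip(ts, xs))
         / sum((t - mt) ** 2 for t in ts))
# slope < true_v0: the linear fit absorbs part of the deceleration,
# biasing the inferred pre-atmospheric velocity low.
```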

  14. Effective Temperatures of Selected Main-Sequence Stars with the Most Accurate Parameters

    NASA Astrophysics Data System (ADS)

    Soydugan, F.; Eker, Z.; Soydugan, E.; Bilir, S.; Gökçe, E. Y.; Steer, I.; Tüysüz, M.; Šenyüz, T.; Demircan, O.

    2015-07-01

    In this study we investigate the distributions of the properties of detached double-lined binaries (DBs) in the mass-luminosity, mass-radius, and mass-effective temperature diagrams. We have improved the classical mass-luminosity relation based on the database of DBs by Eker et al. (2014a). Based on the accurate observational data available to us we propose a method for improving the effective temperatures of eclipsing binaries with accurate mass and radius determinations.

  15. Accurate water maser positions from HOPS

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew J.; Purcell, Cormac R.; Longmore, Steven N.; Breen, Shari L.; Green, James A.; Harvey-Smith, Lisa; Jordan, Christopher H.; Macpherson, Christopher

    2014-08-01

    We report on high spatial resolution water maser observations, using the Australia Telescope Compact Array, towards water maser sites previously identified in the H2O southern Galactic Plane Survey (HOPS). Of the 540 masers identified in the single-dish observations of Walsh et al., we detect emission in all but 31 fields. We report on 2790 spectral features (maser spots), with brightnesses ranging from 0.06 to 576 Jy and with velocities ranging from -238.5 to +300.5 km s-1. These spectral features are grouped into 631 maser sites. We have compared the positions of these sites to the literature to associate the sites with astrophysical objects. We identify 433 (69 per cent) with star formation, 121 (19 per cent) with evolved stars and 77 (12 per cent) as unknown. We find that maser sites associated with evolved stars tend to have more maser spots and have smaller angular sizes than those associated with star formation. We present evidence that maser sites associated with evolved stars show an increased likelihood of having a velocity range between 15 and 35 km s-1 compared to other maser sites. Of the 31 non-detections, we conclude they were not detected due to intrinsic variability and confirm previous results showing that such variable masers tend to be weaker and have simpler spectra with fewer peaks.

  16. The historical pathway towards more accurate homogenisation

    NASA Astrophysics Data System (ADS)

    Domonkos, P.; Venema, V.; Auer, I.; Mestre, O.; Brunetti, M.

    2012-03-01

    In recent years increasing effort has been devoted to objectively evaluating the efficiency of homogenisation methods for climate data; an important effort was the blind benchmarking performed in the COST Action HOME (ES0601). The statistical characteristics of the examined series have a significant impact on the measured efficiencies, so it is difficult to obtain an unambiguous picture of the efficiencies relying only on numerical tests. In this study the historical methodological development, with a focus on the homogenisation of surface temperature observations, is presented in order to view the progress from the side of the development of statistical tools. The main stages of this methodological progress, such as fitting optimal step functions when the number of change-points is known (1972), the cutting algorithm (1995), and the Caussinus-Lyazrhi criterion (1997), are recalled, and their effects on the quality improvement of homogenisation are briefly discussed. This analysis of the theoretical properties, together with recently published numerical results, jointly indicates that MASH, PRODIGE, ACMANT and USHCN are the best statistical tools for homogenising climatic time series, since they provide the reconstruction and preservation of true climatic variability in observational time series with the highest reliability. On the other hand, skilled homogenisers may achieve outstanding reliability also with a combination of simple statistical methods such as the Craddock test and visual expert decisions. A few efficiency results of the COST HOME experiments are presented to demonstrate the performance of the best homogenisation methods.
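    The step-function fitting mentioned above reduces, in its simplest single-break form, to choosing the change-point that minimizes the residual sum of squares around segment means. The sketch below is a toy version of that idea, not any of the named packages; the series and the +1.5 shift are synthetic.

```python
# Toy sketch of single-change-point detection for homogenisation:
# pick the break k minimizing SSE(x[:k]) + SSE(x[k:]).
series = [10.0] * 30 + [11.5] * 20        # synthetic inhomogeneity at index 30

def best_break(x):
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(1, len(x)), key=lambda k: sse(x[:k]) + sse(x[k:]))

k = best_break(series)
```

Real homogenisation methods extend this to an unknown number of breaks (e.g. by dynamic programming or cutting algorithms) and use neighbouring reference series rather than a raw record.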

  17. SPECTROPOLARIMETRICALLY ACCURATE MAGNETOHYDROSTATIC SUNSPOT MODEL FOR FORWARD MODELING IN HELIOSEISMOLOGY

    SciTech Connect

    Przybylski, D.; Shelyag, S.; Cally, P. S.

    2015-07-01

    We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave.

  18. Quantitative metrics that describe river deltas and their channel networks

    NASA Astrophysics Data System (ADS)

    Edmonds, Douglas A.; Paola, Chris; Hoyal, David C. J. D.; Sheets, Ben A.

    2011-12-01

    Densely populated river deltas are losing land at an alarming rate and to successfully restore these environments we must understand the details of their morphology. Toward this end we present a set of five metrics that describe delta morphology: (1) the fractal dimension, (2) the distribution of island sizes, (3) the nearest-edge distance, (4) a synthetic distribution of sediment fluxes at the shoreline, and (5) the nourishment area. The nearest-edge distance is the shortest distance to channelized or unchannelized water from a given location on the delta and is analogous to the inverse of drainage density in tributary networks. The nourishment area is the downstream delta area supplied by the sediment coming through a given channel cross section and is analogous to catchment area in tributary networks. As a first step, we apply these metrics to four relatively simple, fluvially dominated delta networks. For all these deltas, the average nearest-edge distances are remarkably constant moving down delta suggesting that the network organizes itself to maintain a consistent distance to the nearest channel. Nourishment area distributions can be predicted from a river mouth bar model of delta growth, and also scale with the width of the channel and with the length of the longest channel, analogous to Hack's law for drainage basins. The four delta channel networks are fractal, but power laws and scale invariance appear to be less pervasive than in tributary networks. Thus, deltas may occupy an advantageous middle ground between complete similarity and complete dissimilarity, where morphologic differences indicate different behavior.
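    Of the five metrics, the nearest-edge distance is the easiest to compute on a rasterized delta: for every land cell, the grid distance to the closest water cell, obtainable with a multi-source breadth-first search seeded from all water cells. The sketch below is an illustration on an invented 1/0 toy raster, not the authors' code.

```python
# Hedged illustration: nearest-edge distance on a toy rasterized delta
# via multi-source BFS (4-connected grid distance). 1 = land, 0 = water.
from collections import deque

delta = [
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],   # a single straight channel down the middle
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1],
]

def nearest_edge_distance(grid):
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:          # water cells are distance 0
                dist[r][c] = 0
                q.append((r, c))
    while q:                              # expand outward one cell at a time
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

dist = nearest_edge_distance(delta)
mean_ned = sum(d for row in dist for d in row) / 25   # delta-wide average
```

The paper's observation that the average nearest-edge distance stays roughly constant moving down-delta corresponds to `mean_ned` being similar when computed over successive down-delta strips.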

  19. Sensitivity analysis approach to multibody systems described by natural coordinates

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2014-03-01

    The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation imposes so many rules for choosing the generalized coordinates that it hinders the automation of modeling. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements, and the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed process of first-order direct sensitivity analysis, and the related solving strategy are provided based on this modeling system. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling but also helpful for reducing the complexity of sensitivity analysis, providing a practical and effective way to obtain sensitivities for the optimization of multibody systems.
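    The validation strategy used above, comparing direct sensitivities against finite differences, can be shown on a system small enough to solve by hand. The toy problem below (x' = -p·x, x(0) = 1, so dx/dp = -t·e^(-pt)) is an invented stand-in for the multibody equations; it only illustrates why close agreement between the two estimates is a meaningful check.

```python
# Hedged illustration: direct (analytic) sensitivity vs. a central
# finite-difference estimate for a toy ODE with known solution.
import math

p, t, h = 2.0, 1.5, 1e-5          # parameter, time, FD step (all invented)
direct = -t * math.exp(-p * t)     # analytic dx/dp at time t
fd = (math.exp(-(p + h) * t) - math.exp(-(p - h) * t)) / (2 * h)
rel_dev = abs(direct - fd) / abs(direct)
# For a correct direct-sensitivity formulation, rel_dev is tiny -- far
# below the 3% maximum deviation reported for the mechanisms above.
```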

  20. Autopathography and depression: describing the 'despair beyond despair'.

    PubMed

    Moran, Stephen T

    2006-01-01

    The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, emphasizes diagnosis and statistically significant commonalities in mental disorders. As stated in the Introduction, "[i]t must be admitted that no definition adequately specifies precise boundaries for the concept of 'mental disorder' " (DSM-IV, 1994, xxi). Further, "[t]he clinician using DSM-IV should ... consider that individuals sharing a diagnosis are likely to be heterogeneous, even in regard to the defining features of the diagnosis, and that boundary cases will be difficult to diagnose in any but a probabilistic fashion" (DSM-IV, 1994, xxii). This article proposes that it may be helpful for clinicians to study narratives of illness which emphasize this heterogeneity over statistically significant symptoms. This paper examines the recorded experiences of unusually articulate sufferers of the disorder classified as Major Depression. Although sharing a diagnosis, Hemingway, Fitzgerald, and Styron demonstrated different understandings of their illness and its symptoms and experienced different resolutions, which may have had something to do with the differing meanings they made of it. I have proposed a word, autopathography, to describe a type of literature in which the author's illness is the primary lens through which the narrative is filtered. This word is an augmentation of an existing word, pathography, which The Oxford English Dictionary, Second Edition, defines as "a) [t]he, or a, description of a disease," and "b) [t]he, or a, study of the life and character of an individual or community as influenced by a disease." The second definition is the one that I find relevant and which I feel may be helpful to clinicians in broadening their understanding of the patient's experience.

  2. Probabilistic models to describe the dynamics of migrating microbial communities.

    PubMed

    Schroeder, Joanna L; Lunn, Mary; Pinto, Ameet J; Raskin, Lutgarde; Sloan, William T

    2015-01-01

    In all but the most sterile environments, bacteria will reside in fluid being transported through conduits and some of these will attach and grow as biofilms on the conduit walls. The concentration and diversity of bacteria in the fluid at the point of delivery will be a mix of those present when the fluid entered the conduit and those that have become entrained into the flow due to seeding from biofilms. Examples include fluids through conduits such as drinking water pipe networks, endotracheal tubes, catheters and ventilation systems. Here we present two probabilistic models to describe changes in the composition of bulk fluid microbial communities as they are transported through a conduit whilst exposed to biofilm communities. The first (discrete) model simulates absolute numbers of individual cells, whereas the other (continuous) model simulates the relative abundance of taxa in the bulk fluid. The discrete model is founded on a birth-death process whereby the community changes one individual at a time and the numbers of cells in the system can vary. The continuous model is a stochastic differential equation derived from the discrete model and can also accommodate changes in the carrying capacity of the bulk fluid. These models provide a novel Lagrangian framework to investigate and predict the dynamics of migrating microbial communities. In this paper we compare the two models, discuss their merits and possible applications, and present simulation results in the context of drinking water distribution systems. Our results provide novel insight into the effects of stochastic dynamics on the composition of non-stationary microbial communities that are exposed to biofilms and provide a new avenue for modelling microbial dynamics in systems where fluids are being transported. PMID:25803866
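    A minimal sketch of the discrete (birth-death) picture described above, not the authors' implementation; the taxa, counts, and immigration probability below are hypothetical:

```python
import random

def birth_death_step(bulk, biofilm_rel, m, rng):
    """One event of the discrete model: a bulk individual dies and is
    replaced either by a biofilm immigrant (probability m) or by the
    offspring of another bulk individual (probability 1 - m), so the
    total community size stays fixed here."""
    taxa = list(bulk)
    dead = rng.choices(taxa, weights=[bulk[t] for t in taxa])[0]
    bulk[dead] -= 1
    if rng.random() < m:  # seeding from the wall biofilm
        born = rng.choices(list(biofilm_rel),
                           weights=list(biofilm_rel.values()))[0]
    else:                 # reproduction within the bulk fluid
        born = rng.choices(taxa, weights=[bulk[t] for t in taxa])[0]
    bulk[born] = bulk.get(born, 0) + 1
    return bulk

rng = random.Random(0)
bulk = {"A": 50, "B": 50}          # bulk-fluid community (cell counts)
biofilm = {"C": 0.8, "D": 0.2}     # biofilm community (relative abundances)
for _ in range(1000):
    birth_death_step(bulk, biofilm, m=0.1, rng=rng)
```

    With immigration switched on, biofilm taxa gradually entrain into the bulk community while the total cell count stays constant.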

  4. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...

  5. A unique, accurate LWIR optics measurement system

    NASA Astrophysics Data System (ADS)

    Fantone, Stephen D.; Orband, Daniel G.

    2011-05-01

    A compact, low-cost LWIR test station has been developed that provides real-time MTF testing of IR optical systems and EO imaging systems. The test station is intended to be operated by a technician and can be used to measure the focal length, blur spot size, distortion, and other metrics of system performance. The challenges and tradeoffs incorporated into this instrumentation will be presented. The test station measures an IR lens or optical system's first-order quantities (focal length, back focal length) as well as on- and off-axis imaging performance (e.g., MTF, resolution, spot size) under test conditions that simulate actual use. Also described is the method of attaining the needed accuracies so that derived calculations like focal length (EFL = image shift/tan(theta)) can be performed to the requisite accuracy. The station incorporates a patented video capture technology and measures MTF and blur characteristics using newly available low-cost LWIR cameras. This allows real-time determination of optical system performance, enabling faster measurements, higher throughput and lower-cost results than scanning systems. Multiple spectral filters are also accommodated within the test station, which facilitates performance evaluation under various spectral conditions.
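    The focal-length relation quoted above can be illustrated with a small worked example; the shift and angle values are hypothetical measurements:

```python
import math

def efl_from_shift(image_shift_mm, field_angle_deg):
    """EFL = image shift / tan(theta): the lateral image displacement
    produced by tilting the collimated target through a known field
    angle, divided by the tangent of that angle."""
    return image_shift_mm / math.tan(math.radians(field_angle_deg))

# Hypothetical measurement: a 4.37 mm image shift at a 5 degree field angle
efl = efl_from_shift(4.37, 5.0)   # approximately a 50 mm lens
```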

  6. VDA provides accurate liquid fallout data

    SciTech Connect

    Clark, D.K.

    1996-03-01

    This article describes how a video droplet analyzer helped engineers determine that a liquid collection system would be necessary for conversion to wet stack operation. Engineers from the Los Angeles Department of Water and Power (LADWP) determined that the Intermountain Generating Station (IGS) in Delta, Utah, could save $26.8 million in fuel and maintenance costs by converting the plant from stack gas reheat (SGR) to wet stack operation. The SGR system was absorbing approximately 1% of the plant's output, and excessive SGR-bundle corrosion led to frequent bundle replacements, reliability concerns and iron carryover into the boiler feed water. In addition to concerns about modifications to the operating permit and corrosion of downstream equipment, LADWP engineers realized that converting to wet stack operation would increase the potential for liquid fallout in the stack/duct system. This could have meant an additional $1 million in conversion costs depending on the amount of stack modification necessary to control liquid fallout. Before moving ahead with the project, LADWP engineers used a video droplet analyzer to quantify liquid fallout.

  7. Statistics of topography : multifractal approach to describe planetary topography

    NASA Astrophysics Data System (ADS)

    Landais, Francois; Schmidt, Frédéric; Lovejoy, Shaun

    2016-04-01

    In the last decades, a huge amount of topographic data has been obtained by several techniques (laser and radar altimetry, DTM…) for different bodies in the solar system. In each case, topographic fields exhibit an extremely high variability with details at each scale, from millimeters to thousands of kilometers. In our study, we investigate the statistical properties of the topography. Our statistical approach is motivated by the well-known scaling behavior of topography that has been widely studied in the past. Indeed, scaling laws are strongly present in geophysical fields and can be studied using the fractal formalism. More precisely, we expect multifractal behavior in global topographic fields. This behavior reflects the high variability and intermittency observed in topographic fields that cannot be generated by simple scaling models. In the multifractal formalism, each statistical moment exhibits a different scaling law characterized by a function called the moment scaling function. Previous studies were conducted at regional scale to demonstrate that topography presents multifractal statistics (Gagnon et al., 2006, NPG). We have obtained similar results on Mars (Landais et al. 2015) and more recently on different bodies in the solar system including the Moon, Venus and Mercury. We present the results of different multifractal approaches performed on a global and regional basis and compare the fractal parameters from one body to another.
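    The moment scaling function mentioned above can be estimated from empirical structure functions. A minimal sketch on a synthetic Brownian profile (not planetary data); a monofractal field gives a linear scaling function, a multifractal a concave one:

```python
import numpy as np

def moment_scaling(h, qs, lags):
    """Structure functions S_q(l) = <|h(x+l) - h(x)|^q> and their
    log-log slopes xi(q), the moment scaling function."""
    xi = []
    for q in qs:
        S = [np.mean(np.abs(h[l:] - h[:-l]) ** q) for l in lags]
        xi.append(np.polyfit(np.log(lags), np.log(S), 1)[0])
    return np.array(xi)

# Synthetic monofractal test case: Brownian motion has xi(q) ~ q/2
rng = np.random.default_rng(0)
h = np.cumsum(rng.standard_normal(1 << 15))
xi = moment_scaling(h, qs=[1, 2, 3], lags=[2, 4, 8, 16, 32, 64])
```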

  8. Optimality approaches to describe characteristic fluvial patterns on landscapes

    PubMed Central

    Paik, Kyungrock; Kumar, Praveen

    2010-01-01

    Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification beyond the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257

  9. Describing Directional Cell Migration with a Characteristic Directionality Time

    PubMed Central

    Loosley, Alex J.; O’Brien, Xian M.; Reichner, Jonathan S.; Tang, Jay X.

    2015-01-01

    Many cell types can bias their direction of locomotion by coupling to external cues. Characteristics such as how fast a cell migrates and the directedness of its migration path can be quantified to provide metrics that determine which biochemical and biomechanical factors affect directional cell migration, and by how much. To be useful, these metrics must be reproducible from one experimental setting to another. However, most are not reproducible because their numerical values depend on technical parameters like sampling interval and measurement error. To address the need for a reproducible metric, we analytically derive a metric called directionality time, the minimum observation time required to identify motion as directionally biased. We show that the corresponding fit function is applicable to a variety of ergodic, directionally biased motions. A motion is ergodic when the underlying dynamical properties such as speed or directional bias do not change over time. Measuring the directionality of nonergodic motion is less straightforward but we also show how this class of motion can be analyzed. Simulations are used to show the robustness of directionality time measurements and its decoupling from measurement errors. As a practical example, we demonstrate the measurement of directionality time, step-by-step, on noisy, nonergodic trajectories of chemotactic neutrophils. Because of its inherent generality, directionality time ought to be useful for characterizing a broad range of motions including intracellular transport, cell motility, and animal migration. PMID:25992908

  10. Expressive writing difficulties in children described as exhibiting ADHD symptoms.

    PubMed

    Re, Anna Maria; Pedron, Martina; Cornoldi, Cesare

    2007-01-01

    Three groups of children of different ages who were considered by their teachers as showing symptoms of attention-deficit/hyperactivity disorder (ADHD) and matched controls were tested in a series of expressive writing tasks derived from a standardized writing test. In the first study, 24 sixth- and seventh-grade children with ADHD symptoms wrote a description of an image. The ADHD group's expressive writing was worse than that of the control group and associated with a higher number of errors, mainly concerning accents and geminates. The second study showed the generality of the effect by testing younger groups of children with ADHD symptoms and controls with another description task in which a verbal description was substituted for the picture stimulus. The third study extended the previous observations with another type of writing task, the request to write a narrative text. In all three studies, children with ADHD symptoms scored lower than controls on four qualitative parameters (adequacy, structure, grammar, and lexicon), produced shorter texts, and made more errors. These studies show that children with ADHD symptoms also have school difficulties in writing, both in spelling and expression, and that these difficulties extend across different tasks and ages.

  11. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    NASA Astrophysics Data System (ADS)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not always be sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization are explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
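    A toy illustration of the event-driven idea (not the authors' DES framework): each cell of a 1-D diffusion problem advances on its own locally stable timestep, scheduled through a priority queue, with pairwise flux exchange keeping the update conservative.

```python
import heapq

def des_diffusion(u, D, dx, t_end):
    """Asynchronous explicit diffusion on a periodic 1-D grid. Each
    cell i fires at its own local interval dt_i ~ dx^2 / D_i, so
    active (high-D) cells update often and quiet cells rarely."""
    n = len(u)
    t_last = [0.0] * n
    local_dt = [0.2 * dx * dx / D[i] for i in range(n)]
    heap = [(local_dt[i], i) for i in range(n)]
    heapq.heapify(heap)
    while heap[0][0] < t_end:
        t, i = heapq.heappop(heap)
        dt = t - t_last[i]
        for j in ((i - 1) % n, (i + 1) % n):
            # harmonic-mean interface diffusivity keeps the local step
            # stable; the antisymmetric exchange conserves total mass
            d_ij = 2.0 * D[i] * D[j] / (D[i] + D[j])
            flux = d_ij * (u[j] - u[i]) / (dx * dx) * dt
            u[i] += flux
            u[j] -= flux
        t_last[i] = t
        heapq.heappush(heap, (t + local_dt[i], i))
    return u

u = [0.0] * 8
u[0] = 8.0                                   # initial spike of material
D = [0.5, 1.0, 2.0, 1.0, 0.5, 1.0, 2.0, 1.0]  # hypothetical diffusivities
des_diffusion(u, D, dx=1.0, t_end=1.0)
```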

  12. Noise reduction for modal parameters estimation using algorithm of solving partially described inverse singular value problem

    NASA Astrophysics Data System (ADS)

    Bao, Xingxian; Cao, Aixia; Zhang, Jing

    2016-07-01

    Modal parameters estimation plays an important role in structural health monitoring. Accurately estimating the modal parameters of structures becomes more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm that solves the partially described inverse singular value problem (PDISVP), combined with the complex exponential (CE) method, to estimate the modal parameters. The PDISVP solving method reconstructs an L2-norm optimized (filtered) data matrix from the measured (noisy) data matrix, when the prescribed data constraints are one or several sets of singular triplets of the matrix. The measured data matrix is Hankel structured, and is constructed from the measured impulse response function (IRF). The reconstructed matrix must maintain the Hankel structure and be lowered in rank as well. Once the filtered IRF is obtained, the CE method can be applied to extract the modal parameters. Two physical experiments, a steel cantilever beam with 10 accelerometers mounted and a steel plate with 30 accelerometers mounted, each excited by an impulsive load, are investigated to test the applicability of the proposed scheme. In addition, the consistency diagram is proposed to examine the agreement among the modal parameters estimated from the different accelerometers. Results indicate that the PDISVP-CE method can significantly remove noise from measured signals and accurately estimate the modal frequencies and damping ratios.
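    The core idea, recovering a low-rank, Hankel-structured matrix from noisy impulse-response data, can be sketched with a plain truncated-SVD filter. This is a simplified stand-in for the PDISVP algorithm, using a hypothetical decaying-sinusoid IRF:

```python
import numpy as np

def hankel_denoise(y, rank, n_rows):
    """Low-rank Hankel filtering: build a Hankel matrix from the
    signal, truncate its SVD to the prescribed rank, then restore the
    Hankel structure by averaging anti-diagonals."""
    n = len(y)
    H = np.array([y[i:i + n - n_rows + 1] for i in range(n_rows)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(n_rows):
        for j in range(H.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt

# Hypothetical IRF: a decaying sinusoid (rank-2 Hankel) plus noise
rng = np.random.default_rng(1)
t = np.arange(200) * 0.01
clean = np.exp(-1.5 * t) * np.sin(2 * np.pi * 8 * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)
filtered = hankel_denoise(noisy, rank=2, n_rows=100)
```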

  13. Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure

    PubMed Central

    Skvortsova, Elena B.; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between the biologically active soil root zone and the atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore sizes, pore-size distributions and thin-section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin sections (2.21×2.21 cm²) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification
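    The first of the three descriptors, the two-point probability function, is straightforward to compute for a binary image. A minimal sketch on a synthetic, uncorrelated microstructure (real soil images would show spatial correlation):

```python
import numpy as np

def two_point_probability(img, max_lag):
    """Two-point probability S2(r): probability that two points
    separated by lag r (here along x, with periodic wrap) both fall
    in the pore phase (img == 1)."""
    img = img.astype(float)
    return np.array([np.mean(img * np.roll(img, r, axis=1))
                     for r in range(max_lag + 1)])

# Hypothetical binary microstructure: 30% pore phase, no correlation
rng = np.random.default_rng(2)
img = (rng.random((64, 64)) < 0.3).astype(int)
S2 = two_point_probability(img, max_lag=8)
```

    By construction S2(0) equals the porosity, and for an uncorrelated medium S2(r) decays to the porosity squared.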

  14. Scattering and diffraction described using the momentum representation.

    PubMed

    Wennerström, Håkan

    2014-03-01

    We present a unified analysis of the scattering and diffraction of neutrons and photons using the momentum representation in a full quantum description. The scattering event is consistently seen as a transfer of momentum between the target and the probing particles. For an elastic scattering process the observed scattering pattern primarily provides information on the momentum distribution of the particles in the target that cause the scattering. Structural information then follows from the Fourier transform relation between momentum and positional state functions. This description is common to the scattering of neutrons, X-ray photons and photons of light. In the quantum description of the interaction between light and the electrons of the target, the scattering of X-rays is dominated by the first-order contribution from the vector potential squared. The interaction with the electron is local, and there is a close analogy, evident from the explicit quantitative expressions, with the neutron scattering case, where the nucleus-neutron interaction is fully local from a molecular perspective. For light scattering, on the other hand, the dominant contribution to the scattering comes from a second-order term linear in the vector potential. Thus the scattering of light involves correlations between electrons at different positions, giving a conceptual explanation of the qualitative difference between the scattering of high- and low-energy photons. However, at energies close to resonance conditions the scattering of high-energy photons is also affected by the second-order term, which results in so-called anomalous X-ray scattering/diffraction. It is also shown that, using the momentum representation, the phenomenon of diffraction is a direct consequence of the fact that for a system with periodic symmetry like a crystal the momentum distribution is quantized, which follows from Bloch's theorem. The momentum transfer to a probing particle is then also quantized resulting in a

  15. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    PubMed Central

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we
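    A toy illustration of the Boolean approach: a minimal two-gene negative feedback loop with an optional light input, not one of the paper's clock circuits.

```python
def step(state, light=False):
    """Synchronous Boolean update: gene A is repressed by B (but can
    be forced on by the light input), and B is activated by A, giving
    a delayed negative feedback loop."""
    a, b = state
    return (light or (not b), a)

# Free-running: the two-gene loop cycles through 4 distinct states
states = [(True, False)]
for _ in range(8):
    states.append(step(states[-1]))
```

    Holding light=True pins A on and collapses the oscillation into a fixed point, a crude analogue of entrainment by a sustained light stimulus.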

  16. Identifying and Describing Tutor Archetypes: The Pragmatist, the Architect, and the Surveyor

    ERIC Educational Resources Information Center

    Harootunian, Jeff A.; Quinn, Robert J.

    2008-01-01

    In this article, the authors identify and anecdotally describe three tutor archetypes: the pragmatist, the architect, and the surveyor. These descriptions, based on observations of remedial mathematics tutors at a land-grant university, shed light on a variety of philosophical beliefs regarding and pedagogical approaches to tutoring. An analysis…

  17. Can an Ising-like cluster expansion describe atomic relaxations in alloys?

    NASA Astrophysics Data System (ADS)

    Zunger, Alex; Wolverton, C.

    1996-03-01

    Ising-like lattice models are often described as "fixed lattice" models, incapable of describing the effects of structural relaxation. However, we have recently demonstrated^1 the ability of generalized k-space^2 Ising-like cluster expansions to describe the energetics and thermodynamics associated with large atomic displacements in alloys. Although the expansion is constructed only from the energies of a few (small-unit-cell) ordered structures, it provides accurate predictions of the atomically-relaxed energies of random, ordered, or partially ordered alloys, as compared with direct, large-scale (~1000 atom) energy-minimizing simulations. Moreover, unlike molecular dynamics, here relaxed energies are obtained without having to compute relaxed geometries. Combination of the cluster expansion with Monte Carlo calculations is shown to provide a far more efficient means for calculating thermodynamic properties than explicit molecular dynamics or other structural minimization methods. [1] C. Wolverton and A. Zunger, Phys. Rev. Lett. 75, 3162 (1995). [2] D. B. Laks, L. G. Ferreira, S. Froyen, and A. Zunger, Phys. Rev. B 46, 12587 (1992). Supported by BES/OER/DMS under contract DE-AC36-83CH10093.
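    The fitting step behind such an expansion can be sketched in miniature: effective cluster interactions are fit by least squares to the energies of a few small ordered configurations, then used to predict any other configuration. This is a synthetic 1-D toy with point and nearest-neighbour pair terms, not the k-space formalism of the paper:

```python
import numpy as np

def correlations(s):
    """Constant, point, and nearest-neighbour pair correlations of a
    periodic spin chain (spins are +1/-1)."""
    s = np.asarray(s, dtype=float)
    return np.array([1.0, s.mean(), np.mean(s * np.roll(s, 1))])

# Hypothetical "ab initio" energies generated from known interactions
J_true = np.array([0.3, -0.1, 0.25])
ordered = [np.ones(8), -np.ones(8),
           np.tile([1, -1], 4), np.tile([1, 1, -1, -1], 2)]
X = np.array([correlations(s) for s in ordered])
E = X @ J_true

# Least-squares fit of the effective cluster interactions, then a
# prediction for a configuration not in the training set
J_fit, *_ = np.linalg.lstsq(X, E, rcond=None)
s_random = np.array([1, -1, -1, 1, 1, 1, -1, 1])
E_pred = correlations(s_random) @ J_fit
```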

  18. Finite volume approach for the instationary Cosserat rod model describing the spinning of viscous jets

    NASA Astrophysics Data System (ADS)

    Arne, Walter; Marheineke, Nicole; Meister, Andreas; Schiessl, Stefan; Wegener, Raimund

    2015-08-01

    The spinning of slender viscous jets can be asymptotically described by one-dimensional models that consist of systems of partial and ordinary differential equations. Whereas well-established string models only possess solutions for certain choices of parameters and configurations, the more sophisticated rod model is not limited by restrictions. It can be considered as an ɛ-regularized string model, but containing the slenderness ratio ɛ in the equations complicates its numerical treatment. We develop numerical schemes for fixed or enlarging (time-dependent) domains, using a finite volume approach in space with mixed central, up- and down-winded differences and stiffly accurate Radau methods for the time integration. For the first time, results of instationary simulations for a fixed or growing jet in a rotational spinning process are presented for arbitrary parameter ranges.

  19. ATOMIC AND MOLECULAR PHYSICS: Splitting of Spectra in Anharmonic Oscillators Described by Kratzer Potential Function

    NASA Astrophysics Data System (ADS)

    Petreska, Irina; Sandev, Trifce; Ivanovski, Gjorgji; Pejov, Ljupco

    2010-07-01

    A perturbation theory model that describes splitting of the spectra of highly symmetrical molecular species in an electrostatic field is proposed. An anharmonic model of a two-dimensional oscillator with a Kratzer potential energy function is used to model the molecular species and to represent the unperturbed system. A selection rule for the radial quantum number of the oscillator is derived. The eigenfunctions of a two-dimensional anharmonic oscillator in cylindrical coordinates are used to calculate the matrix elements representing the probabilities of energy transitions in the dipole approximation. Several forms of perturbation operators are proposed to model the interaction between the polyatomic molecular species and an electrostatic field. It is found that the degeneracy is removed in the presence of the electric field and spectral splitting occurs. The anharmonic approximation for the unperturbed system is a more accurate and reliable representation of a real polyatomic molecular species.

  20. Fast and accurate generation of ab initio quality atomic charges using nonparametric statistical regression.

    PubMed

    Rai, Brajesh K; Bakken, Gregory A

    2013-07-15

    We introduce a class of partial atomic charge assignment methods that provides an ab initio quality description of the electrostatics of bioorganic molecules. The method uses a set of models that neither have a fixed functional form nor require a fixed set of parameters, and therefore are capable of capturing the complexities of the charge distribution in great detail. Random Forest regression is used to build separate charge models for the elements H, C, N, O, F, S, and Cl, using training data consisting of partial charges along with a description of their surrounding chemical environments; training set charges are generated by fitting to the b3lyp/6-31G* electrostatic potential (ESP) and are subsequently refined to improve consistency and transferability of the charge assignments. Using a set of 210 neutral, small organic molecules, the absolute hydration free energy calculated using these charges in conjunction with the Generalized Born solvation model shows a low mean unsigned error, close to 1 kcal/mol, from the experimental data. Using another large and independent test set of chemically diverse organic molecules, the method is shown to accurately reproduce charge-dependent observables (ESP and dipole moment) from ab initio calculations. The method presented here automatically provides an estimate of potential errors in the charge assignment, enabling systematic improvement of these models using additional data. This work has implications not only for the future development of charge models but also for developing methods to describe many other chemical properties that require an accurate representation of the electronic structure of the system.
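    The flavor of such nonparametric charge models can be sketched with a simple k-nearest-neighbour regressor standing in for the Random Forest; all descriptors and charges below are synthetic, and the neighbour spread plays the role of the paper's per-atom error estimate:

```python
import numpy as np

def knn_charge(descriptor, train_X, train_q, k=3):
    """Nonparametric charge assignment: predict an atom's partial
    charge from the k most similar training environments, and report
    the neighbour spread as a crude error estimate."""
    d = np.linalg.norm(train_X - descriptor, axis=1)
    idx = np.argsort(d)[:k]
    return train_q[idx].mean(), train_q[idx].std()

# Hypothetical training data: 2-d environment descriptors -> charges
rng = np.random.default_rng(3)
train_X = rng.random((200, 2))
train_q = -0.5 + train_X[:, 0]     # charge depends on descriptor 0 only
q, err = knn_charge(np.array([0.5, 0.5]), train_X, train_q)
```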

  1. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the NYU Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
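    A hedged sketch of the bookkeeping involved: the flat-LCDM distance modulus plus a color-based K-correction. The quadratic coefficients and their linear-in-z scaling are placeholders for illustration, not the paper's published parameter tables:

```python
import math

def distance_modulus(z, h0=70.0, om=0.3):
    """Distance modulus in a flat LCDM cosmology, via midpoint
    integration of the inverse Hubble function."""
    c = 299792.458                    # speed of light, km/s
    n = 1000
    dz = z / n
    integ = sum(dz / math.sqrt(om * (1 + (i + 0.5) * dz) ** 3 + 1 - om)
                for i in range(n))
    d_l = (1 + z) * (c / h0) * integ  # luminosity distance, Mpc
    return 5 * math.log10(d_l * 1e5)  # 1e5 converts Mpc to units of 10 pc

def absolute_magnitude(m, z, color, coeffs):
    """M = m - DM(z) - K(z, color), with K quadratic in one observed
    color as in the paper; placeholder coefficient form."""
    a, b, c2 = coeffs
    k = z * (a + b * color + c2 * color ** 2)
    return m - distance_modulus(z) - k
```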

  2. The Laboratory Parenting Assessment Battery: Development and Preliminary Validation of an Observational Parenting Rating System

    ERIC Educational Resources Information Center

    Wilson, Sylia; Durbin, C. Emily

    2012-01-01

    Investigations of contributors to and consequences of the parent-child relationship require accurate assessment of the nature and quality of parenting. The present study describes the development and psychometric evaluation of the Laboratory Parenting Assessment Battery (Lab-PAB), an observational rating system that assesses parenting behaviors…

  3. Automatic classification and accurate size measurement of blank mask defects

    NASA Astrophysics Data System (ADS)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. Blank mask defect impact analysis depends directly on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge required makes the manual defect review process arduous and prone to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CD-SEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT-based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  4. Aperture taper determination for the half-scale accurate antenna reflector

    NASA Technical Reports Server (NTRS)

    Lambert, Kevin M.

    1990-01-01

    A simulation is described of a proposed microwave reflectance measurement in which the half scale reflector is used in a compact range type of application. The simulation is used to determine an acceptable aperture taper for the reflector which will allow for accurate measurements. Information on the taper is used in the design of a feed for the reflector.

  5. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    ERIC Educational Resources Information Center

    Deliyski, Dimitar D.; Hillman, Robert E.; Mehta, Daryush D.

    2015-01-01

    Purpose: The authors discuss the rationale behind the term "laryngeal high-speed videoendoscopy" to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method: Commentary on the advantages of using accurate and consistent terminology in the field of voice research is…

  6. A technique for managing and accurate registration of periimplant soft tissues.

    PubMed

    Ntounis, Athanasios; Petropoulou, Aikaterini

    2010-10-01

    This article describes an indirect impression technique that accurately captures the soft tissue contours around an implant-supported provisional restoration. Customized impression copings are used to transfer the soft tissue architecture created by the interim prosthesis. The definitive restoration is shaped like the provisional restoration, maintaining the emergence profile and optimizing esthetics.

  7. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    A new time accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M less than or equal to 10(exp -10) to supersonic speeds.

  8. Detailed observations of the source of terrestrial narrowband electromagnetic radiation

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.

    1982-01-01

    Detailed observations are presented of a region near the terrestrial plasmapause where narrowband electromagnetic radiation (previously called escaping nonthermal continuum radiation) is being generated. These observations show a direct correspondence between the narrowband radio emissions and electron cyclotron harmonic waves near the upper hybrid resonance frequency. In addition, electromagnetic radiation propagating in the Z-mode is observed in the source region which provides an extremely accurate determination of the electron plasma frequency and, hence, density profile of the source region. The data strongly suggest that electrostatic waves and not Cerenkov radiation are the source of the banded radio emissions and define the coupling which must be described by any viable theory.

  9. AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz

    SciTech Connect

    Perley, R. A.; Butler, B. J. E-mail: BButler@nrao.edu

    2013-02-15

    We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ≈5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
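Such calibrator scales are conventionally expressed as a polynomial in the logarithm of the frequency; a minimal sketch of evaluating one, with placeholder coefficients rather than the published values for any source:

```python
import math

# log10(S/Jy) = a0 + a1*log10(nu/GHz) + a2*log10(nu/GHz)**2 + ...
# Coefficients below are illustrative placeholders only.
def flux_density(nu_ghz, coeffs):
    x = math.log10(nu_ghz)
    log_s = sum(a * x ** i for i, a in enumerate(coeffs))
    return 10 ** log_s  # Jy

s = flux_density(4.885, (1.25, -0.46, -0.17))
print(round(s, 3))
```

The per-epoch coefficient sets mentioned for the variable sources would simply be separate tuples of this form, one per observation date.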

  10. Very Fast and Accurate Azimuth Disambiguation of Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Rudenko, G. V.; Anfinogentov, S. A.

    2014-05-01

    We present a method for fast and accurate azimuth disambiguation of vector magnetogram data regardless of the location of the analyzed region on the solar disk. The direction of the transverse field is determined with the principle of minimum deviation of the field from the reference (potential) field. The new disambiguation (NDA) code is examined on the well-known models of Metcalf et al. ( Solar Phys. 237, 267, 2006) and Leka et al. ( Solar Phys. 260, 83, 2009), and on an artificial model based on the observed magnetic field of AR 10930 (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We compare Hinode/SOT-SP vector magnetograms of AR 10930 disambiguated with three codes: the NDA code, the nonpotential magnetic-field calculation (NPFC: Georgoulis, Astrophys. J. Lett. 629, L69, 2005), and the spherical minimum-energy method (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We then illustrate the performance of NDA on SDO/HMI full-disk magnetic-field observations. We show that our new algorithm is more than four times faster than the fastest algorithm that provides the disambiguation with a satisfactory accuracy (NPFC). At the same time, its accuracy is similar to that of the minimum-energy method (a very slow algorithm). In contrast to other codes, the NDA code maintains high accuracy when the region to be analyzed is very close to the limb.

  11. Accurate Delayed Matching-to-Sample Responding without Rehearsal: An Unintentional Demonstration with Children.

    PubMed

    Ratkos, Thom; Frieder, Jessica E; Poling, Alan

    2016-06-01

    Research on joint control has focused on mediational responses, in which simultaneous stimulus control from two sources leads to the emission of a single response, such as choosing a comparison stimulus in delayed matching-to-sample. Most recent studies of joint control have examined the role of verbal mediators (i.e., rehearsal) in evoking accurate performance, and they suggest that mediation is a necessity for accurate delayed matching-to-sample responding. We designed an experiment to establish covert rehearsal responses in young children. Before participants were taught such responses, however, we observed that they responded accurately at delays of 15 and 30 s without overt rehearsal. These findings suggest that in some cases, rehearsal is not necessary for accurate responding in such tasks. PMID:27606223

  12. Comparative evaluation of mathematical functions to describe growth and efficiency of phosphorus utilization in growing pigs.

    PubMed

    Kebreab, E; Schulin-Zeuthen, M; Lopez, S; Soler, J; Dias, R S; de Lange, C F M; France, J

    2007-10-01

    Success of pig production depends on maximizing return over feed costs and addressing potential nutrient pollution to the environment. Mathematical modeling has been used to describe many important aspects of the inputs and outputs of pork production. This study was undertaken to compare 4 mathematical functions for the best fit in describing specific data sets on pig growth and, in a separate experiment, to compare these 4 functions for describing P utilization for growth. Two data sets with growth data were used to conduct the growth analysis and another data set was used for the P efficiency analysis. All data sets were constructed from independent trials that measured BW, age, and intake. Four growth functions representing diminishing returns (monomolecular), sigmoidal with a fixed point of inflection (Gompertz), and sigmoidal with a variable point of inflection (Richards and von Bertalanffy) were used. Meta-analysis of the data was conducted to identify the most appropriate functions for growth and P utilization. Based on Bayesian information criteria, the Richards equation described the BW vs. age data best. The additional parameter of the Richards equation was necessary because the data required a lower point of inflection (138 d) than the Gompertz, with its fixed point of inflection at 1/e times the final BW (189 d), could accommodate. Lack of flexibility in the Gompertz equation was a limitation to accurate prediction. The monomolecular equation was best at determining efficiencies of P utilization for BW gain compared with the sigmoidal functions. The parameter estimate for the rate constant in all functions decreased as available P intake increased. Average efficiencies during different stages of growth were calculated and offer insight into targeting stages where high feed (nutrient) input is required and when adjustments are needed to accommodate the loss of efficiency and the reduction of potential pollution problems.
It is recommended that the Richards

  13. Observation of the Earth's nutation by the VLBI: how accurate is the geophysical signal

    NASA Astrophysics Data System (ADS)

    Gattano, César; Lambert, Sébastien B.; Bizouard, Christian

    2016-09-01

    We compare nutation time series determined by several International VLBI Service for Geodesy and Astrometry (IVS) analysis centers. These series were made available through the International Earth Rotation and Reference Systems Service (IERS). We adjust the amplitudes of the main nutations, including the free motion associated with the free core nutation (FCN). Then, we discuss the results in terms of the physics of the Earth's interior. We find consistent FCN signals in all of the time series, and we provide corrections to the IAU 2000A series for a number of nutation terms with realistic errors. It appears that the analysis configuration or the software packages used by each analysis center introduce an error comparable to the amplitude of the prominent corrections. We show that the inconsistencies between series have significant consequences for our understanding of the Earth's deep interior, especially for the free inner core resonance: they induce an uncertainty of about 0.5 day on the FCN period, and of more than 1000 days on the free inner core nutation (FICN) period, comparable to the estimated period itself. Though the FCN parameters are not strongly affected, a 100% error shows up for the FICN parameters and precludes drawing geophysical conclusions.

  14. Accurate deterministic solutions for the classic Boltzmann shock profile

    NASA Astrophysics Data System (ADS)

    Yue, Yubei

    The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  15. Challenges in accurate quantitation of lysophosphatidic acids in human biofluids

    PubMed Central

    Onorato, Joelle M.; Shipkova, Petia; Minnich, Anne; Aubry, Anne-Françoise; Easter, John; Tymiak, Adrienne

    2014-01-01

    Lysophosphatidic acids (LPAs) are biologically active signaling molecules involved in the regulation of many cellular processes and have been implicated as potential mediators of fibroblast recruitment to the pulmonary airspace, pointing to possible involvement of LPA in the pathology of pulmonary fibrosis. LPAs have been measured in various biological matrices and many challenges involved with their analyses have been documented. However, little published information is available describing LPA levels in human bronchoalveolar lavage fluid (BALF). We therefore conducted detailed investigations into the effects of extensive sample handling and sample preparation conditions on LPA levels in human BALF. Further, targeted lipid profiling of human BALF and plasma identified the most abundant lysophospholipids likely to interfere with LPA measurements. We present the findings from these investigations, highlighting the importance of well-controlled sample handling for the accurate quantitation of LPA. Further, we show that chromatographic separation of individual LPA species from their corresponding lysophospholipid species is critical to avoid reporting artificially elevated levels. The optimized sample preparation and LC/MS/MS method was qualified using a stable isotope-labeled LPA as a surrogate calibrant and used to determine LPA levels in human BALF and plasma from a Phase 0 clinical study comparing idiopathic pulmonary fibrosis patients to healthy controls. PMID:24872406

  16. Slim hole MWD tool accurately measures downhole annular pressure

    SciTech Connect

    Burban, B.; Delahaye, T.

    1994-02-14

    Measurement-while-drilling of downhole pressure accurately determines annular pressure losses from circulation and drillstring rotation and helps monitor swab and surge pressures during tripping. In early 1993, two slim-hole wells (3.4 in. and 3 in. diameter) were drilled with continuous real-time electromagnetic wave transmission of downhole temperature and annular pressure. The data were obtained during all stages of the drilling operation and proved useful for operations personnel. The use of real-time measurements demonstrated the characteristic hydraulic effects of pressure surges induced by drillstring rotation in the small slim-hole annulus under field conditions. The interest in this information is not restricted to the slim-hole geometry. Monitoring or estimating downhole pressure is a key element for drilling operations. Except in special cases, no real-time measurements of downhole annular pressure during drilling and tripping have been used on an operational basis. The hydraulic effects are significant in conventional-geometry wells (3 1/2-in. drill pipe in a 6-in. hole). This paper describes the tool and the results from the field test.

  17. Accurate transition rates for intercombination lines of singly ionized nitrogen

    SciTech Connect

    Tayal, S. S.

    2011-01-15

    The transition energies and rates for the 2s²2p² ³P₁,₂ - 2s2p³ ⁵S₂° and 2s²2p3s - 2s²2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to adequately describe the term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p³ ¹,³P₁° and 2s²2p3s ¹,³P₁° levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and the two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of the present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fractions, and lifetimes are compared with previous calculations and experiments.

  18. Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.

    PubMed

    Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong

    2015-09-01

    Accuracy, validity, and the lack of information relating the positions of dental roots and jaw are key problems in tooth arrangement technology. This paper describes a newly developed virtual, personalized, and accurate tooth arrangement system based on complete information about the dental root and skull. Firstly, a feature constraint database of a 3D teeth model is established. Secondly, for computed simulation of tooth movement, the reference planes and lines are defined by the anatomical reference points. A mathematical model for matching tooth patterns and the principles of rigid-body pose transformation are fully utilized. The positional relation between dental root and alveolar bone is considered during the design process. Finally, the relative pose relationships among the various teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system can arrange abnormal teeth very well and is sufficiently flexible. The positional relation between root and jaw is favorable. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.

  19. ALOS-PALSAR multi-temporal observation for describing land use and forest cover changes in Malaysia

    NASA Astrophysics Data System (ADS)

    Avtar, R.; Suzuki, R.; Ishii, R.; Kobayashi, H.; Nagai, S.; Fadaei, H.; Hirata, R.; Suhaili, A. B.

    2012-12-01

    The establishment of plantations in the carbon-rich peatlands of Southeast Asia has increased in the past decade. The need to support development in countries such as Malaysia has been reflected in a higher rate of conversion of forested areas to agricultural land use, in particular oil palm plantations. Use of optical data to monitor changes in peatland forests is difficult because of the high cloudiness in the tropical region. Synthetic Aperture Radar (SAR) based remote sensing can potentially be used to monitor changes in such forested landscapes. In this study, we demonstrate the capability of multi-temporal Fine-Beam Dual (FBD) data of the Phased Array L-band Synthetic Aperture Radar (PALSAR) to detect forest cover changes from peatland to other land uses such as oil palm plantation. The backscattering properties of the radar were evaluated to estimate changes in the forest cover. Temporal analysis of PALSAR FBD data shows that conversion of peatland forest to oil palm can be detected by analyzing changes in the values of σ°HH and σ°HV. Areas under peat forest are characterized by high values of σ°HH (-7.89 dB) and σ°HV (-12.13 dB). The value of σ°HV decreased by about 2-4 dB with the conversion of peatland to a plantation area, and the ratio σ°HH/σ°HV increased. Changes in σ°HV are more useful for identifying peatland conversion than changes in σ°HH. The results indicate the potential of PALSAR to estimate peatland forest conversion based on thresholding of σ°HV or σ°HH/σ°HV for monitoring changes in peatland forest. This would improve our understanding of the temporal change and its effect on the peatland forest ecosystem.

  20. Describing Myxococcus xanthus Aggregation Using Ostwald Ripening Equations for Thin Liquid Films

    PubMed Central

    Bahar, Fatmagül; Pratt-Szeliga, Philip C.; Angus, Stuart; Guo, Jiaye; Welch, Roy D.

    2014-01-01

    When starved, a swarm of millions of Myxococcus xanthus cells coordinate their movement from outward swarming to inward coalescence. The cells then execute a synchronous program of multicellular development, arranging themselves into dome-shaped aggregates. Over the course of development, about half of the initial aggregates disappear, while others persist and mature into fruiting bodies. This work seeks to develop a quantitative model for aggregation that accurately simulates which aggregates will disappear and which will persist. We analyzed time-lapse movies of M. xanthus development, modeled aggregation using the equations that describe Ostwald ripening of droplets in thin liquid films, and predicted the disappearance and persistence of aggregates with an average accuracy of 85%. We then experimentally validated a prediction that is fundamental to this model by tracking individual fluorescent cells as they moved between aggregates and demonstrating that cell movement towards and away from aggregates correlates with aggregate disappearance. Describing development through this model may limit the number and type of molecular genetic signals needed to complete M. xanthus development, and it provides numerous additional testable predictions. PMID:25231319

  1. Antiferromagnetic Heisenberg spin-1 chain: Magnetic susceptibility of the Haldane chain described using scaling

    NASA Astrophysics Data System (ADS)

    Souletie, Jean; Drillon, Marc; Rabu, Pierre; Pati, Swapan K.

    2004-08-01

    The phenomenological expression χT/(Ng²μB²/k) = C1(n) exp(−W1(n)/T) + C2(n) exp(−W2(n)/T) describes very accurately the temperature dependence of the magnetic susceptibility computed for antiferromagnetic rings of Heisenberg spins S = 1, whose size n is even and ranges from 6 to 20. This expression has been obtained through a strategy justified by scaling considerations together with finite-size numerical calculations. For large n, the coefficients of the expression converge towards C1 = 0.125, W1 = 0.451J, C2 = 0.564, W2 = 1.793J (J is the exchange constant), which are appropriate for describing the susceptibility of the spin-1 Haldane chain. The Curie constant, the paramagnetic Curie-Weiss temperature, the correlation length at T = 0, and the Haldane gap are found to be closely related to these coefficients. With this expression, a very good description of the magnetic behavior of Y2BaNiO5 and of Ni(C2H8N2)2NO2ClO4 (NENP), the archetype of the Haldane-gap systems, is achieved over the whole temperature range.
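The quoted large-n coefficients can be plugged directly into the two-exponential expression; a minimal sketch with temperature measured in units of J and χT in units of Ng²μB²/k:

```python
import math

# Large-n coefficients from the abstract (W1, W2 in units of J).
C1, W1, C2, W2 = 0.125, 0.451, 0.564, 1.793

def chi_T_reduced(T_over_J):
    """chi*T/(N g^2 muB^2 / k) as a function of T/J."""
    t = T_over_J
    return C1 * math.exp(-W1 / t) + C2 * math.exp(-W2 / t)

# Activated (gapped) behavior at low T; approaches the Curie constant
# C1 + C2 ≈ 0.689 at high temperature.
print(chi_T_reduced(0.1) < 0.01, round(chi_T_reduced(100.0), 3))
```

The low-temperature suppression reflects the Haldane gap, which the abstract relates to the W coefficients.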

  2. Using the GVB Ansatz to develop ensemble DFT method for describing multiple strongly correlated electron pairs.

    PubMed

    Filatov, Michael; Martínez, Todd J; Kim, Kwang S

    2016-08-21

    Ensemble density functional theory (DFT) furnishes a rigorous theoretical framework for describing the non-dynamic electron correlation arising from (near) degeneracy of several electronic configurations. Ensemble DFT naturally leads to fractional occupation numbers (FONs) for several Kohn-Sham (KS) orbitals, which thereby become variational parameters of the methodology. The currently available implementation of ensemble DFT in the form of the spin-restricted ensemble-referenced KS (REKS) method was originally designed for systems with only two fractionally occupied KS orbitals, which was sufficient to accurately describe dissociation of a single chemical bond or the singlet ground state of biradicaloid species. To extend applicability of the method to systems with several dissociating bonds or to polyradical species, more fractionally occupied orbitals must be included in the ensemble description. Here we investigate the possibility of developing an extended REKS methodology with the help of generalized valence bond (GVB) wavefunction theory. The use of GVB enables one to derive a simple and physically transparent energy expression depending explicitly on the FONs of several KS orbitals. In this way, a version of the REKS method with four electrons in four fractionally occupied orbitals is derived and its accuracy in the calculation of various types of strongly correlated molecules is investigated. We propose a possible scheme to ameliorate the partial size-inconsistency that results from perfect spin-pairing. We conjecture that perfect pairing natural orbital (NO) functionals of reduced density matrix functional theory (RDMFT) should also display partial size-inconsistency. PMID:26947515

  3. Case series describing an outbreak of highly resistant vancomycin Staphylococcus aureus (possible VISA/VRSA) infections in orthopedic related procedures in Guatemala.

    PubMed

    Antony, Suresh J

    2014-01-01

    This is a case series describing an outbreak of VRSA/VISA-associated infections in orthopedic-related procedures that occurred on a medical mission trip in Antigua, Guatemala. The paper describes the clinical features, microbiology and treatment options available to treat such infections in a Third World country. It also highlights the difficulty of making an accurate diagnosis with suboptimal microbiological support.

  4. Using Artifacts to Describe Instruction: Lessons Learned from Studying Reform-Oriented Instruction in Middle School Mathematics and Science. CSE Technical Report 705

    ERIC Educational Resources Information Center

    Borko, Hilda; Kuffner, Karin L.; Arnold, Suzanne C.; Creighton, Laura; Stecher, Brian M.; Martinez, Felipe; Barnes, Dionne; Gilbert, Mary Lou

    2007-01-01

    It is important to be able to describe instructional practices accurately in order to support research on "what works" in education and professional development as a basis for efforts to improve practice. This report describes a project to develop procedures for characterizing classroom practices in mathematics and science on the basis of…

  5. Accurate description of calcium solvation in concentrated aqueous solutions.

    PubMed

    Kohagen, Miriam; Mason, Philip E; Jungwirth, Pavel

    2014-07-17

    Calcium is one of the biologically most important ions; however, its accurate description by classical molecular dynamics simulations is complicated by strong electrostatic and polarization interactions with surroundings due to its divalent nature. Here, we explore the recently suggested approach for effectively accounting for polarization effects via ionic charge rescaling and develop a new and accurate parametrization of the calcium dication. Comparison to neutron scattering and viscosity measurements demonstrates that our model allows for an accurate description of concentrated aqueous calcium chloride solutions. The present model should find broad use in efficient and accurate modeling of calcium in aqueous environments, such as those encountered in biological and technological applications.

  6. Accurate Evaluation of Ion Conductivity of the Gramicidin A Channel Using a Polarizable Force Field without Any Corrections.

    PubMed

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui

    2016-06-14

    Classical molecular dynamics (MD) simulations of membrane proteins face significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, owing to their inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K(+) and Na(+) passing through the gA channel are much closer to the experimental results than those from any classical MD simulations, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823

  7. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    NASA Astrophysics Data System (ADS)

    Granata, Daniele; Carnevale, Vincenzo

    2016-08-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset.
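The geodesic-approximation ingredient can be sketched as follows: build a k-NN graph on the data, take shortest paths as geodesic distances, and inspect the pairwise-distance distribution whose shape around the maximum carries the dimensionality information. This is a simplified illustration of the idea, not the authors' full ID estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 2))  # points sampled from a 2D manifold

# Euclidean distances and a symmetric k-NN adjacency (k = 6).
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
k = 6
adj = np.full_like(D, np.inf)
for i in range(len(X)):
    nn = np.argsort(D[i])[1:k + 1]   # skip self at index 0
    adj[i, nn] = D[i, nn]
    adj[nn, i] = D[nn, i]
np.fill_diagonal(adj, 0.0)

# Floyd-Warshall shortest paths approximate geodesic distances.
G = adj.copy()
for m in range(len(X)):
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])

# Distribution of (finite, nonzero) graph distances; its behavior near
# the maximum is what the ID estimator exploits.
finite = G[np.isfinite(G) & (G > 0)]
print(G.shape, finite.size)
```

For larger datasets a sparse shortest-path routine (e.g. scipy.sparse.csgraph) would replace the dense Floyd-Warshall loop.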

  8. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    PubMed Central

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  9. A colorimetric-based accurate method for the determination of enterovirus 71 titer.

    PubMed

    Pourianfar, Hamid Reza; Javadi, Arman; Grollo, Lara

    2012-12-01

    The 50 % tissue culture infectious dose (TCID50) is still one of the most commonly used techniques for estimating virus titers. However, the traditional TCID50 assay is time-consuming, susceptible to subjective errors and generates only quantal data. Here, we describe a colorimetric-based approach for the titration of Enterovirus 71 (EV71) using a modified method for making virus dilutions. In summary, the titration of EV71 using MTT or MTS staining with a modified virus dilution method decreased the time of the assay and eliminated the subjectivity of observational results, improving accuracy, reproducibility and reliability of virus titration, in comparison with the conventional TCID50 approach (p < 0.01). In addition, the results provided evidence that there was better correlation between a plaquing assay and our approach when compared to the traditional TCID50 approach. This increased accuracy also improved the ability to predict the number of virus plaque forming units present in a solution. These improvements could be of use for any virological experimentation, where a quick, accurate titration of a virus capable of causing cell destruction is required or a sensible estimate of the number of viral plaques based on the TCID50 of a virus is desired.
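    For context, the conventional quantal TCID50 endpoint that the authors compare against is usually computed with the Reed-Muench method. A minimal sketch with made-up well counts (illustrative only, not data from the study):

```python
def reed_muench_log10_endpoint(log10_dils, infected, total):
    """Reed-Muench 50% endpoint (log10 of the endpoint dilution).
    Rows are ordered from most concentrated to most dilute."""
    n = len(log10_dils)
    uninfected = [t - i for i, t in zip(infected, total)]
    # Infected counts are pooled over this and all more dilute rows (a culture
    # infected at a dilution would also be infected at any more concentrated one);
    # uninfected counts are pooled over this and all more concentrated rows.
    cum_inf = [sum(infected[i:]) for i in range(n)]
    cum_unf = [sum(uninfected[:i + 1]) for i in range(n)]
    pct = [a / (a + b) for a, b in zip(cum_inf, cum_unf)]
    a = max(i for i in range(n) if pct[i] >= 0.5)   # last row at or above 50%
    pd = (pct[a] - 0.5) / (pct[a] - pct[a + 1])     # proportionate distance
    return log10_dils[a] - pd * (log10_dils[a] - log10_dils[a + 1])

# Hypothetical tenfold dilution series, 8 wells per dilution:
ep = reed_muench_log10_endpoint([-1, -2, -3, -4, -5, -6],
                                [8, 8, 6, 3, 1, 0], [8] * 6)
print(round(ep, 2))   # -3.71, i.e. a titer of about 10^3.71 TCID50 per inoculum
```

    The colorimetric readout in the abstract replaces the subjective visual scoring that feeds the `infected` column here, not the endpoint arithmetic itself.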

  10. Accurate 3D reconstruction of complex blood vessel geometries from intravascular ultrasound images: in vitro study.

    PubMed

    Subramanian, K R; Thubrikar, M J; Fowler, B; Mostafavi, M T; Funk, M W

    2000-01-01

    We present a technique that accurately reconstructs complex three dimensional blood vessel geometry from 2D intravascular ultrasound (IVUS) images. Biplane x-ray fluoroscopy is used to image the ultrasound catheter tip at a few key points along its path as the catheter is pulled through the blood vessel. An interpolating spline describes the continuous catheter path. The IVUS images are located orthogonal to the path, resulting in a non-uniform structured scalar volume of echo densities. Isocontour surfaces are used to view the vessel geometry, while transparency and clipping enable interactive exploration of interior structures. The two geometries studied are a bovine artery vascular graft having a U-shape and a constriction, and a canine carotid artery having multiple branches and a constriction. Accuracy of the reconstructions is established by comparing the reconstructions to (1) silicone moulds of the vessel interior, (2) biplane x-ray images, and (3) the original echo images. Excellent shape and geometry correspondence was observed in both geometries. Quantitative measurements made at key locations of the 3D reconstructions also were in good agreement with those made in silicone moulds. The proposed technique is easily adoptable in clinical practice, since it uses x-rays with minimal exposure and existing IVUS technology. PMID:11105284

  11. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets.

    PubMed

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant "collective" variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  12. A particle-tracking approach for accurate material derivative measurements with tomographic PIV

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Scarano, Fulvio

    2013-08-01

    The evaluation of the instantaneous 3D pressure field from tomographic PIV data relies on the accurate estimate of the fluid velocity material derivative, i.e., the velocity time rate of change following a given fluid element. To date, techniques that reconstruct the fluid parcel trajectory from a time sequence of 3D velocity fields obtained with Tomo-PIV have already been introduced. However, an accurate evaluation of the fluid element acceleration requires trajectory reconstruction over a relatively long observation time, which reduces random errors. On the other hand, simple integration and finite difference techniques suffer from increasing truncation errors when complex trajectories need to be reconstructed over a long time interval. In principle, particle-tracking velocimetry techniques (3D-PTV) enable the accurate reconstruction of single particle trajectories over a long observation time. Nevertheless, PTV can be reliably performed only at limited particle image number density due to errors caused by overlapping particles. The particle image density can be substantially increased by use of tomographic PIV. In the present study, a technique to combine the higher information density of tomographic PIV and the accurate trajectory reconstruction of PTV is proposed (Tomo-3D-PTV). The particle-tracking algorithm is applied to the tracers detected in the 3D domain obtained by tomographic reconstruction. The 3D particle information is highly sparse and intersection of trajectories is virtually impossible. As a result, ambiguities in the particle path identification over subsequent recordings are easily avoided. Polynomial fitting functions are introduced that describe the particle position in time with sequences based on several recordings, leading to the reduction in truncation errors for complex trajectories. Moreover, the polynomial regression approach provides a reduction in the random errors due to the particle position measurement. Finally, the acceleration
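    The polynomial-regression step can be sketched in isolation: fit a low-order polynomial to a particle's position over several recordings and differentiate it to obtain the acceleration. A minimal, hypothetical 1-D illustration with a synthetic noisy track (the actual Tomo-3D-PTV pipeline fits 3-D trajectories extracted from tomographic reconstructions):

```python
import random

def fit_quadratic(ts, xs):
    """Least-squares fit x(t) = a + b t + c t^2; returns (a, b, c)."""
    # Normal equations A^T A p = A^T x for the degree-2 Vandermonde matrix A.
    S = [sum(t ** k for t in ts) for k in range(5)]
    M = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    v = [sum(x * t ** k for t, x in zip(ts, xs)) for k in range(3)]
    # Tiny Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    p = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        p[r] = (v[r] - sum(M[r][c] * p[c] for c in range(r + 1, 3))) / M[r][r]
    return tuple(p)

random.seed(1)
ts = [k * 0.01 for k in range(11)]     # 11 recordings over 0.1 s
true_acc = -9.81
xs = [0.5 * true_acc * t * t + 2.0 * t + random.gauss(0, 1e-5) for t in ts]
a, b, c = fit_quadratic(ts, xs)
print(2 * c)   # recovered acceleration, close to the true -9.81
```

    Because the fit pools many recordings, position noise is averaged down, which is the random-error reduction the abstract attributes to the polynomial regression; a two-point finite difference on the same data would be far noisier.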

  13. How Clean Are Hotel Rooms? Part I: Visual Observations vs. Microbiological Contamination.

    PubMed

    Almanza, Barbara A; Kirsch, Katie; Kline, Sheryl Fried; Sirsat, Sujata; Stroia, Olivia; Choi, Jin Kyung; Neal, Jay

    2015-01-01

    Current evidence of hotel room cleanliness is based on observation rather than empirically based microbial assessment. The purpose of the study described here was to determine if observation provides an accurate indicator of cleanliness. Results demonstrated that visual assessment did not accurately predict microbial contamination. Although testing standards have not yet been established for hotel rooms and will be evaluated in Part II of the authors' study, potential microbial hazards included the sponge and mop (housekeeping cart), toilet, bathroom floor, bathroom sink, and light switch. Hotel managers should increase cleaning in key areas to reduce guest exposure to harmful bacteria. PMID:26427262

  14. Accurate Free Vibration Analysis of the Completely Free Rectangular Mindlin Plate

    NASA Astrophysics Data System (ADS)

    Gorman, D. J.; Ding, Wei

    1996-01-01

    The superposition method is exploited to obtain accurate solutions for the natural frequencies and mode shapes of the completely free Mindlin plate. Computed eigenvalues are tabulated for a number of plate aspect and thickness ratios. Steps taken to avoid computational instability are described. Difficulties associated with choosing mode shape functions, particularly when free edges are involved, have always hindered researchers utilizing the Rayleigh-Ritz method. Such difficulties are obviated here. To the authors' knowledge, this represents the first accurate comprehensive solution to this important plate vibration problem.

  15. Accurate localization and echocardiographic-pathologic correlation of tricuspid valve angiolipoma by intraoperative transesophageal echocardiography.

    PubMed

    Misra, Satyajeet; Sinha, Prabhat K; Koshy, Thomas; Sandhyamani, Samavedam; Parija, Chandrabhanu; Gopal, Kirun

    2009-11-01

    Angiolipoma (angiolipohamartoma) of the tricuspid valve (TV) is a rare tumor which may occasionally be misdiagnosed as right atrial (RA) myxoma. Transesophageal echocardiography (TEE) provides accurate information regarding the size, shape, mobility as well as site of attachment of RA tumors and is a superior modality compared to transthoracic echocardiography (TTE). Correct diagnosis of RA tumors has therapeutic significance and guides management of patients, as myxomas are generally more aggressively managed than lipomas. We describe a rare case of a pedunculated angiolipoma of the TV which was misdiagnosed as RA myxoma on TTE and discuss the echocardiographic-pathologic correlates of the tumor as well as its accurate localization by TEE.

  16. Expected IPS variations due to a disturbance described by a 3-D MHD model

    NASA Technical Reports Server (NTRS)

    Tappin, S. J.; Dryer, M.; Han, S. M.; Wu, S. T.

    1988-01-01

    The variations of interplanetary scintillation due to a disturbance described by a three-dimensional, time-dependent, MHD model of the interplanetary medium are calculated. The resulting simulated IPS maps are compared with observations of real disturbances and it is found that there is some qualitative agreement. It is concluded that the MHD model with a more realistic choice of input conditions would probably provide a useful description of many interplanetary disturbances.

  17. CLOMP: Accurately Characterizing OpenMP Application Overheads

    SciTech Connect

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B R

    2008-11-10

    Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy-to-understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. We also show that CLOMP captures limitations for OpenMP parallelization on SMT and NUMA systems. Finally, CLOMPI, our MPI extension of CLOMP, demonstrates which aspects of OpenMP interact poorly with MPI when MPI helper threads cannot run on the NIC.

  18. Novel Cortical Thickness Pattern for Accurate Detection of Alzheimer's Disease.

    PubMed

    Zheng, Weihao; Yao, Zhijun; Hu, Bin; Gao, Xiang; Cai, Hanshu; Moore, Philip

    2015-01-01

    Brain network occupies an important position in representing abnormalities in Alzheimer's disease (AD) and mild cognitive impairment (MCI). Currently, most studies have focused only on morphological features of regions of interest without exploring the interregional alterations. In order to investigate the potential discriminative power of a morphological network in AD diagnosis and to provide supportive evidence on the feasibility of an individual structural network study, we propose a novel approach of extracting the correlative features from magnetic resonance imaging, which consists of a two-step approach for constructing an individual thickness network with low computational complexity. Firstly, multi-distance combination is utilized for accurate evaluation of between-region dissimilarity; then the dissimilarity is transformed to connectivity via calculation of a correlation function. An evaluation of the proposed approach has been conducted with 189 normal controls, 198 MCI subjects, and 163 AD patients using machine learning techniques. Results show that the observed correlative feature provides a significant improvement in classification performance compared with cortical thickness, with an accuracy of 89.88% and an area under the receiver operating characteristic curve of 0.9588. We further improved the performance by integrating both thickness and apolipoprotein E ɛ4 allele information with correlative features. The new accuracies achieved are 92.11% and 79.37% in separating AD from normal controls and AD converters from non-converters, respectively. Differences between using diverse distance measurements and various correlation transformation functions are also discussed to explore an optimal way for network establishment. PMID:26444768

  19. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; et al

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  20. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    SciTech Connect

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  1. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.

  2. HOW ACCURATE IS OUR KNOWLEDGE OF THE GALAXY BIAS?

    SciTech Connect

    More, Surhud

    2011-11-01

    Observations of the clustering of galaxies can provide useful information about the distribution of dark matter in the universe. In order to extract accurate cosmological parameters from galaxy surveys, it is important to understand how the distribution of galaxies is biased with respect to the matter distribution. The large-scale bias of galaxies can be quantified either by directly measuring the large-scale (λ ≳ 60 h⁻¹ Mpc) power spectrum of galaxies or by modeling the halo occupation distribution of galaxies using their clustering on small scales (λ ≲ 30 h⁻¹ Mpc). We compare the luminosity dependence of the galaxy bias (both the shape and the normalization) obtained by these methods and check for consistency. Our comparison reveals that the bias of galaxies obtained by the small-scale clustering measurements is systematically larger than that obtained from the large-scale power spectrum methods. We also find systematic discrepancies in the shape of the galaxy-bias-luminosity relation. We comment on the origin and possible consequences of these discrepancies which had remained unnoticed thus far.

  3. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  4. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  5. On canonical cylinder sections for accurate determination of contact angle in microgravity

    SciTech Connect

    Concus, P.; Zabihi, F. (California Univ., Berkeley, CA, Dept. of Mathematics); Finn, R. (Dept. of Mathematics)

    1992-07-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. "Canonical" geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired "nearly-discontinuous" behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  6. On canonical cylinder sections for accurate determination of contact angle in microgravity

    SciTech Connect

    Concus, P.; Zabihi, F.; Finn, R.

    1992-07-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. "Canonical" geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired "nearly-discontinuous" behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  7. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    NASA Astrophysics Data System (ADS)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard-Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makato Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Mikato Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. This will in turn lead to an

  8. Misestimation of temperature when applying Maxwellian distributions to space plasmas described by kappa distributions

    NASA Astrophysics Data System (ADS)

    Nicolaou, Georgios; Livadiotis, George

    2016-11-01

    This paper presents the misestimation of temperature when observations from a kappa distributed plasma are analyzed as a Maxwellian. One common method to calculate the space plasma parameters is by fitting the observed distributions using known analytical forms. More often, the distribution function is included in a forward model of the instrument's response, which is used to reproduce the observed energy spectrograms for a given set of plasma parameters. In both cases, the modeled plasma distribution fits the measurements to estimate the plasma parameters. The distribution function is often considered to be Maxwellian even though in many cases the plasma is better described by a kappa distribution. In this work we show that if the plasma is described by a kappa distribution, the derived temperature assuming a Maxwell distribution can be significantly off. More specifically, we derive the plasma temperature by fitting a Maxwell distribution to pseudo-data produced by a kappa distribution, and then examine the difference in the derived temperature as a function of the kappa index. We further consider the concept of using a forward model of a typical plasma instrument to fit its observations. We find that the relative error of the derived temperature is highly dependent on the kappa index and occasionally on the instrument's field of view and response.
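    The bias described above is easy to reproduce numerically: take an (unnormalized) 1-D kappa distribution, least-squares fit a Maxwellian to its logarithm over a finite velocity window, and compare the fitted temperature with the one implied by the kappa distribution's actual variance. A hypothetical sketch (the kappa-pdf convention and the fit window are illustrative assumptions, not the authors' forward model):

```python
import math

def kappa_pdf_unnorm(v, kappa, w):
    """1-D kappa distribution (unnormalized), one common convention;
    its variance is w^2 kappa / (2 kappa - 3), finite for kappa > 3/2."""
    return (1.0 + v * v / (kappa * w * w)) ** (-kappa)

def maxwellian_fit_T(vs, f, sigmas):
    """Grid-search least-squares fit of log f to log B - v^2/(2 sigma^2);
    returns the fitted temperature sigma^2 (units with m = k_B = 1)."""
    logf = [math.log(x) for x in f]
    best = None
    for s in sigmas:
        model = [-v * v / (2 * s * s) for v in vs]
        off = sum(lf - m for lf, m in zip(logf, model)) / len(vs)  # best log B
        sse = sum((lf - m - off) ** 2 for lf, m in zip(logf, model))
        if best is None or sse < best[0]:
            best = (sse, s)
    return best[1] ** 2

kappa, w = 2.0, 1.0
vs = [0.05 * k for k in range(1, 61)]          # fit window 0 < v <= 3
f = [kappa_pdf_unnorm(v, kappa, w) for v in vs]
T_fit = maxwellian_fit_T(vs, f, [0.3 + 0.01 * i for i in range(200)])
T_true = w * w * kappa / (2 * kappa - 3)       # variance of this kappa pdf
print(T_fit / T_true)                          # noticeably below 1 for small kappa
```

    The Maxwellian fit tracks the distribution's core and misses the variance carried by the suprathermal tail, so the derived temperature falls well short of the true one; repeating the experiment for larger kappa shrinks the discrepancy, consistent with the kappa-index dependence reported in the abstract.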

  9. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2012-12-01

    Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and were used to solve entirely different problems. We show that by combining two classical models, namely Boussinesq's equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated far more accurately than with the latter alone. Under the assumption that the aquifer basis is sub-horizontal, the mean residence time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater residence time that can refine those obtained from tritium measurements. This approach is demonstrated in a case study predicting atrazine concentration trends in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the best agreement between observed and predicted time of trend reversal was reached for the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was, however, poorer for the springs displaying the gentlest recessions, possibly indicating the stronger influence of continuous groundwater recharge during the dry period.
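    The core of the coupling can be illustrated in its simplest limit. For a linear-reservoir (exponential) recession Q(t) = Q0·exp(-at), the reservoir turnover time 1/a equals the mean residence time of the exponential transit time model, so a recession constant fitted to dry-season discharge directly constrains the residence time. A hypothetical sketch with synthetic data (the authors use the Boussinesq solution; the linear reservoir here is only a stand-in):

```python
import math

def recession_constant(ts, qs):
    """Least-squares slope of -log Q vs t for Q = Q0 exp(-a t)."""
    n = len(ts)
    mt = sum(ts) / n
    ml = sum(math.log(q) for q in qs) / n
    num = sum((t - mt) * (math.log(q) - ml) for t, q in zip(ts, qs))
    den = sum((t - mt) ** 2 for t in ts)
    return -num / den

# Synthetic dry-season recession: a = 0.01 per day, i.e. a 100-day turnover.
ts = list(range(0, 120, 5))
qs = [50.0 * math.exp(-0.01 * t) for t in ts]
a = recession_constant(ts, qs)
print(1.0 / a)   # mean residence time of a linear reservoir, 100 days
```

    An independent residence-time estimate of this kind is what lets the recession data tighten the transit-time-distribution parameters otherwise constrained only by tritium.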

  10. Early auroral observations

    NASA Astrophysics Data System (ADS)

    Silverman, S.

    1998-06-01

    Early auroral observations from Europe and Asia, and catalogs of these observations, are described and discussed. Cautions to be aware of when using these data include the dating of the observation, and the cultural context, especially for observations included in histories and annals as omens and portents. Specific attention is then paid to observations from classical Greece and Rome, the Middle East in biblical times, Asian annals, and the period from late antiquity through the medieval period.

  11. The challenge of accurately documenting bee species richness in agroecosystems: bee diversity in eastern apple orchards.

    PubMed

    Russo, Laura; Park, Mia; Gibbs, Jason; Danforth, Bryan

    2015-09-01

    Bees are important pollinators of agricultural crops, and bee diversity has been shown to be closely associated with pollination, a valuable ecosystem service. Higher functional diversity and species richness of bees have been shown to lead to higher crop yield. Bees simultaneously represent a mega-diverse taxon that is extremely challenging to sample thoroughly and an important group to understand because of pollination services. We sampled bees visiting apple blossoms in 28 orchards over 6 years. We used species rarefaction analyses to test for the completeness of sampling and the relationship between species richness and sampling effort, orchard size, and percent agriculture in the surrounding landscape. We performed more than 190 h of sampling, collecting 11,219 specimens representing 104 species. Despite the sampling intensity, we captured <75% of expected species richness at more than half of the sites. For most of these, the variation in bee community composition between years was greater than among sites. Species richness was influenced by percent agriculture, orchard size, and sampling effort, but we found no factors explaining the difference between observed and expected species richness. Competition between honeybees and wild bees did not appear to be a factor, as we found no correlation between honeybee and wild bee abundance. Our study shows that the pollinator fauna of agroecosystems can be diverse and challenging to thoroughly sample. We demonstrate that there is high temporal variation in community composition and that sites vary widely in the sampling effort required to fully describe their diversity. In order to maximize pollination services provided by wild bee species, we must first accurately estimate species richness. For researchers interested in providing this estimate, we recommend multiyear studies and rarefaction analyses to quantify the gap between observed and expected species richness.
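    The gap between observed and expected richness that the authors quantify with rarefaction is commonly summarized with nonparametric estimators such as Chao1, which extrapolates richness from the counts of rare species. A minimal sketch with made-up abundances (not data from the study):

```python
from collections import Counter

def chao1(abundances):
    """Chao1 richness estimate: S_obs + F1^2 / (2 F2), where F1 and F2 are the
    numbers of singleton and doubleton species (bias-corrected form when F2 = 0)."""
    counts = Counter(abundances)          # abundance -> number of species with it
    s_obs = len(abundances)               # one entry per observed species
    f1, f2 = counts.get(1, 0), counts.get(2, 0)
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0

# Hypothetical per-species abundances for 8 observed species:
sample = [120, 40, 12, 5, 2, 1, 1, 1]
print(chao1(sample))   # 8 observed + 3^2/(2*1) = 12.5 estimated species
```

    A large Chao1 excess over the observed richness, like the <75% sampling completeness reported above, signals that many species at a site remain undetected and more sampling effort is needed.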

  12. The challenge of accurately documenting bee species richness in agroecosystems: bee diversity in eastern apple orchards.

    PubMed

    Russo, Laura; Park, Mia; Gibbs, Jason; Danforth, Bryan

    2015-09-01

    Bees are important pollinators of agricultural crops, and bee diversity has been shown to be closely associated with pollination, a valuable ecosystem service. Higher functional diversity and species richness of bees have been shown to lead to higher crop yield. Bees simultaneously represent a mega-diverse taxon that is extremely challenging to sample thoroughly and an important group to understand because of pollination services. We sampled bees visiting apple blossoms in 28 orchards over 6 years. We used species rarefaction analyses to test for the completeness of sampling and the relationship between species richness and sampling effort, orchard size, and percent agriculture in the surrounding landscape. We performed more than 190 h of sampling, collecting 11,219 specimens representing 104 species. Despite the sampling intensity, we captured <75% of expected species richness at more than half of the sites. For most of these, the variation in bee community composition between years was greater than among sites. Species richness was influenced by percent agriculture, orchard size, and sampling effort, but we found no factors explaining the difference between observed and expected species richness. Competition between honeybees and wild bees did not appear to be a factor, as we found no correlation between honeybee and wild bee abundance. Our study shows that the pollinator fauna of agroecosystems can be diverse and challenging to thoroughly sample. We demonstrate that there is high temporal variation in community composition and that sites vary widely in the sampling effort required to fully describe their diversity. In order to maximize pollination services provided by wild bee species, we must first accurately estimate species richness. For researchers interested in providing this estimate, we recommend multiyear studies and rarefaction analyses to quantify the gap between observed and expected species richness. PMID:26380684


  14. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    NASA Astrophysics Data System (ADS)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

for inclusion in standard atmospheric and planetary spectroscopic databases. The methods involved in computing the ab initio potential energy and dipole moment surfaces involved minor corrections to the equilibrium S-O distance, which produced good agreement with experimentally determined rotational energies. However, the purely ab initio method was not able to reproduce an equally spectroscopically accurate representation of vibrational motion. We therefore present an empirical refinement to this original ab initio potential surface, based on the available experimental data. This will not only be used to reproduce the room-temperature spectrum to a greater degree of accuracy, but is essential in the production of a larger, accurate line list necessary for the simulation of higher-temperature spectra: we aim for coverage suitable for T ≤ 800 K. Our preliminary studies on SO3 have also shown it to exhibit an interesting "forbidden" rotational spectrum and "clustering" of rotational states; to our knowledge this phenomenon has not been observed in other examples of trigonal planar molecules, and it is an investigative avenue we wish to pursue. Finally, the IR absorption bands for SO2 and SO3 exhibit a strong overlap, and we intend to include SO2 as a complement to these studies in the near future.

  15. Lunar occultation of Saturn. IV - Astrometric results from observations of the satellites

    NASA Technical Reports Server (NTRS)

    Dunham, D. W.; Elliot, J. L.

    1978-01-01

    The method of determining local lunar limb slopes, and the consequent time scale needed for diameter studies, from accurate occultation timings at two nearby telescopes is described. Results for photoelectric observations made at Mauna Kea Observatory during the occultation of Saturn's satellites on March 30, 1974, are discussed. Analysis of all observations of occultations of Saturn's satellites during 1974 indicates possible errors in the ephemerides of Saturn and its satellites.

  16. Accurate Classification of RNA Structures Using Topological Fingerprints

    PubMed Central

    Li, Kejie; Gribskov, Michael

    2016-01-01

While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. Although the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures are often only partially correct or incomplete, and frequently lack pseudoknots (a crucial aspect of biological activity). A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families with diverse sizes and functions, containing pseudoknots, and with little sequence similarity, an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571
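    The fingerprint comparison can be illustrated with a toy set-based similarity. The real method compares topological spectra of graphs, so the motif labels and the Jaccard measure below are illustrative simplifications, not the paper's algorithm.

```python
def fingerprint_similarity(fp_a, fp_b):
    """Jaccard similarity between two RNA 'fingerprints', each modelled
    here as a set of subgraph labels (a simplification of the paper's
    topological spectrum)."""
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Two hypothetical structures sharing most stem-arrangement subgraphs
rna1 = {"H", "HH", "HP", "HHP"}   # e.g. hairpin/pseudoknot motif labels
rna2 = {"H", "HH", "HHP"}
sim = fingerprint_similarity(rna1, rna2)   # 3/4 = 0.75
```

    Because omitted stems only remove elements from one set, similarity degrades gracefully for incomplete structures, mirroring the tolerance described above.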

  17. Some strategies to address the challenges of collecting observational data in a busy clinical environment.

    PubMed

    Jackson, Debra; McDonald, Glenda; Luck, Lauretta; Waine, Melissa; Wilkes, Lesley

    2016-01-01

    Studies drawing on observational methods can provide vital data to enhance healthcare. However, collecting observational data in clinical settings is replete with challenges, particularly where multiple data-collecting observers are used. Observers collecting data require shared understanding and training to ensure data quality, and particularly, to confirm accurate and consistent identification, discrimination and recording of data. The aim of this paper is to describe strategies for preparing and supporting multiple researchers tasked with collecting observational data in a busy, and often unpredictable, hospital environment. We hope our insights might assist future researchers undertaking research in similar settings. PMID:27188039

  18. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    NASA Astrophysics Data System (ADS)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable to systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  19. Accurate calculation of diffraction-limited encircled and ensquared energy.

    PubMed

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that fall outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
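    For a circular aperture, the encircled energy of the diffraction-limited PSF has the classical closed form E(v) = 1 - J0(v)^2 - J1(v)^2, with v the dimensionless radius. As a sketch, it can be evaluated with only the standard library by computing the Bessel functions from their integral representation; the step count is an arbitrary choice.

```python
import math

def bessel_j(n, x, steps=2000):
    """J_n(x) via the integral (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt,
    evaluated with the trapezoidal rule."""
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    total = 0.5 * (f(0.0) + f(math.pi)) + sum(f(i * h) for i in range(1, steps))
    return total * h / math.pi

def encircled_energy(v):
    """Fraction of total energy inside dimensionless radius v of the
    diffraction-limited (Airy) PSF: E(v) = 1 - J0(v)^2 - J1(v)^2."""
    return 1.0 - bessel_j(0, v) ** 2 - bessel_j(1, v) ** 2

e_core = encircled_energy(3.8317)   # first dark ring: ~0.838 of the energy
```

    The complement 1 - E(v) gives the energy falling outside a circular detector, the quantity the asymptotic expressions above estimate for large apertures.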

  20. Polyhedral Observation Cupola

    NASA Technical Reports Server (NTRS)

    Edelstein, Karen S.; Valle, Gerald D.

    1990-01-01

    Strong, lightweight structure includes facets with windows. Report describes concept for observation cupola for Space Station Freedom. Cupola used by crewmembers to observe docking of Space Shuttle, servicing of payloads, extravehicular activity, and other operations in which they could help by observing. Includes computer-generated pictures realistically depicting crewmembers' positions, workstation positions, and views through various windows.

  1. Teacher Observation Scales.

    ERIC Educational Resources Information Center

    Purdue Univ., Lafayette, IN. Educational Research Center.

    The Teacher Observation Scales include four instruments: Observer Rating Scale (ORS), Reading Strategies Check List, Arithmetic Strategies Check List, and Classroom Description. These instruments utilize trained observers to describe the teaching behavior, instructional strategies and physical characteristics in each classroom. On the ORS, teacher…

  2. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    NASA Astrophysics Data System (ADS)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard- Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makato Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Mikato Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. 
This will in turn lead to an

  3. An algorithm to detect and communicate the differences in computational models describing biological systems

    PubMed Central

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-01-01

    Motivation: Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model’s development over time. Results: Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models’ encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model’s history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. Availability and implementation: The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. Contact: martin.scharm@uni-rostock.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26490504
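    A minimal illustration of version-diffing XML-encoded models (SBML and CellML are both XML-based): this toy diff only compares (tag, id) pairs, a drastic simplification of what BiVeS actually does, and the two model snippets are invented.

```python
import xml.etree.ElementTree as ET

def diff_models(xml_a, xml_b):
    """Naive structural diff between two versions of an XML-encoded model.
    Reports elements, keyed by (tag, id attribute), present in only one
    version; ignores ordering, attributes other than id, and math."""
    def index(xml_text):
        root = ET.fromstring(xml_text)
        return {(el.tag, el.get("id")) for el in root.iter()}
    a, b = index(xml_a), index(xml_b)
    return {"removed": a - b, "added": b - a}

v1 = '<model><species id="A"/><species id="B"/></model>'
v2 = '<model><species id="A"/><species id="C"/></model>'
changes = diff_models(v1, v2)
# changes["added"] == {("species", "C")}; changes["removed"] == {("species", "B")}
```

    A production tool must additionally track renames, moved subtrees, and changes inside MathML expressions, which is where algorithms like the one in BiVeS earn their keep.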

  4. Enumerating the Progress of SETI Observations

    NASA Astrophysics Data System (ADS)

    Lesh, Lindsay; Tarter, Jill C.

    2015-01-01

    In a long-term project like SETI, accurate archiving of observations is imperative. This requires a database that is both easy to search - in order to know what data has or hasn't been acquired - and easy to update, no matter what form the results of an observation might be reported in. If the data can all be standardized, then the parameters of the nine-dimensional search space (including space, time, frequency (and bandwidth), sensitivity, polarization and modulation scheme) of completed observations for engineered signals can be calculated and compared to the total possible search volume. Calculating a total search volume that includes more than just spatial dimensions needs an algorithm that can adapt to many different variables, (e.g. each receiving instrument's capabilities). The method of calculation must also remain consistent when applied to each new SETI observation if an accurate fraction of the total search volume is to be found. Any planned observations can be evaluated against what has already been done in order to assess the efficacy of a new search. Progress against a desired goal can be evaluated, and the significance of null results can be properly understood.This paper describes a new, user-friendly archive and standardized computational tool that are being built at the SETI Institute in order to greatly ease the addition of new entries and the calculation of the search volume explored to date. The intent is to encourage new observers to better report the parameters and results of their observations, and to improve public understanding of ongoing progress and the importance of continuing the search for ETI signals into the future.
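    One way to picture the standardized computation: if the axes of the search space are treated as independent, an observation's fractional coverage is the product of its per-axis fractions. The axes, totals, and numbers below are hypothetical, and overlapping observations would need union logic rather than a plain sum of fractions.

```python
def observation_fraction(obs, axes):
    """Fraction of the total multidimensional search volume covered by one
    observation, as the product of per-axis fractional coverage.
    obs maps dimension name -> extent covered; axes maps the same names
    to the total extent of that dimension. Assumes independent axes."""
    frac = 1.0
    for dim, total in axes.items():
        frac *= obs.get(dim, 0.0) / total
    return frac

# Hypothetical axes: whole sky, a 10 GHz band, 100 sensitivity bins
axes = {"sky_deg2": 41253.0, "freq_GHz": 10.0, "sensitivity_bins": 100.0}
obs = {"sky_deg2": 20.0, "freq_GHz": 0.1, "sensitivity_bins": 10.0}
f = observation_fraction(obs, axes)   # tiny, as SETI progress tends to be
```

    Even a generous single observation covers a vanishing fraction of the nine-dimensional volume, which is why consistent bookkeeping across all observations matters.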

  5. Can an ab initio three-body virial equation describe the mercury gas phase?

    PubMed

    Wiebke, J; Wormit, M; Hellmann, R; Pahl, E; Schwerdtfeger, P

    2014-03-27

    We report a sixth-order ab initio virial equation of state (EOS) for mercury. The virial coefficients were determined in the temperature range from 500 to 7750 K using a three-body approximation to the N-body interaction potential. The underlying two-body and three-body potentials were fitted to highly accurate Coupled-Cluster interaction energies of Hg2 (Pahl, E.; Figgen, D.; Thierfelder, C.; Peterson, K. A.; Calvo, F.; Schwerdtfeger, P. J. Chem. Phys. 2010, 132, 114301-1) and equilateral-triangular configurations of Hg3. We find the virial coefficients of order four and higher to be negative and to have large absolute values over the entire temperature range considered. The validity of our three-body, sixth-order EOS seems to be limited to small densities of about 1.5 g cm(-3) and somewhat higher densities at higher temperatures. Termwise analysis and comparison to experimental gas-phase data suggest a small convergence radius of the virial EOS itself as well as a failure of the three-body interaction model (i.e., poor convergence of the many-body expansion for mercury). We conjecture that the nth-order term of the virial EOS is to be evaluated from the full n-body interaction potential for a quantitative picture. Consequently, an ab initio three-body virial equation cannot describe the mercury gas phase. PMID:24547987
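    The virial EOS referred to above has the standard density-series form p = rho*R*T*(1 + B2*rho + B3*rho^2 + ...). A sketch of evaluating a truncated series; the coefficients are illustrative placeholders, not the paper's fitted values.

```python
def virial_pressure(rho, T, B, R=8.314462618):
    """Pressure from a truncated virial equation of state:
    p = rho*R*T * (1 + B2*rho + B3*rho^2 + ...),
    with rho a molar density (mol/m^3), T in K, and B = [B2, B3, ...]."""
    series = 1.0
    for n, Bn in enumerate(B, start=2):
        series += Bn * rho ** (n - 1)
    return rho * R * T * series

# Ideal-gas check: with no virial coefficients, p = rho*R*T
p_ideal = virial_pressure(40.0, 300.0, [])
# Large negative higher-order coefficients (as found for mercury) pull the
# pressure down sharply with density, limiting the series' radius of validity.
p_real = virial_pressure(40.0, 300.0, [-1e-4])
```

    The rapid growth of the negative higher-order terms is exactly what restricts the EOS above to low densities.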

  6. Improved method of exponential sum fitting of transmissions to describe the absorption of atmospheric gases.

    PubMed

    Armbruster, W; Fischer, J

    1996-04-20

    For climate modeling and atmospheric research, such as investigations of global climate change, remote sensing of cloud properties, or the missing absorption problem in clouds, it is most important to describe adequately the absorption of radiation by atmospheric gases. An improved method for the exponential sum fitting of transmissions (ESPT) is developed to approximate this absorption accurately. Exponentials are estimated separately for any number of atmospheric-model layers, considering the pressure and temperature dependence of the absorption lines directly. As long as the error of the fit exceeds a limit of tolerance, the number of considered exponential terms is successively increased. The accuracy of the method presented yields a root-mean-square error of less than 0.03% for any atmospheric-model layer, whereas the commonly used one-layer techniques gain errors of up to 3% in the transmission functions for the upper layers. The commonly used ESPT methods consider only one atmospheric layer and introduce the pressure and temperature effects for the other model layers afterward.
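    The core idea of exponential sum fitting is to approximate a band-averaged transmission by a weighted sum of exponentials, T(u) ~ sum_i w_i * exp(-k_i * u) with sum_i w_i = 1, fitted separately per model layer as described above. A sketch with hypothetical weights and absorption coefficients:

```python
import math

def esft_transmission(u, weights, ks):
    """Band-mean transmission approximated by an exponential sum:
    T(u) ~ sum_i w_i * exp(-k_i * u), with the weights summing to 1.
    u is the absorber amount along the path; weights/ks are fitted terms."""
    return sum(w * math.exp(-k * u) for w, k in zip(weights, ks))

weights = [0.6, 0.3, 0.1]     # hypothetical fit for one model layer
ks = [0.05, 1.0, 20.0]        # weak, moderate, and strong absorption terms
t0 = esft_transmission(0.0, weights, ks)   # no absorber -> 1.0
t1 = esft_transmission(1.0, weights, ks)
```

    Adding terms until the fit error drops below tolerance, as the method above does, trades a few extra exponentials for per-layer accuracy.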

  7. The usefulness of higher-order constitutive relations for describing the Knudsen layer.

    SciTech Connect

    Gallis, Michail A.; Lockerby, Duncan A.; Reese, Jason M.

    2005-03-01

    The Knudsen layer is an important rarefaction phenomenon in gas flows in and around microdevices. Its accurate and efficient modeling is of critical importance in the design of such systems and in predicting their performance. In this paper we investigate the potential that higher-order continuum equations may have to model the Knudsen layer, and compare their predictions to high-accuracy DSMC (direct simulation Monte Carlo) data, as well as a standard result from kinetic theory. We find that, for a benchmark case, the most common higher-order continuum equation sets (Grad's 13 moment, Burnett, and super-Burnett equations) cannot capture the Knudsen layer. Variants of these equation families have, however, been proposed and some of them can qualitatively describe the Knudsen layer structure. To make quantitative comparisons, we obtain additional boundary conditions (needed for unique solutions to the higher-order equations) from kinetic theory. However, we find the quantitative agreement with kinetic theory and DSMC data is only slight.

  8. A method for describing the canopy architecture of coppice poplar with allometric relationships.

    PubMed

    Casella, Eric; Sinoquet, Hervé

    2003-12-01

    A multi-scale biometric methodology for describing the architecture of fast-growing short-rotation woody crops is used to describe 2-year-old poplar clones during the second rotation. To allow for expressions of genetic variability observed within this species (i.e., growth potential, leaf morphology, coppice and canopy structure), the method has been applied to two clones: Ghoy (Gho) (Populus deltoides Bartr. ex Marsh. x Populus nigra L.) and Trichobel (Tri) (Populus trichocarpa Torr. & A. Gray x Populus trichocarpa). The method operates at the stool level and describes the plant as a collection of components (shoots and branches) described as a collection of metameric elements, themselves defined as a collection of elementary units (internode, petiole, leaf blade). Branching and connection between the plant units (i.e., plant topology) and their spatial location, orientation, size and shape (i.e., plant geometry) describe the plant architecture. The methodology has been used to describe the plant architecture of 15 selected stools per clone over a 5-month period. On individual stools, shoots have been selected from three classes (small, medium and large) spanning the diameter distribution range. Using a multi-scale approach, empirical allometric relationships were used to parameterize elementary units of the plant, topological relationships and geometry (e.g., distribution of shoot diameters on stool, shoot attributes from shoot diameter). The empirical functions form the basis of the 3-D Coppice Poplar Canopy Architecture model (3-D CPCA), which recreates the architecture and canopy structure of fast-growing coppice crops at the plot scale. Model outputs are assessed through visual and quantitative comparisons between actual photographs of the coppice canopy and simulated images. Overall, results indicate a good predictive ability of the 3-D CPCA model.
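    Allometric relationships of the kind used here are typically power laws y = a * x**b fitted in log-log space; a sketch with synthetic, noise-free data (the exponent and coefficient are illustrative, not values from the 3-D CPCA model):

```python
import math

def fit_allometry(x, y):
    """Fit y = a * x**b by ordinary least squares in log-log space, as is
    common for relating shoot diameter to other shoot attributes."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Noise-free synthetic data generated from y = 2 * x^1.5 is recovered exactly
xs = [1.0, 2.0, 4.0, 8.0]
ys = [2.0 * v ** 1.5 for v in xs]
a, b = fit_allometry(xs, ys)   # a ~ 2.0, b ~ 1.5
```

    Fitting in log space keeps the multiplicative error structure typical of biological size data.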

  9. Development of a ReaxFF potential for Pt-O systems describing the energetics and dynamics of Pt-oxide formation.

    PubMed

    Fantauzzi, Donato; Bandlow, Jochen; Sabo, Lehel; Mueller, Jonathan E; van Duin, Adri C T; Jacob, Timo

    2014-11-14

ReaxFF force field parameters describing Pt-Pt and Pt-O interactions have been developed and tested. The Pt-Pt parameters are shown to accurately account for the chemical nature, atomic structures and other materials properties of bulk platinum phases, low- and high-index platinum surfaces and nanoclusters. The Pt-O parameters reliably describe bulk platinum oxides, as well as oxygen adsorption and oxide formation on Pt(111) terraces and the {111} and {100} steps connecting them. Good agreement between the force field and both density functional theory (DFT) calculations and experimental observations is demonstrated in the relative surface free energies of high-symmetry Pt-O surface phases as a function of the oxygen chemical potential, making ReaxFF an ideal tool for more detailed investigations of more complex Pt-O surface structures. Validation for its application to studies of the kinetics and dynamics of surface oxide formation, in the context of either molecular dynamics (MD) or Monte Carlo simulations, is provided in part by a two-part investigation of oxygen diffusion on Pt(111), in which nudged elastic band (NEB) calculations and MD simulations are used to characterize diffusion processes and to determine the relevant diffusion coefficients and barriers. Finally, the power of the ReaxFF reactive force field approach in addressing surface structures well beyond the reach of routine DFT calculations is exhibited in a brief proof-of-concept study of oxygen adsorbate displacement within ordered overlayers.
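    Barriers from NEB and diffusivities from MD are typically connected through the Arrhenius form D = D0 * exp(-Ea / kB*T). A sketch of that relation; the prefactor and barrier below are hypothetical, not values from this force field.

```python
import math

KB_EV = 8.617333262e-5   # Boltzmann constant in eV/K

def arrhenius_d(d0, ea_ev, T):
    """Arrhenius diffusion coefficient D = D0 * exp(-Ea / (kB * T)),
    the standard form relating an NEB barrier (ea_ev, in eV) to the
    temperature dependence of MD-derived diffusivities."""
    return d0 * math.exp(-ea_ev / (KB_EV * T))

# Hypothetical adatom hop: D0 = 1e-3 cm^2/s, barrier 0.5 eV
d_300 = arrhenius_d(1e-3, 0.5, 300.0)
d_600 = arrhenius_d(1e-3, 0.5, 600.0)
# surface diffusion is orders of magnitude faster at 600 K than at 300 K
```

    Fitting ln D against 1/T from MD runs at several temperatures recovers both the barrier and the prefactor.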

  10. Accurate Sound Velocity Measurement in Ocean Near-Surface Layer

    NASA Astrophysics Data System (ADS)

    Lizarralde, D.; Xu, B. L.

    2015-12-01

Accurate sound velocity measurement is essential in oceanography because sound is the only wave that propagates over long distances in sea water. Because it is difficult to measure directly, sound velocity is often not measured but instead calculated from water temperature, salinity, and depth, which are much easier to obtain. This research develops a new method to directly measure the sound velocity in the ocean's near-surface layer using multi-channel seismic (MCS) hydrophones. The system consists of a device that emits a sound pulse and a long cable with hundreds of hydrophones to record it. The distance between the source and each receiver is the offset; the time it takes the pulse to arrive at each receiver is the travel time. Errors in measuring offset and travel time would limit the accuracy of the sound velocity if it were calculated from a single offset and travel time. However, by analyzing the direct arrival at hundreds of receivers, the velocity can be determined as the slope of a straight line in the travel time-offset graph. Errors in distance and time measurement only shift the line up or down and do not affect its slope. This research uses MCS data from survey MGL1408, obtained from the Marine Geoscience Data System and processed with Seismic Unix. The sound velocity can be directly measured to an accuracy of better than 1 m/s. The included graph shows the directly measured velocity versus the calculated velocity along 100 km across the Mid-Atlantic continental margin. The directly measured velocity agrees well with the velocity computed from temperature and salinity. In addition, fine variations in the sound velocity can be observed that are hardly visible in the calculated velocity. Using this methodology, both large-area acquisition and fine resolution can be achieved. This directly measured sound velocity will be a new and powerful tool in oceanography.
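    The slope-based estimate is robust precisely because a constant timing or offset error moves the direct-arrival line without tilting it. A sketch using ordinary least squares on synthetic arrivals; the straight-line model t = x/v + t0 and all numbers are illustrative, not survey MGL1408 data.

```python
def sound_velocity(offsets_m, times_s):
    """Estimate velocity as 1/slope of the least-squares line fitted to
    travel time vs offset; a constant timing error shifts the intercept
    but leaves the slope, and hence the velocity, unchanged."""
    n = len(offsets_m)
    mx = sum(offsets_m) / n
    mt = sum(times_s) / n
    slope = (sum((x - mx) * (t - mt) for x, t in zip(offsets_m, times_s))
             / sum((x - mx) ** 2 for x in offsets_m))
    return 1.0 / slope

# Synthetic direct arrivals at 1500 m/s with a constant 10 ms clock error
offsets = [100.0, 200.0, 400.0, 800.0]
times = [x / 1500.0 + 0.010 for x in offsets]
v_est = sound_velocity(offsets, times)   # ~1500.0 despite the clock error
```

    With hundreds of hydrophones rather than four, the same regression averages down random picking errors as well.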

  11. Accurate source location from P waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.

    2015-12-01

Accurate source locations of earthquakes and other seismic events are fundamental in seismology. Location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (> 100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. A grid search is combined with a 3D strain Green's tensor database to improve search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by a 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-squares misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as a posterior error estimation. We find that the scattered waves are mainly due to topography, in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P-wave data are used, the 'best' solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust with a range of random noises in the data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.

  12. Accurate source location from waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
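    The grid search at the heart of both studies can be sketched in miniature: replace the strain-Green's-tensor waveform misfit with a least-squares arrival-time misfit in a homogeneous medium. Velocity, geometry, and grid below are all hypothetical stand-ins.

```python
import math

def locate_source(stations, observed_t, grid, v=3000.0):
    """Toy grid search over candidate epicenters: return the grid node that
    minimizes the least-squares misfit between observed and predicted P
    arrival times. A homogeneous velocity v stands in for the full 3-D
    strain Green's tensor waveform misfit used in the study."""
    def misfit(src):
        return sum((math.dist(src, sta) / v - t) ** 2
                   for sta, t in zip(stations, observed_t))
    return min(grid, key=misfit)

stations = [(0.0, 0.0), (10000.0, 0.0), (0.0, 10000.0)]   # metres
true_src = (4000.0, 3000.0)
obs_t = [math.dist(true_src, s) / 3000.0 for s in stations]
grid = [(x, y) for x in range(0, 10001, 1000) for y in range(0, 10001, 1000)]
best = locate_source(stations, obs_t, grid)   # recovers (4000, 3000)
```

    Mapping the misfit over the whole grid, rather than keeping only the minimum, is what yields the 95% confidence region described above.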

  13. How accurate are the weather forecasts for Bierun (southern Poland)?

    NASA Astrophysics Data System (ADS)

    Gawor, J.

    2012-04-01

    Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It joins natural curiosity about everyday weather and scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been taken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research the automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models, i.e. COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by Naval Research Laboratory Monterey, USA; it runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland and COSMO (The Consortium for Small-scale Modelling) used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for two considered time ranges? 3) Which precipitation threshold is the most predictable? 
4) Why
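
    The verification statistics named in this abstract can be sketched in a few lines of Python. The data and the precipitation threshold below are invented for illustration; they are not the students' measurements.

    ```python
    # Mean error (bias) for continuous forecasts, plus hit rate and false alarm
    # ratio from a 2x2 contingency table for a chosen precipitation threshold.

    def mean_error(forecast, observed):
        """Bias: average of (forecast - observed); 0 means unbiased."""
        return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

    def contingency_scores(forecast, observed, threshold):
        """Hit rate and false alarm ratio for events >= threshold."""
        hits = misses = false_alarms = 0
        for f, o in zip(forecast, observed):
            f_event, o_event = f >= threshold, o >= threshold
            if f_event and o_event:
                hits += 1
            elif o_event:
                misses += 1
            elif f_event:
                false_alarms += 1
        hit_rate = hits / (hits + misses) if hits + misses else float("nan")
        far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
        return hit_rate, far

    # Hypothetical temperature forecasts vs. observations (deg C):
    temps_fc = [2.0, 5.5, -1.0, 3.0]
    temps_ob = [1.5, 6.0, -2.0, 3.5]
    bias = mean_error(temps_fc, temps_ob)
    print(bias)  # 0.125

    # Hypothetical daily precipitation (mm) with a 1.0 mm event threshold:
    rain_fc = [0.0, 4.2, 0.5, 6.0]
    rain_ob = [0.0, 5.0, 0.2, 0.1]
    hr, far = contingency_scores(rain_fc, rain_ob, threshold=1.0)
    print(hr, far)  # 1.0 0.5
    ```

    Sweeping `threshold` over several values reproduces the students' comparison of which precipitation threshold is most predictable.
    
    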

  14. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    SciTech Connect

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  15. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' 10 W either side of average power in the negative feedback condition (fastest) than the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.

  16. An accurate, efficient algorithm for calculation of quantum transport in extended structures

    SciTech Connect

    Godin, T.J.; Haydock, R.

    1994-05-01

    In device structures with dimensions comparable to carrier inelastic scattering lengths, the quantum nature of carriers will cause interference effects that cannot be modeled by conventional techniques. The basic equations that govern these "quantum" circuit elements present significant numerical challenges. The authors describe the block recursion method, an accurate, efficient method for solving the quantum circuit problem. They demonstrate this method by modeling dirty inversion layers.

  17. Accurate determination of atomic structure of multiwalled carbon nanotubes by nondestructive nanobeam electron diffraction

    SciTech Connect

    Liu Zejian; Zhang Qi; Qin Luchang

    2005-05-09

    We report a method that allows direct, systematic, and accurate determination of the atomic structure of multiwalled carbon nanotubes by analyzing the scattering intensities on the nonequatorial layer lines in the electron diffraction pattern. Complete structure determination of a quadruple-walled carbon nanotube is described as an example, and it was found that the intertubular distance varied from 0.36 nm to 0.5 nm with a mean value of 0.42 nm.

  18. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  19. Accurate GPS Time-Linked data Acquisition System (ATLAS II) user's manual.

    SciTech Connect

    Jones, Perry L.; Zayas, Jose R.; Ortiz-Moyet, Juan

    2004-02-01

    The Accurate Time-Linked data Acquisition System (ATLAS II) is a small, lightweight, time-synchronized, robust data acquisition system that is capable of acquiring simultaneous long-term time-series data from both a wind turbine rotor and ground-based instrumentation. This document is a user's manual for the ATLAS II hardware and software. It describes the hardware and software components of ATLAS II, and explains how to install and execute the software.

  20. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, C.V.; Killian, E.W.; Grafwallner, E.G.; Kynaston, R.L.; Johnson, L.O.; Randolph, P.D.

    1996-09-03

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector. 7 figs.

  1. Rapid accurate isotopic measurements on boron in boric acid and boron carbide.

    PubMed

    Duchateau, N L; Verbruggen, A; Hendrickx, F; De Bièvre, P

    1986-04-01

    A procedure is described whereby rapid and accurate isotopic measurements can be performed on boron in boric acid and boron carbide after fusion of these compounds with calcium carbonate. It allows the determination of the isotopic composition of boron in boric acid and boron carbide and the direct assay of boron or the ¹⁰B isotope in boron carbide by isotope-dilution mass spectrometry.

  2. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, Charles V.; Killian, E. Wayne; Grafwallner, Ervin G.; Kynaston, Ronnie L.; Johnson, Larry O.; Randolph, Peter D.

    1996-01-01

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector.

  3. A simplified hydroethidine method for fast and accurate detection of superoxide production in isolated mitochondria.

    PubMed

    Back, Patricia; Matthijssens, Filip; Vanfleteren, Jacques R; Braeckman, Bart P

    2012-04-01

    Because superoxide is involved in various physiological processes, many efforts have been made to improve its accurate quantification. We optimized and validated a superoxide-specific and -sensitive detection method. The protocol is based on fluorescence detection of the superoxide-specific hydroethidine (HE) oxidation product, 2-hydroxyethidium. We established a method for the quantification of superoxide production in isolated mitochondria without the need for acetone extraction and purification chromatography as described in previous studies.

  4. How many spectral bands are necessary to describe the directional reflectance of beach sands?

    NASA Astrophysics Data System (ADS)

    Doctor, Katarina Z.; Ackleson, Steven G.; Bachmann, Charles M.; Gray, Deric J.; Montes, Marcos J.; Fusina, Robert A.; Houser, Paul R.

    2016-05-01

    Spectral variability in the visible, near-infrared and shortwave directional reflectance factor of beach sands and freshwater sheet flow is examined using principal component and correlation matrix analysis of in situ measurements. In previous work we concluded that the hyperspectral bidirectional reflectance distribution function (BRDF) of beach sands in the absence of sheet flow exhibits weak spectral variability, the majority of which can be described with three broad spectral bands with wavelength ranges of 350-450 nm, 700-1350 nm, and 1450-2400 nm [1]. Observing sheet flow on sand we find that a thin layer of water enhances reflectance in the specular direction at all wavelengths and that spectral variability may be described using four spectral band regions of 350-450 nm, 500-950 nm, 950-1350 nm, and 1450-2400 nm. Spectral variations are more evident in sand surfaces of greater visual roughness than in smooth surfaces, regardless of sheet flow.

  5. What types of terms do people use when describing an individual's personality?

    PubMed

    Leising, Daniel; Scharloth, Joachim; Lohse, Oliver; Wood, Dustin

    2014-09-01

    An important yet untested assumption within personality psychology is that more important person characteristics are more densely reflected in language. We investigated how ratings of importance and other term properties are associated with one another and with a term's frequency of use. Research participants were asked to provide terms that described individuals they knew, which resulted in a set of 624 adjectives. These terms were independently rated for importance, social desirability, observability, stateness versus traitness, level of abstraction, and base rate. Terms rated as describing more important person characteristics were in fact used more often by the participants in the sample and in a large corpus of online communications (close to 500 million words). More frequently used terms and more positive terms were also rated as being more abstract, more traitlike, and more widely applicable (i.e., having a greater base rate). We discuss the implications of these findings with regard to person perception in general.

  6. Determining suitable image resolutions for accurate supervised crop classification using remote sensing data

    NASA Astrophysics Data System (ADS)

    Löw, Fabian; Duveiller, Grégory

    2013-10-01

    Mapping the spatial distribution of crops has become a fundamental input for agricultural production monitoring using remote sensing. However, the multi-temporality that is often necessary to accurately identify crops and to monitor crop growth generally comes at the expense of coarser observation supports, and can lead to increasingly erroneous class allocations caused by mixed pixels. For a given application like crop classification, the spatial resolution requirement (e.g. in terms of a maximum tolerable pixel size) differs considerably over different landscapes. To analyse the spatial resolution requirements for accurate crop identification via image classification, this study builds upon and extends a conceptual framework established in a previous work [1]. This framework allows defining quantitatively the spatial resolution requirements for crop monitoring based on simulating how agricultural landscapes, and more specifically the fields covered by a crop of interest, are seen by instruments with increasingly coarser resolving power. The concept of crop specific pixel purity, defined as the degree of homogeneity of the signal encoded in a pixel with respect to the target crop type, is used to analyse how mixed the pixels can be (as they become coarser), without undermining their capacity to describe the desired surface properties. In this case, this framework has been steered towards answering the question: "What is the spatial resolution requirement for crop identification via supervised image classification, in particular minimum and coarsest acceptable pixel sizes, and how do these requirements change over different landscapes?" The framework is applied over four contrasting agro-ecological landscapes in Middle Asia. Inputs to the experiment were eight multi-temporal images from the RapidEye sensor, the simulated pixel sizes range from 6.5 m to 396.5 m. Constraining parameters for crop identification were defined by setting thresholds for classification

  7. Accurate molecular structure and spectroscopic properties for nucleobases: A combined computational - microwave investigation of 2-thiouracil as a case study

    PubMed Central

    Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Peña, Isabel; Cabezas, Carlos; Alonso, José L.

    2015-01-01

    The computational composite scheme purposely set up for accurately describing the electronic structure and spectroscopic properties of small biomolecules has been applied to the first study of the rotational spectrum of 2-thiouracil. The experimental investigation was made possible thanks to the combination of the laser ablation technique with Fourier Transform Microwave spectrometers. The joint experimental-computational study allowed us to determine accurate molecular structure and spectroscopic properties for the title molecule but, more importantly, it demonstrates a reliable approach for the accurate investigation of isolated small biomolecules. PMID:24002739

  8. History and progress on accurate measurements of the Planck constant.

    PubMed

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10^-34 J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N(A). As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred-year-old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10^8 from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the

  9. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Chen, J.; Kunkel, V.; Skov, T. M.

    2015-12-01

    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24-48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta (its structure and magnetic field vector in three dimensions) using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  10. Merging quantum-chemistry with B-splines to describe molecular photoionization

    NASA Astrophysics Data System (ADS)

    Argenti, L.; Marante, C.; Klinker, M.; Corral, I.; Gonzalez, J.; Martin, F.

    2016-05-01

    Theoretical description of observables in attosecond pump-probe experiments requires a good representation of the system's ionization continuum. For polyelectronic atoms and molecules, however, this is still a challenge, due to the complicated short-range structure of correlated electronic wavefunctions. Whereas quantum chemistry packages (QCP) implementing sophisticated methods to compute bound electronic molecular states are well established, comparable tools for the continuum are not widely available yet. To tackle this problem, we have developed a new approach that, by means of a hybrid Gaussian-B-spline basis, interfaces existing QCPs with close-coupling scattering methods. To illustrate the viability of this approach, we report results for the multichannel ionization of the helium atom and of the hydrogen molecule that are in excellent agreement with existing accurate benchmarks. These findings, together with the flexibility of QCPs, make of this approach a good candidate for the theoretical study of the ionization of poly-electronic systems. FP7/ERC Grant XCHEM 290853.

  11. Accurately measuring MPI broadcasts in a computational grid

    SciTech Connect

    Karonis N T; de Supinski, B R

    1999-05-06

    An MPI library's implementation of broadcast communication can significantly affect the performance of applications built with that library. In order to choose between similar implementations or to evaluate available libraries, accurate measurements of broadcast performance are required. As we demonstrate, existing methods for measuring broadcast performance are either inaccurate or inadequate. Fortunately, we have designed an accurate method for measuring broadcast performance, even in a challenging grid environment. Measuring broadcast performance is not easy. Simply sending one broadcast after another allows them to proceed through the network concurrently, thus resulting in inaccurate per-broadcast timings. Existing methods either fail to eliminate this pipelining effect or eliminate it by introducing overheads that are as difficult to measure as the performance of the broadcast itself. This problem becomes even more challenging in grid environments. Latencies along different links can vary significantly. Thus, an algorithm's performance is difficult to predict from its communication pattern. Even when accurate prediction is possible, the pattern is often unknown. Our method introduces a measurable overhead to eliminate the pipelining effect, regardless of variations in link latencies. Accurate measurements make it possible to choose between different available implementations. Also, accurate and complete measurements could guide use of a given implementation to improve application performance. These choices will become even more important as grid-enabled MPI libraries [6, 7] become more common, since bad choices are likely to cost significantly more in grid environments. In short, the distributed processing community needs accurate, succinct and complete measurements of collective communications performance. Since successive collective communications can often proceed concurrently, accurately measuring them is difficult. Some benchmarks use knowledge of the communication algorithm to predict the

  12. Observing System Simulation Experiments

    NASA Technical Reports Server (NTRS)

    Prive, Nikki

    2015-01-01

    This presentation gives an overview of Observing System Simulation Experiments (OSSEs). The components of an OSSE are described, along with discussion of the process for validating, calibrating, and performing experiments.

  13. Accurate body composition measures from whole-body silhouettes

    PubMed Central

    Xie, Bowen; Avila, Jesus I.; Ng, Bennett K.; Fan, Bo; Loo, Victoria; Gilsanz, Vicente; Hangartner, Thomas; Kalkwarf, Heidi J.; Lappe, Joan; Oberfield, Sharon; Winer, Karen; Zemel, Babette; Shepherd, John A.

    2015-01-01

    Purpose: Obesity and its consequences, such as diabetes, are global health issues that burden about 171 × 10^6 adult individuals worldwide. Fat mass index (FMI, kg/m2), fat-free mass index (FFMI, kg/m2), and percent fat mass may be useful to evaluate under- and overnutrition and muscle development in a clinical or research environment. This proof-of-concept study tested whether frontal whole-body silhouettes could be used to accurately measure body composition parameters using active shape modeling (ASM) techniques. Methods: Binary shape images (silhouettes) were generated from the skin outline of dual-energy x-ray absorptiometry (DXA) whole-body scans of 200 healthy children of ages from 6 to 16 yr. The silhouette shape variation from the average was described using an ASM, which computed principal components for unique modes of shape. Predictive models were derived from the modes for FMI, FFMI, and percent fat using stepwise linear regression. The models were compared to simple models using demographics alone [age, sex, height, weight, and body mass index z-scores (BMIZ)]. Results: The authors found that 95% of the shape variation of the sampled population could be explained using 26 modes. In most cases, the body composition variables could be predicted similarly between demographics-only and shape-only models. However, the combination of shape with demographics improved all estimates of boys and girls compared to the demographics-only model. The best prediction models for FMI, FFMI, and percent fat agreed with the actual measures with R2 adj. (the coefficient of determination adjusted for the number of parameters used in the model equation) values of 0.86, 0.95, and 0.75 for boys and 0.90, 0.89, and 0.69 for girls, respectively. Conclusions: Whole-body silhouettes in children may be useful to derive estimates of body composition including FMI, FFMI, and percent fat. These results support the feasibility of measuring body composition variables from simple
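
    The pipeline this abstract describes (principal components of silhouette shape vectors, then linear regression of a body-composition variable on the leading modes) can be sketched with synthetic data. Everything below is an invented stand-in for the DXA silhouettes, not the study's data or code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-ins for silhouette shape vectors (e.g., flattened outline
    # coordinates); the real study used 200 pediatric DXA silhouettes.
    n_subjects, n_points = 200, 40
    shapes = rng.normal(size=(n_subjects, n_points))

    # Shape modes via PCA: singular vectors of the mean-centred shape matrix.
    centred = shapes - shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    explained = (s**2) / (s**2).sum()

    # Keep enough modes to explain 95% of shape variance (26 in the paper).
    k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
    mode_scores = centred @ Vt[:k].T          # per-subject mode coefficients

    # Linear model predicting a body-composition variable (here a synthetic
    # FMI built from the modes plus noise), analogous to the paper's
    # stepwise regression on the mode scores.
    fmi = mode_scores @ rng.normal(size=k) + rng.normal(scale=0.1, size=n_subjects)
    coef, *_ = np.linalg.lstsq(mode_scores, fmi, rcond=None)
    pred = mode_scores @ coef
    r2 = 1 - ((fmi - pred)**2).sum() / ((fmi - fmi.mean())**2).sum()
    print(k, round(r2, 2))
    ```

    With real silhouettes the stepwise selection would also drop uninformative modes; here plain least squares on all retained modes suffices to show the structure of the method.
    
    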

  14. New simple method for fast and accurate measurement of volumes

    NASA Astrophysics Data System (ADS)

    Frattolillo, Antonio

    2006-04-01

    A new simple method is presented, which allows us to measure in just a few minutes but with reasonable accuracy (less than 1%) the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. To this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. Such a variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, as carefully measured by means of a high precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, thus contracting the variable volume until the pressure exactly retrieves its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The method proposed actually does not require the test gas to be rigorously held at a constant temperature, thus resulting in a huge simplification as compared to complex arrangements commonly used in metrology (gas expansion method), which can grant extremely accurate measurement but requires rather expensive equipments and results in time consuming methods, being therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process. 
The
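
    The measurement principle can be checked with a small ideal-gas calculation (invented numbers; isothermal ideal-gas behaviour assumed, as in the idealized description above):

    ```python
    # The gas starts in the variable volume at pressure p0, expands into the
    # evacuated unknown volume, and the variable volume is then contracted
    # until p0 is restored. Since the gas amount and temperature are fixed,
    # Boyle's law (p * V = const) fixes every state below.

    p0 = 100_000.0      # Pa, initial pressure in the variable volume
    v_var = 2.0e-3      # m^3, initial setting of the variable volume
    v_unknown = 0.7e-3  # m^3, the volume we pretend not to know

    # After opening the valve, the gas fills both volumes:
    p_drop = p0 * v_var / (v_var + v_unknown)

    # Restoring the pressure to p0 forces the total volume back to its
    # initial value v_var, so the contraction equals the unknown volume:
    delta_v = (v_var + v_unknown) - (p0 * v_var / p0)
    print(p_drop, delta_v)
    ```

    The point of the feedback loop in the actual instrument is that `delta_v` is read directly from the calibrated volume adjustment, so no dead-space correction is needed.
    
    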

  15. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
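
    The two relations the abstract leans on can be written down directly: the textbook form of Preston's equation for removal rate, and the definition of the dynamic friction coefficient from triaxial force data. All numerical values below are invented for illustration and are not OptiPro calibration data.

    ```python
    def preston_removal_rate(k_p, pressure, velocity):
        """Preston's equation: removal rate = k_p * P * V, with k_p the
        Preston coefficient, P the contact pressure, V the relative speed."""
        return k_p * pressure * velocity

    def friction_coefficient(tangential_force, normal_force):
        """Dynamic coefficient of friction mu = F_t / F_n, as extracted
        from triaxial force measurements under sliding contact."""
        return tangential_force / normal_force

    # Hypothetical measurement: 4.2 N tangential force at 12 N normal load.
    mu = friction_coefficient(tangential_force=4.2, normal_force=12.0)

    # Hypothetical removal-rate prediction for one belt state.
    rate = preston_removal_rate(k_p=1.0e-13, pressure=5.0e4, velocity=3.0)
    print(round(mu, 2), rate)  # 0.35 1.5e-08
    ```

    Tracking how `mu` (and hence the effective `k_p`) drifts with belt wear is what lets the removal function be re-predicted per polishing iteration.
    
    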

  16. A Markov chain probability model to describe wet and dry patterns of weather at Colombo

    NASA Astrophysics Data System (ADS)

    Sonnadara, D. U. J.; Jayewardene, D. R.

    2015-01-01

    The hypothesis that the wet and dry patterns of daily precipitation observed in Colombo can be modeled by a first order Markov chain model was tested using daily rainfall data for a 60-year period (1941-2000). The probability of a day being wet or dry was defined with respect to the status of the previous day. Probabilities were assumed to be stationary within a given month. Except for isolated single events, the model is shown to describe the observed sequence of wet and dry spells satisfactorily depending on the season. The accuracy of modeling wet spells is high compared to dry spells. When the model-predicted mean length of wet spells for each month was compared with the estimated values from the data set, a reasonable agreement between the model prediction and estimation is seen (within ±0.1). In general, the data show a higher disagreement for the months having longer dry spells. The mean annual duration of wet spells is 2.6 days while the mean annual duration of dry spells is 3.8 days. It is shown that the model can be used to explore the return periods of long wet and dry spells. We conclude from the study that the Markov chain of order 1 is adequate to describe wet and dry patterns of weather in Colombo.
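
    A first-order two-state Markov chain of the kind tested above is easy to simulate. The transition probabilities below are invented, not the Colombo estimates; the check is that the simulated mean wet-spell length matches the geometric-distribution prediction 1/(1 - p_ww).

    ```python
    import random

    # p_ww = P(wet today | wet yesterday), p_dw = P(wet today | dry yesterday).
    p_ww, p_dw = 0.62, 0.21        # hypothetical monthly probabilities
    expected_wet_spell = 1.0 / (1.0 - p_ww)

    random.seed(1)
    state = 0                      # 0 = dry, 1 = wet
    spells, current = [], 0
    for _ in range(200_000):       # simulate a long daily sequence
        p = p_ww if state else p_dw
        nxt = 1 if random.random() < p else 0
        if nxt:
            current += 1           # extend the current wet spell
        elif current:
            spells.append(current) # a wet spell just ended
            current = 0
        state = nxt

    simulated = sum(spells) / len(spells)
    print(round(expected_wet_spell, 2), round(simulated, 2))
    ```

    Return periods of long spells, as explored in the paper, follow the same way: under this model the probability of a wet spell of length at least n is p_ww**(n-1).
    
    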

  17. A simple framework to describe the regulation of gene expression in prokaryotes.

    PubMed

    Alves, Filipa; Dilão, Rui

    2005-05-01

    Based on the bimolecular mass action law and the derived mass conservation laws, we propose a mathematical framework in order to describe the regulation of gene expression in prokaryotes. It is shown that the derived models have all the qualitative properties of the activation and inhibition regulatory mechanisms observed in experiments. The basic construction considers genes as templates for protein production, where regulation processes result from activators or repressors connecting to DNA binding sites. All the parameters in the models have a straightforward biological meaning. After describing the general properties of the basic mechanisms of positive and negative gene regulation, we apply this framework to the self-regulation of the trp operon and to the genetic switch involved in the regulation of the lac operon. One of the consequences of this approach is the existence of conserved quantities depending on the initial conditions that tune bifurcations of fixed points. This leads naturally to a simple explanation of threshold effects as observed in some experiments. PMID:15948632
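
    A stripped-down version of this kind of mass-action regulation model can be integrated in plain Python. This is a generic negative-regulation sketch with invented parameters and a quasi-steady-state binding assumption, not the authors' exact equations: a repressor R binds the promoter D (D + R <-> DR), only free D is transcribed, and protein P is degraded linearly.

    ```python
    # With fast binding, the free-promoter fraction is 1 / (1 + R / K), where K
    # is the dissociation constant; P is produced from free promoter and
    # degraded at rate k_deg. Forward Euler integration, no libraries needed.

    K = 2.0        # dissociation constant of the repressor-DNA complex
    k_syn = 5.0    # synthesis rate from a free promoter
    k_deg = 0.5    # first-order protein degradation rate

    def dP_dt(P, R):
        free_fraction = 1.0 / (1.0 + R / K)
        return k_syn * free_fraction - k_deg * P

    def steady_state(R, dt=0.01, steps=5000):
        """Integrate to (numerical) steady state for a fixed repressor level R."""
        P = 0.0
        for _ in range(steps):
            P += dt * dP_dt(P, R)
        return P

    # More repressor -> lower steady-state expression, the qualitative
    # behaviour of negative regulation such as in the trp operon.
    low, high = steady_state(R=0.1), steady_state(R=10.0)
    print(round(low, 2), round(high, 2))  # 9.52 1.67
    ```

    The analytic fixed point is P* = (k_syn / k_deg) / (1 + R / K), which the integration reproduces; the model's conserved quantities (total DNA, total repressor) are what the abstract identifies as tuning the bifurcations.
    
    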

  18. A Physiologically Based Pharmacokinetic Model to Describe Artemether Pharmacokinetics in Adult and Pediatric Patients.

    PubMed

    Lin, Wen; Heimbach, Tycho; Jain, Jay Prakash; Awasthi, Rakesh; Hamed, Kamal; Sunkara, Gangadhar; He, Handan

    2016-10-01

    Artemether is co-administered with lumefantrine as part of a fixed-dose combination therapy for malaria in both adult and pediatric patients. However, artemether exposure is higher in younger infants (1-3 months) with a lower body weight (<5 kg) as compared to older infants (3-6 months) with a higher body weight (≥5 to <10 kg), children, and adults. In contrast, lumefantrine exposure is similar in all age groups. This article describes the clinically observed artemether exposure data in pediatric populations across various age groups (1 month to 12 years) and body weights (<5 or ≥5 kg) using physiologically based pharmacokinetic (PBPK) mechanistic models. A PBPK model was developed using artemether physicochemical, biopharmaceutic, and metabolic properties together with known enzyme ontogeny and pediatric physiology. The model was verified using clinical data from adult patients after multiple doses of oral artemether, and was then applied to simulate the exposure in children and infants. The simulated PBPK concentration-time profiles captured observed clinical data. Consistent with the clinical data, the PBPK model simulations indicated a higher artemether exposure for younger infants with lower body weight. A PBPK model developed for artemether reliably described the clinical data from adult and pediatric patients. PMID:27506269

  19. Development of a Composite Non-Electrostatic Surface Complexation Model Describing Plutonium Sorption to Aluminosilicates

    SciTech Connect

    Powell, B A; Kersting, A; Zavarin, M; Zhao, P

    2008-10-28

Due to their ubiquity in nature and chemical reactivity, aluminosilicate minerals play an important role in retarding actinide subsurface migration. However, very few studies have examined Pu interaction with clay minerals in sufficient detail to produce a credible mechanistic model of its behavior. In this work, Pu(IV) and Pu(V) interactions with silica, gibbsite (Al oxide), and Na-montmorillonite (smectite clay) were examined as a function of time and pH. Sorption of Pu(IV) and Pu(V) to gibbsite and silica increased with pH (4 to 10). The Pu(V) sorption edge shifted to lower pH values over time and approached that of Pu(IV). This behavior is apparently due to surface-mediated reduction of Pu(V) to Pu(IV). Surface complexation constants describing Pu(IV)/Pu(V) sorption to aluminol and silanol groups were developed from the silica and gibbsite sorption experiments and applied to the montmorillonite dataset. The model provided an acceptable fit to the montmorillonite sorption data for Pu(V). In order to accurately predict Pu(IV) sorption to montmorillonite, the model required inclusion of ion exchange. The objective of this work is to measure the sorption of Pu(IV) and Pu(V) to silica, gibbsite, and smectite (montmorillonite). Aluminosilicate minerals are ubiquitous at the Nevada National Security Site, and improving our understanding of Pu sorption to aluminosilicates (smectite clays in particular) is essential to the accurate prediction of Pu transport rates. These data will improve the mechanistic approach for modeling the hydrologic source term (HST) and provide sorption Kd parameters for use in CAU models. In both alluvium and tuff, aluminosilicates have been found to play a dominant role in radionuclide retardation because their abundance is typically more than an order of magnitude greater than that of other potential sorbing minerals such as iron and manganese oxides (e.g. Vaniman et al., 1996).
The sorption database used in recent HST models (Carle et al., 2006

  20. Ligand-Induced Protein Responses and Mechanical Signal Propagation Described by Linear Response Theories

    PubMed Central

    Yang, Lee-Wei; Kitao, Akio; Huang, Bang-Chieh; Gō, Nobuhiro

    2014-01-01

In this study, a general linear response theory (LRT) is formulated to describe time-dependent and -independent protein conformational changes upon CO binding with myoglobin. Using the theory, we are able to monitor protein relaxation in two stages. The slower relaxation is found to occur from 4.4 to 81.2 picoseconds, and the time constants characterized for a couple of aromatic residues agree with those observed by UV Resonance Raman (UVRR) spectrometry and time-resolved X-ray crystallography. The faster “early responses”, triggered as early as 400 femtoseconds, can be best described by the theory when impulse forces are used. The newly formulated theory describes the mechanical propagation following ligand binding as a function of time, space, and the type of perturbation force. The “disseminators”, defined as the residues that propagate signals throughout the molecule the fastest among all the residues in the protein when perturbed, are found to be evolutionarily conserved, and mutations of these residues have been shown to markedly change the CO rebinding kinetics in myoglobin. PMID:25229149

  1. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emission rate is given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. 
We present detailed methane flux
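The far-field ratio calculation at the heart of the tracer dilution method fits in a few lines; the excess concentrations and release rate below are invented for illustration, and the molar masses assume acetylene as the tracer:

```python
# Far-field tracer dilution: once the two plumes are well mixed, the
# methane mass emission rate is the excess-concentration ratio times the
# known tracer release rate. Values are invented; molar masses assume
# acetylene (C2H2) as the tracer.

MW_CH4, MW_C2H2 = 16.04, 26.04   # g/mol

def methane_flux_kg_per_h(ch4_excess_ppb, tracer_excess_ppb,
                          tracer_release_kg_per_h):
    """Convert the downwind mole ratio into a mass emission rate."""
    mole_ratio = ch4_excess_ppb / tracer_excess_ppb
    return mole_ratio * tracer_release_kg_per_h * (MW_CH4 / MW_C2H2)

flux = methane_flux_kg_per_h(150.0, 50.0, 2.0)   # → about 3.7 kg CH4/h
```

Because both plumes experience the same transport and dispersion, wind variability cancels out of the ratio, which is exactly why the method is robust in the field.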

  2. Observing Double Stars

    NASA Astrophysics Data System (ADS)

    Genet, Russell M.; Fulton, B. J.; Bianco, Federica B.; Martinez, John; Baxter, John; Brewer, Mark; Carro, Joseph; Collins, Sarah; Estrada, Chris; Johnson, Jolyon; Salam, Akash; Wallen, Vera; Warren, Naomi; Smith, Thomas C.; Armstrong, James D.; McGaughey, Steve; Pye, John; Mohanan, Kakkala; Church, Rebecca

    2012-05-01

Double stars have been systematically observed since William Herschel initiated his program in 1779. In 1803 he reported that, to his surprise, many of the systems he had been observing for a quarter century were gravitationally bound binary stars. In 1830 the first binary orbital solution was obtained, leading eventually to the determination of stellar masses. Double star observations have been a prolific field, with observations and discoveries - often made by students and amateurs - routinely published in a number of specialized journals such as the Journal of Double Star Observations. All published double star observations from Herschel's to the present have been incorporated in the Washington Double Star Catalog. In addition to reviewing the history of visual double stars, we discuss four observational technologies and illustrate these with our own observational results from both California and Hawaii on telescopes ranging from small SCTs to the 2-meter Faulkes Telescope North on Haleakala. Two of these technologies are visual observations, aimed primarily at "hands-on" student science education leading to publication, and CCD observations of both bright and very faint doubles. The other two are recent technologies that have launched a double star renaissance: lucky imaging and speckle interferometry, both of which can use electron-multiplying CCD cameras to allow short (30 ms or less) exposures that are read out at high speed with very low noise. Analysis of thousands of high-speed exposures allows normal seeing limitations to be overcome so that very close doubles can be accurately measured.

  3. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
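The apportionment the TAFs enable can be sketched as a product of factors; the factor values below are placeholders, not the Detroit estimates from the study:

```python
# Sketch of TAF-based apportionment: an annual average daily traffic
# (AADT) count is scaled to a given hour by month, day-type and hour
# factors, each averaging ~1.0. Factor values are placeholders, not the
# study's Detroit estimates.

def hourly_volume(aadt, month_taf, day_taf, hour_taf):
    """Expected vehicles in one hour of a given month and day type."""
    return aadt * month_taf * day_taf * hour_taf / 24.0

# Weekday morning rush on a 40,000-AADT arterial:
v = hourly_volume(40_000, month_taf=1.05, day_taf=1.10, hour_taf=1.8)
```

In practice separate factor sets would be kept for total versus commercial vehicles and for weekdays, Saturdays, Sundays, and holidays, as the abstract notes.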

  4. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    PubMed Central

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-01-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates. PMID:25844042

  5. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2013-05-01

Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and have been used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated far more accurately than with the latter alone. Under the assumption that the aquifer basis is sub-horizontal, the mean transit time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater transit time that can refine those obtained from tritium measurements. The approach is illustrated in a case study predicting atrazine concentration trends in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the actual time of trend reversal and the rate of change agreed extremely well with the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was, however, poorer for the springs displaying the most gentle recessions, possibly indicating a stronger influence of continuous groundwater recharge during the summer months.
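As a sketch of the idea under the simplest assumption, a linear reservoir with Q(t) = Q0·exp(−αt) and mean transit time 1/α, the recession slope of ln Q against time yields the transit-time parameter directly; the discharge series here is synthetic, not Luxembourg Sandstone data:

```python
# Sketch under the simplest recession assumption: a linear reservoir,
# Q(t) = Q0 * exp(-alpha * t), whose mean transit time is 1/alpha. The
# slope of ln(Q) against time during recession gives alpha directly.
# The discharge series is synthetic.

import math

def mean_transit_time(times, discharges):
    """Least-squares slope of ln(Q) vs t is -alpha; return 1/alpha."""
    n = len(times)
    logs = [math.log(q) for q in discharges]
    t_bar = sum(times) / n
    l_bar = sum(logs) / n
    slope = (sum((t - t_bar) * (l - l_bar) for t, l in zip(times, logs))
             / sum((t - t_bar) ** 2 for t in times))
    return -1.0 / slope

t_days = [0, 30, 60, 90, 120]
q = [10.0 * math.exp(-0.0139 * t) for t in t_days]   # dry-season recession
tau = mean_transit_time(t_days, q)                   # → ~72 days
```

An estimate like this, obtained independently of any tracer, is what allows the tritium-calibrated transit time distribution to be constrained more tightly.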

  6. First-principles-based multiscale, multiparadigm molecular mechanics and dynamics methods for describing complex chemical processes.

    PubMed

    Jaramillo-Botero, Andres; Nielsen, Robert; Abrol, Ravi; Su, Julius; Pascal, Tod; Mueller, Jonathan; Goddard, William A

    2012-01-01

    We expect that systematic and seamless computational upscaling and downscaling for modeling, predicting, or optimizing material and system properties and behavior with atomistic resolution will eventually be sufficiently accurate and practical that it will transform the mode of development in the materials, chemical, catalysis, and Pharma industries. However, despite truly dramatic progress in methods, software, and hardware, this goal remains elusive, particularly for systems that exhibit inherently complex chemistry under normal or extreme conditions of temperature, pressure, radiation, and others. We describe here some of the significant progress towards solving these problems via a general multiscale, multiparadigm strategy based on first-principles quantum mechanics (QM), and the development of breakthrough methods for treating reaction processes, excited electronic states, and weak bonding effects on the conformational dynamics of large-scale molecular systems. These methods have resulted directly from filling in the physical and chemical gaps in existing theoretical and computational models, within the multiscale, multiparadigm strategy. 
To illustrate the procedure we demonstrate the application and transferability of such methods on an ample set of challenging problems that span multiple fields and system length- and timescales, and that lie beyond the realm of existing computational or, in some cases, experimental approaches, including understanding the solvation effects on the reactivity of organic and organometallic structures, predicting transmembrane protein structures, understanding carbon nanotube nucleation and growth, understanding the effects of electronic excitations in materials subjected to extreme conditions of temperature and pressure, following the dynamics and energetics of long-term conformational evolution of DNA macromolecules, and predicting the long-term mechanisms involved in enhancing the mechanical response of polymer-based hydrogels.

  7. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate measurements of different fiber lengths are performed using the time-of-flight technique. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to correct the fiber refractive index to allow accurate fiber length measurement.
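The length computation itself is a one-liner once the time interval is measured; the group index below is a typical value for standard single-mode fiber, not the calibrated value from the paper:

```python
# Time-of-flight length measurement: a pulse's round trip takes
# t = 2 * n_g * L / c, so L = c * t / (2 * n_g). The group index is a
# typical figure for standard single-mode fiber, assumed for this sketch.

C = 299_792_458.0   # speed of light in vacuum, m/s

def fiber_length_m(round_trip_time_s, group_index=1.4682):
    return C * round_trip_time_s / (2.0 * group_index)

L = fiber_length_m(98e-6)   # ~98 us round trip → roughly 10 km
```

With n_g ≈ 1.468 the one-way group delay is about 4.9 µs/km, so timing uncertainty converts directly into length uncertainty, which is why the GPS-disciplined time base matters.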

  8. A high order accurate difference scheme for complex flow fields

    SciTech Connect

    Dexun Fu; Yanwen Ma

    1997-06-01

    A high order accurate finite difference method for direct numerical simulation of coherent structure in the mixing layers is presented. The reason for oscillation production in numerical solutions is analyzed. It is caused by a nonuniform group velocity of wavepackets. A method of group velocity control for the improvement of the shock resolution is presented. In numerical simulation the fifth-order accurate upwind compact difference relation is used to approximate the derivatives in the convection terms of the compressible N-S equations, a sixth-order accurate symmetric compact difference relation is used to approximate the viscous terms, and a three-stage R-K method is used to advance in time. In order to improve the shock resolution the scheme is reconstructed with the method of diffusion analogy which is used to control the group velocity of wavepackets. 18 refs., 12 figs., 1 tab.

  9. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of the second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  10. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

Getting a land vehicle’s accurate position, azimuth and attitude rapidly is significant for vehicle based weapons’ combat effectiveness. In this paper, a new approach to acquiring a vehicle’s accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle’s accurate position, azimuth and attitude from the rough position and orientation provided by vehicle based navigation systems and no less than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm’s iterative initial value, so it does not impose high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high-precision applications. This paper describes the system’s working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249

  11. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

Getting a land vehicle's accurate position, azimuth and attitude rapidly is significant for vehicle based weapons' combat effectiveness. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth and attitude from the rough position and orientation provided by vehicle based navigation systems and no less than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value, so it does not impose high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249

  12. Accurate Cell Division in Bacteria: How Does a Bacterium Know Where its Middle Is?

    NASA Astrophysics Data System (ADS)

    Howard, Martin; Rutenberg, Andrew

    2004-03-01

I will discuss the physical principles underlying the acquisition of accurate positional information in bacteria. A good application of these ideas is to the rod-shaped bacterium E. coli, which divides precisely at its cellular midplane. This positioning is controlled by the Min system of proteins. These proteins coherently oscillate from end to end of the bacterium. I will present a reaction-diffusion model that describes the diffusion of the Min proteins and their binding/unbinding from the cell membrane. The system possesses an instability that spontaneously generates the Min oscillations, which control accurate placement of the midcell division site. I will then discuss the role of fluctuations in protein dynamics, and investigate whether fluctuations set optimal protein concentration levels. Finally I will examine cell division in a different bacterium, B. subtilis, where different physical principles are used to regulate accurate cell division. See: Howard, Rutenberg, de Vet: Dynamic compartmentalization of bacteria: accurate division in E. coli. Phys. Rev. Lett. 87 278102 (2001). Howard, Rutenberg: Pattern formation inside bacteria: fluctuations due to the low copy number of proteins. Phys. Rev. Lett. 90 128102 (2003). Howard: A mechanism for polar protein localization in bacteria. J. Mol. Biol. 335 655-663 (2004).
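The building blocks of such a reaction-diffusion model, cytoplasmic diffusion plus membrane binding and unbinding, can be sketched with a generic explicit integrator; the rate constants are arbitrary, and the linear exchange used here deliberately omits the nonlinear feedback that generates the actual Min oscillations:

```python
# Generic building blocks of such a model: 1-D cytoplasmic diffusion plus
# membrane binding/unbinding, advanced with an explicit Euler step. Rate
# constants are arbitrary; the linear exchange omits the nonlinear
# feedback that produces the real Min oscillations.

def step(cyto, memb, D, k_on, k_off, dx, dt):
    """One update with no-flux boundaries at the cell poles."""
    n = len(cyto)
    new_c, new_m = cyto[:], memb[:]
    for i in range(n):
        left = cyto[i - 1] if i > 0 else cyto[i]
        right = cyto[i + 1] if i < n - 1 else cyto[i]
        lap = (left - 2.0 * cyto[i] + right) / dx ** 2
        exchange = k_on * cyto[i] - k_off * memb[i]
        new_c[i] = cyto[i] + dt * (D * lap - exchange)
        new_m[i] = memb[i] + dt * exchange
    return new_c, new_m

cyto, memb = [1.0] * 50, [0.0] * 50   # all protein cytoplasmic at t = 0
for _ in range(1000):
    cyto, memb = step(cyto, memb, D=2.5, k_on=0.5, k_off=0.2, dx=0.1, dt=1e-4)
```

Total protein is conserved exactly by this update, the discrete analogue of the conservation constraint that shapes the full model's dynamics.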

  13. A High-Order Accurate Parallel Solver for Maxwell's Equations on Overlapping Grids

    SciTech Connect

    Henshaw, W D

    2005-09-23

    A scheme for the solution of the time dependent Maxwell's equations on composite overlapping grids is described. The method uses high-order accurate approximations in space and time for Maxwell's equations written as a second-order vector wave equation. High-order accurate symmetric difference approximations to the generalized Laplace operator are constructed for curvilinear component grids. The modified equation approach is used to develop high-order accurate approximations that only use three time levels and have the same time-stepping restriction as the second-order scheme. Discrete boundary conditions for perfect electrical conductors and for material interfaces are developed and analyzed. The implementation is optimized for component grids that are Cartesian, resulting in a fast and efficient method. The solver runs on parallel machines with each component grid distributed across one or more processors. Numerical results in two- and three-dimensions are presented for the fourth-order accurate version of the method. These results demonstrate the accuracy and efficiency of the approach.
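The three-time-level stepping at the core of such schemes can be illustrated on the 1-D scalar wave equation u_tt = c²·u_xx; this is only the second-order version (the modified-equation approach adds higher-order spatial corrections on the same three levels), with a made-up initial condition:

```python
# Three-time-level update for u_tt = c^2 u_xx:
#   u[n+1] = 2*u[n] - u[n-1] + (c*dt/dx)^2 * (second difference of u[n]).
# Second-order version only; the paper's modified-equation scheme adds
# higher-order corrections while keeping the same three time levels.

import math

def wave_step(u_prev, u_curr, cfl2):
    """Advance one step; homogeneous Dirichlet (PEC-like) boundaries."""
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + cfl2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next

n, cfl = 101, 0.9                                        # CFL < 1: stable
u_prev = [math.sin(math.pi * i / (n - 1)) for i in range(n)]
u_curr = u_prev[:]                                       # start from rest
for _ in range(200):
    u_prev, u_curr = u_curr, wave_step(u_prev, u_curr, cfl ** 2)
```

Keeping only three time levels is what preserves the second-order scheme's memory footprint and time-step restriction even when the spatial accuracy is raised.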

  14. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography and, more recently, single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations and varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
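The rigid-body relation the inversion is built on, a_point = a_cm + α×r + ω×(ω×r), can be checked in the forward direction with a few lines; the values are made up, and the last (centripetal) term is the nonlinear one the algorithm linearizes with a finite difference:

```python
# Forward check of the rigid-body relation underlying the inversion:
# a point at position r on a rigid body measures
#   a_point = a_cm + alpha x r + omega x (omega x r).
# Values below are made up for illustration.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_acceleration(a_cm, alpha, omega, r):
    tangential = cross(alpha, r)                 # alpha x r
    centripetal = cross(omega, cross(omega, r))  # omega x (omega x r)
    return tuple(a + t + c for a, t, c in zip(a_cm, tangential, centripetal))

# Pure spin about z at 10 rad/s, sensor 0.1 m out along x:
a = point_acceleration((0, 0, 0), (0, 0, 0), (0, 0, 10.0), (0.1, 0, 0))
# → (-10.0, 0.0, 0.0): 10 m/s^2 pointing back toward the rotation axis
```

Six well-placed single-axis sensors give six such scalar equations, enough to solve for the three linear and three angular acceleration components at each time step.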

  15. Fast and accurate line scanner based on white light interferometry

    NASA Astrophysics Data System (ADS)

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are achievable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts, and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white-light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces. 
We therefore propose an alternate geometry where the incident light is

  16. A novel technique for highly accurate gas exchange measurements

    NASA Astrophysics Data System (ADS)

    Kalkenings, R. K.; Jähne, B. J.

    2003-04-01

The Heidelberg Aeolotron is a circular wind-wave facility for investigating air-sea gas exchange. In this contribution a novel technique for highly accurate measurements of the transfer velocity k of mass transfer will be presented. Traditionally, mass balance techniques measure the decay constant of gas concentrations over time. The major drawback of this concept is the long time constant: at low wind speeds and a water height greater than 1 m, the period of observation has to be several days. In a gas-tight facility such as the Aeolotron, the transfer velocity k can be computed from the concentration in the water body and the change of concentration in the gas space. Owing to this fact, transfer velocities are obtained while greatly reducing the measuring times to less than one hour. The transfer velocity k of a tracer can be parameterized as k = (1/β) · u* · Sc^(-n), with the Schmidt number Sc, the shear velocity u*, and the dimensionless transfer resistance β. The Schmidt number exponent n can be derived from simultaneous measurements of different tracers: since the tracers have different Schmidt numbers, the shear velocity is not needed. To cover Schmidt numbers spanning a whole decade, He, H2, N2O, and F12 are used in our experiments. The relative uncertainty of the measured transfer velocity was reduced to less than 2%. In 9 consecutive experiments conducted at a wind speed of 6.2 m/s, the deviation of the Schmidt number exponent was found to be just under 0.02. This high accuracy will allow the transition of the Schmidt number exponent from n = 2/3 to n = 0.5, from a flat to a wavy water surface, to be determined precisely. Wind speed is not the only factor needed to quantify gas exchange: surfactants have a pronounced effect on the wave field and lead to a drastic reduction in the transfer velocity. In the Aeolotron, measurements were conducted with a variety of measuring devices, ranging from an imaging slope gauge (ISG) to thermal techniques with IR
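The tracer-pair trick for the exponent can be sketched directly: with k = (u*/β)·Sc^(−n), the unknown prefactor cancels in the ratio of two simultaneously measured transfer velocities. The Schmidt numbers and velocities below are illustrative, not Aeolotron data:

```python
# Tracer-pair determination of the Schmidt number exponent: with
# k = (u*/beta) * Sc**(-n), the unknown prefactor u*/beta cancels in the
# ratio of two simultaneously measured transfer velocities.
# All values below are illustrative.

import math

def schmidt_exponent(k1, k2, sc1, sc2):
    return math.log(k1 / k2) / math.log(sc2 / sc1)

prefactor = 1.0e-3                       # u*/beta, cancels in the ratio
k_he = prefactor * 150.0 ** -0.5         # He tracer, Sc ~ 150
k_f12 = prefactor * 1200.0 ** -0.5       # F12 tracer, Sc ~ 1200
n = schmidt_exponent(k_he, k_f12, 150.0, 1200.0)   # → 0.5 (wavy surface)
```

This is why tracers spanning a full decade in Schmidt number are used: the wider the lever arm in ln(Sc), the smaller the uncertainty in n for a given uncertainty in k.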

  17. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed by a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of the laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired while applying five stages of uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. Results of the measurements show that the aperture reduction differs from part to part owing to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation under different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
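
    The FFT characterization of a scan-line roughness profile can be sketched as follows. The profile here is synthetic (a long-wavelength undulation plus fine-scale noise); only the 2.5 μm sampling interval is taken from the setup above:

```python
import numpy as np

dx = 2.5e-6                      # CLSM sampling interval along a scan line (2.5 um)
n = 4096
x = np.arange(n) * dx
rng = np.random.default_rng(0)
# Synthetic profile: three undulations over the scan length plus fine-scale noise.
profile = 5e-6 * np.sin(2 * np.pi * 3 * x / (n * dx)) + 2e-7 * rng.standard_normal(n)

amp = np.abs(np.fft.rfft(profile - profile.mean()))   # amplitude spectrum
freq = np.fft.rfftfreq(n, d=dx)                       # spatial frequency in 1/m
dominant = freq[np.argmax(amp[1:]) + 1]               # strongest non-DC component
```

    The dominant spatial frequency comes out at the very low end of the spectrum, mirroring the low-frequency dominance reported for the measured fracture surfaces.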

  18. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

    This paper derives accurate equations for the normal and shear force resultants as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to differ from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated over a trapezoidal-like shell element to obtain the stress resultants. Numerical results show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
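
    The effect of the trapezoidal element can be seen in a minimal numerical sketch (a single isotropic layer with illustrative values, not the laminated equations of the paper): for a linear bending-stress distribution, the plate-type resultant N = ∫σ dz vanishes by symmetry, while the shell resultant N = ∫σ(1 + z/R) dz does not.

```python
import numpy as np

R = 0.1    # radius of curvature (m), illustrative
h = 0.05   # shell thickness (m): h/R = 0.5, i.e. a deep, thick shell
m = 2000
dz = h / m
z = -h / 2 + (np.arange(m) + 0.5) * dz        # midpoint rule through the thickness
sigma = 200e6 * (2 * z / h)                   # linear bending stress (Pa), illustrative

N_plate = np.sum(sigma) * dz                  # plate-type resultant: zero by symmetry
N_shell = np.sum(sigma * (1 + z / R)) * dz    # trapezoidal element: nonzero
```

    For this stress distribution the shell resultant evaluates to σ_0 h²/(6R), which is far from negligible at h/R = 0.5; a plate-type integration misses it entirely.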

  19. Must Kohn-Sham oscillator strengths be accurate at threshold?

    SciTech Connect

    Yang Zenghui; Burke, Kieron; Faassen, Meta van

    2009-09-21

    The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.

  20. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1992-01-01

    The well-known MUSCL scheme of van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in smooth parts of the solution, except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes that are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes that are upwind monotone and of uniform second- or third-order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new schemes compare favorably with state-of-the-art methods.
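
    A minimal sketch of the baseline the abstract refers to: van Leer's MUSCL scheme with a minmod slope limiter for constant-speed scalar advection (an illustrative implementation of the classical scheme, not Huynh's upwind-monotone schemes themselves):

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, zero otherwise."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, c):
    """One step of u_t + a u_x = 0 (a > 0), periodic grid, CFL number c = a*dt/dx."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited cell slopes
    uface = u + 0.5 * (1.0 - c) * slope                    # time-centered value at i+1/2
    return u - c * (uface - np.roll(uface, 1))

# Advect a square wave: the limiter clips the slope at the discontinuity, so no
# new extrema appear -- at the cost of the first-order degeneracy at extrema
# discussed in the abstract.
u = np.where((np.arange(100) > 20) & (np.arange(100) < 50), 1.0, 0.0)
for _ in range(80):
    u = muscl_step(u, 0.5)
```

    For 0 < c ≤ 1 this limited scheme is total-variation diminishing: the square wave smears slightly but stays within its initial bounds and conserves the integral of u exactly.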