Science.gov

Sample records for accurately describe observables

  1. Tin phase transition in terapascal pressure range described accurately with Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Nazarov, Roman; Hood, Randolph; Morales, Miguel

The accurate prediction of phase transitions is one of the most important research areas in modern materials science. The main workhorse for such calculations, density functional theory (DFT), employs approximate exchange-correlation functionals that may overstabilize one phase relative to another and therefore predict incorrect phase-transition pressures. A recent example of this deficiency has been demonstrated in Sn: no bcc to hcp phase transition was observed when Sn was dynamically compressed to 1.2 TPa, while DFT predicts a transition at 0.16-0.2 TPa. To overcome the limitations of DFT, we employ the diffusion Monte Carlo (DMC) method, which treats the many-body electron problem directly. To obtain highly accurate results, we systematically assess the effects of the controllable approximations of DMC, such as the fixed-node approximation, finite-size effects, and the use of pseudopotentials. Based on metrologically accurate DMC equations of state, we construct the pressure-temperature phase diagram and demonstrate its good agreement with experiment, in contrast to DFT calculations.

  2. A stochastic model of kinetochore-microtubule attachment accurately describes fission yeast chromosome segregation.

    PubMed

    Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline; Tournier, Sylvie; Gachet, Yannick

    2012-03-19

    In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B-like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B-like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy.

  3. Generalized Stoner-Wohlfarth model accurately describing the switching processes in pseudo-single ferromagnetic particles

    SciTech Connect

Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru

    2013-12-14

We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model against full micromagnetic simulations in order to evaluate the performance and limits of our approach.
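For context, the critical curve of the classical SW model that is generalized here is the Stoner-Wohlfarth astroid; in reduced field units it takes the standard textbook form (not quoted from the abstract):

```latex
h_\parallel^{2/3} + h_\perp^{2/3} = 1,
\qquad h = \frac{H}{H_K}, \qquad H_K = \frac{2 K_1}{\mu_0 M_s},
```

where $h_\parallel$ and $h_\perp$ are the field components parallel and perpendicular to the easy axis, $K_1$ is the uniaxial anisotropy constant, and $M_s$ the saturation magnetization; irreversible switching occurs when the applied field crosses this curve.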

  4. Towards a scalable and accurate quantum approach for describing vibrations of molecule–metal interfaces

    PubMed Central

    Madebene, Bruno; Ulusoy, Inga; Mancera, Luis; Scribano, Yohann; Chulkov, Sergey

    2011-01-01

We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate, and that it can be easily deployed on computational grids to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters. PMID:22003450

  5. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    SciTech Connect

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-28

The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1-, 2-, and 3-site CG models for heptane, as well as 1- and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.

  6. Do modelled or satellite-based estimates of surface solar irradiance accurately describe its temporal variability?

    NASA Astrophysics Data System (ADS)

    Bengulescu, Marc; Blanc, Philippe; Boilley, Alexandre; Wald, Lucien

    2017-02-01

This study investigates the characteristic time-scales of variability found in long-term time-series of daily means of estimates of surface solar irradiance (SSI). The study is performed at various levels to better understand the causes of variability in the SSI. First, the variability of the solar irradiance at the top of the atmosphere is scrutinized. Then, estimates of the SSI in cloud-free conditions as provided by the McClear model are dealt with, in order to reveal the influence of the clear atmosphere (aerosols, water vapour, etc.). Lastly, the role of clouds in variability is inferred from the analysis of in-situ measurements. A description of how the atmosphere affects SSI variability is thus obtained on a time-scale basis. The analysis is also performed with estimates of the SSI provided by the satellite-derived HelioClim-3 database and by two numerical weather re-analyses: ERA-Interim and MERRA-2. It is found that HelioClim-3 estimates render an accurate picture of the variability found in ground measurements, not only globally but also with respect to individual characteristic time-scales. On the contrary, the variability found in the re-analyses correlates poorly with that of the ground measurements at all scales.

  7. Describing and compensating gas transport dynamics for accurate instantaneous emission measurement

    NASA Astrophysics Data System (ADS)

    Weilenmann, Martin; Soltic, Patrik; Ajtay, Delia

Instantaneous emission measurements on chassis dynamometers and engine test benches are becoming increasingly common for car makers and for environmental emission factor measurement and calculation, since much more information about the formation conditions can be extracted than from the regulated bag measurements (integral values). The common exhaust gas analysers for the "regulated pollutants" (carbon monoxide, total hydrocarbons, nitrogen oxides, carbon dioxide) allow measurement at a rate of one to ten samples per second. This gives the impression of having after-the-catalyst emission information with that chronological precision. It has been shown in recent years, however, that besides the reaction time of the analysers, the dynamics of gas transport in both the exhaust system of the car and the measurement system act on time scales significantly longer than 1 s. This paper focuses on the compensation of all these dynamics convoluting the emission signals. Most analysers show linear and time-invariant reaction dynamics. Transport dynamics can basically be split into two phenomena: a pure time delay accounting for the transport of the gas downstream, and a dynamic signal deformation caused by turbulent mixing of the gas along the way. This mixing causes emission peaks at the sensors to be smaller in height and longer in duration than they are after the catalyst. These dynamics can be modelled using differential equations. Both the mixing dynamics and the time delay are constant when modelling a raw gas analyser system, since the flow in that system is constant. In the exhaust system of the car, however, the parameters depend on the exhaust volume flow. For gasoline cars, the variation in overall transport time may be more than 6 s. It is shown in this paper how all these processes can be described by invertible mathematical models, with the focus on the more complex case of the car's exhaust system. Inversion means that the sharp emission signal at the catalyst-out location can be reconstructed from the delayed and smoothed signals measured by the analysers.
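A minimal sketch of this kind of invertible model, assuming a single first-order mixing stage with time constant `tau` and a fixed transport delay; both parameters and the discretization are hypothetical simplifications, not the paper's flow-dependent model:

```python
# Forward model: y'(t) = (u(t - d) - y(t)) / tau, discretized with step dt.
# The inverse undoes the smoothing algebraically, then shifts the delay back.

def smear(u, delay_steps, tau, dt):
    """Forward model: delay the signal, then apply first-order mixing."""
    delayed = [0.0] * delay_steps + u[:len(u) - delay_steps]
    a = dt / (tau + dt)            # implicit-Euler smoothing coefficient
    y, state = [], 0.0
    for x in delayed:
        state += a * (x - state)   # state_k = (1 - a) state_{k-1} + a x_k
        y.append(state)
    return y

def sharpen(y, delay_steps, tau, dt):
    """Inverse model: x_k = (y_k - (1 - a) y_{k-1}) / a, then undo the delay."""
    a = dt / (tau + dt)
    u, prev = [], 0.0
    for yk in y:
        u.append((yk - (1.0 - a) * prev) / a)
        prev = yk
    return u[delay_steps:] + [0.0] * delay_steps
```

Applying `sharpen` after `smear` recovers the original signal exactly (up to floating-point error), which is the sense in which the model is invertible; with real measurements, noise amplification limits how sharp the recovered signal can be.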

  8. Describing Comprehension: Teachers' Observations of Students' Reading Comprehension

    ERIC Educational Resources Information Center

    Vander Does, Susan Lubow

    2012-01-01

    Teachers' observations of student performance in reading are abundant and insightful but often remain internal and unarticulated. As a result, such observations are an underutilized and undervalued source of data. Given the gaps in knowledge about students' reading comprehension that exist in formal assessments, the frequent calls for teachers'…

  9. Can the Dupuit-Thiem equation accurately describe the flow pattern induced by injection in a laboratory scale aquifer-well system?

    NASA Astrophysics Data System (ADS)

    Bonilla, Jose; Kalwa, Fritz; Händel, Falk; Binder, Martin; Stefan, Catalin

    2016-04-01

The Dupuit-Thiem equation is normally used to assess flow towards a pumping well in unconfined aquifers under steady-state conditions. Its formulation assumes that flow towards the well is laminar, radial, and horizontal. It is well known that these assumptions are not met in the vicinity of the well; some authors restrict the application of the equation to radii larger than 1.5 times the aquifer thickness. A laboratory scale aquifer-well system (LSAW) was implemented to study aquifer recharge through wells. The LSAW consists of a 1.0 m diameter tank with a height of 1.1 m, filled with sand, with a screened well of 0.025 m diameter in the center. A regulated outflow system establishes a controlled water level at the tank wall to simulate various aquifer thicknesses. The pressure head at the bottom of the tank can be measured along one axis every 0.1 m between the well and the tank wall to assess the flow profile. In this study, the accuracy of the Dupuit-Thiem equation in predicting the pressure head is evaluated as a simple and quick analytical method to describe the flow pattern for different injection rates in the LSAW. To this end, combinations of different injection rates and aquifer thicknesses were simulated in the LSAW. Contrary to what was expected (significant differences between the measured and calculated pressure heads in the well), the absolute difference between the calculated and measured pressure heads is less than 10%. Moreover, the highest differences are observed not in the well itself but in its near proximity, at a radius of 0.1 m. The results further show that the difference between the calculated and measured pressure heads tends to decrease with higher flow rates. Despite its limitations (the assumption of laminar and horizontal flow throughout the whole aquifer), the Dupuit-Thiem equation is considered to accurately represent the flow system in the LSAW.
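For reference, the unconfined steady-state form of the Dupuit-Thiem equation is, in standard notation (not taken from the abstract):

```latex
Q = \frac{\pi K \left( h_2^{2} - h_1^{2} \right)}{\ln\left( r_2 / r_1 \right)},
```

where $h_1$ and $h_2$ are the hydraulic heads at radial distances $r_1$ and $r_2$ from the well, $K$ is the hydraulic conductivity, and $Q$ is the pumping or injection rate.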

  10. A geometric sequence that accurately describes allowed multiple conductance levels of ion channels: the "three-halves (3/2) rule".

    PubMed Central

    Pollard, J R; Arispe, N; Rojas, E; Pollard, H B

    1994-01-01

Ion channels can express multiple conductance levels that are not integer multiples of some unitary conductance, and that interconvert among one another. We report here that for 26 different types of multiple conductance channels, all allowed conductance levels can be calculated accurately using the geometric sequence g_n = g_0 (3/2)^n, where g_n is a conductance level and n is an integer >= 0. We refer to this relationship as the "3/2 Rule," because the value of any term in the sequence of conductances (g_n) can be calculated as 3/2 times the value of the preceding term (g_{n-1}). The experimentally determined average value for "3/2" is 1.491 +/- 0.095 (sample size = 37, average +/- SD). We also verify the choice of a 3/2 ratio on the basis of error analysis over the range of ratio values between 1.1 and 2.0. In an independent analysis using Marquardt's algorithm, we further verified the 3/2 ratio and the assignment of specific conductances to specific terms in the geometric sequence. Thus, irrespective of the open time probability, the allowed conductance levels of these channels can be described accurately to within approximately 6%. We anticipate that the "3/2 Rule" will simplify description of multiple conductance channels in a wide variety of biological systems and provide an organizing principle for channel heterogeneity and differential effects of channel blockers. PMID:7524712
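The sequence is straightforward to compute; the following sketch uses a hypothetical unitary conductance g_0 = 10 pS chosen purely for illustration:

```python
# The "3/2 Rule": allowed conductance levels form the geometric sequence
# g_n = g_0 * (3/2)**n, so each level is 3/2 times the preceding one.

def conductance_levels(g0, n_levels, ratio=1.5):
    """Return the first n_levels allowed conductances g_n = g0 * ratio**n."""
    return [g0 * ratio ** n for n in range(n_levels)]

levels = conductance_levels(10.0, 5)   # in pS, for a hypothetical g_0 = 10 pS
```

Fitting the empirical ratio (1.491 in the paper) instead of the exact 3/2 would only change the `ratio` argument.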

  11. Provenance of things - describing geochemistry observation workflows using PROV-O

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.; Car, N. J.

    2015-12-01

Geochemistry observations typically follow a complex preparation process after sample retrieval from the field. Descriptions of these processes are required to allow readers and other data users to assess the reliability of the data produced, and to ensure reproducibility. While laboratory notebooks are used for private record-keeping, and laboratory information management systems (LIMS) on a facility basis, these data are not generally published, and there are no standard formats for transfer. And while there is some standardization of workflows, it is often scoped to a lab or an instrument. New procedures and workflows are being developed continually; in fact, this is a key expectation in the development of the science. Thus the formalization of the description of sample preparation and observations must be both rigorous and flexible. We have been exploring the use of the W3C Provenance model (PROV) to capture complete traces, including both the real-world things and the data generated. PROV has a core data model that distinguishes between the entities, agents, and activities involved in producing a piece of data or a thing in the world. While the design of PROV was primarily conditioned by stories concerning information resources, its application is not restricted to the production of digital or information assets. PROV allows a comprehensive trace of predecessor entities and transformations at any level of detail. In this paper we demonstrate the use of PROV for describing specimens managed for scientific observations. Two examples are considered: a geological sample which undergoes a typical preparation process for measurements of the concentration of a particular chemical substance, and the collection, taxonomic classification, and eventual publication of an insect specimen. PROV enables the material that goes into the instrument to be linked back to the sample retrieved in the field. This complements the IGSN system, which focuses on registration of field sample identity to support the

  12. Why the Big Bang Model Cannot Describe the Observed Universe Having Pressure and Radiation

    NASA Astrophysics Data System (ADS)

    Mitra, Abhas

It has been recently shown that, since in general relativity (GR), given one time label t, one can choose any other time label t → t* = f(t), the pressure of a homogeneous and isotropic fluid is intrinsically zero (Mitra, Astrophys. Sp. Sc. 333, 351, 2011). Here we explore the physical reasons for the inevitability of this mathematical result. The essential reason is that the Weyl Postulate assumes that the test particles in a homogeneous and isotropic spacetime undergo pure geodesic motion without any collisions amongst themselves. Such an assumed absence of collisions corresponds to the absence of any intrinsic pressure. Accordingly, the "Big Bang Model" (BBM), which assumes that the cosmic fluid is not only continuous but also homogeneous and isotropic, intrinsically corresponds to zero pressure and hence zero temperature. It can be seen that this result also follows from the relevant general relativistic first law of thermodynamics (Mitra, Found. Phys. 41, 1454, 2011). Therefore, the ideal BBM cannot describe the physical universe having pressure, temperature and radiation. Consequently, the physical universe may comprise matter distributed in discrete non-continuous lumpy fashion (as observed) rather than in the form of a homogeneous continuous fluid. The intrinsic absence of pressure in the "Big Bang Model" also rules out the concept of a "Dark Energy".

  13. Describing Profiles of Instructional Practice: A New Approach to Analyzing Classroom Observation Data

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Kieffer, Michael J.

    2015-01-01

    The authors outline the application of latent class analysis (LCA) to classroom observational instruments. LCA offers diagnostic information about teachers' instructional strengths and weaknesses, along with estimates of measurement error for individual teachers, while remaining relatively straightforward to implement and interpret. It is…

  14. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    SciTech Connect

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-10-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
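In standard notation (textbook form, not taken from the abstract), the CHO statistic for image data $\mathbf{g}$ reduced to channel outputs $\mathbf{v} = \mathbf{T}^{T}\mathbf{g}$ is:

```latex
t(\mathbf{g}) = \mathbf{w}^{T} \mathbf{v},
\qquad \mathbf{w} = \mathbf{S}_{v}^{-1}\, \Delta\bar{\mathbf{v}},
\qquad \mathrm{SNR}^{2} = \Delta\bar{\mathbf{v}}^{T}\, \mathbf{S}_{v}^{-1}\, \Delta\bar{\mathbf{v}},
```

where $\mathbf{T}$ is the channel matrix, $\mathbf{S}_{v}$ the covariance of the channel outputs, and $\Delta\bar{\mathbf{v}}$ the difference of the mean channel outputs under the signal-present and signal-absent hypotheses; the covariance approximation described above supplies $\mathbf{S}_{v}$ and $\Delta\bar{\mathbf{v}}$ without Monte Carlo reconstruction runs.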

  15. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high frequency stimulations to a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, allowing algorithmic performance to be related to that of the experts. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.

  16. PIC simulations of a three component plasma described by Kappa distribution functions as observed in Saturn's magnetosphere

    NASA Astrophysics Data System (ADS)

    Barbosa, Marcos; Alves, Maria Virginia; Simões Junior, Fernando

    2016-04-01

In plasmas out of thermodynamic equilibrium, the particle velocity distribution can be described by the so-called Kappa distribution. These velocity distribution functions are a generalization of the Maxwellian distribution. Since 1960, Kappa velocity distributions have been observed in several regions of interplanetary space and in astrophysical plasmas. Using the KEMPO1 particle simulation code, modified to introduce Kappa distribution functions as initial conditions for particle velocities, the normal modes of propagation were analyzed in a plasma containing two species of electrons with different temperatures and densities, and ions as a third species. This type of plasma is usually found in magnetospheres such as Saturn's. Numerical solutions of the dispersion relation for such a plasma predict the presence of an electron-acoustic mode, besides the Langmuir and ion-acoustic modes. In the presence of an ambient magnetic field, the perpendicular propagation (Bernstein mode) also changes, as compared to a Maxwellian plasma, due to the Kappa distribution function. Here, results for simulations with and without an external magnetic field are presented. The parameters for the initial conditions in the simulations were obtained from Cassini spacecraft data. Simulation results are compared with numerical solutions of the dispersion relation obtained in the literature, and they are in good agreement.
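A commonly used isotropic form of the Kappa velocity distribution (a standard form from the literature, not quoted from the abstract) is:

```latex
f_{\kappa}(v) = \frac{n}{\pi^{3/2}\, \kappa^{3/2}\, \theta^{3}}\,
\frac{\Gamma(\kappa + 1)}{\Gamma\!\left(\kappa - \tfrac{1}{2}\right)}
\left( 1 + \frac{v^{2}}{\kappa\, \theta^{2}} \right)^{-(\kappa + 1)},
```

where $n$ is the number density, $\theta$ an effective thermal speed, and $\kappa$ the spectral index; in the limit $\kappa \to \infty$ the distribution reduces to a Maxwellian, while small $\kappa$ produces the enhanced suprathermal tails observed in space plasmas.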

  17. Extracting Accurate and Precise Topography from Lroc Narrow Angle Camera Stereo Observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Speyerer, E. J.; Robinson, M. S.; LROC Team

    2016-06-01

The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that acquire meter-scale images. Stereo observations are acquired by imaging from two or more orbits, including at least one off-nadir slew. Digital terrain models (DTMs) generated from the stereo observations are controlled to Lunar Orbiter Laser Altimeter (LOLA) elevation profiles. With current processing methods, DTMs have absolute accuracies commensurate with the uncertainties of the LOLA profiles (~10 m horizontally and ~1 m vertically) and relative horizontal and vertical precisions better than the pixel scale of the DTMs (2 to 5 m). The NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. Overlapping DTMs of single stereo sets were also combined to form larger-area DTM mosaics, enabling detailed characterization of large geomorphic features and providing a key resource for future exploration planning. Currently, two percent of the lunar surface is imaged in NAC stereo, and continued acquisition of stereo observations will serve to strengthen our knowledge of the Moon and the geologic processes that occur on all the terrestrial planets.

  18. OBSERVING SIMULATED PROTOSTARS WITH OUTFLOWS: HOW ACCURATE ARE PROTOSTELLAR PROPERTIES INFERRED FROM SEDs?

    SciTech Connect

    Offner, Stella S. R.; Robitaille, Thomas P.; Hansen, Charles E.; Klein, Richard I.; McKee, Christopher F.

    2012-07-10

    The properties of unresolved protostars and their local environment are frequently inferred from spectral energy distributions (SEDs) using radiative transfer modeling. In this paper, we use synthetic observations of realistic star formation simulations to evaluate the accuracy of properties inferred from fitting model SEDs to observations. We use ORION, an adaptive mesh refinement (AMR) three-dimensional gravito-radiation-hydrodynamics code, to simulate low-mass star formation in a turbulent molecular cloud including the effects of protostellar outflows. To obtain the dust temperature distribution and SEDs of the forming protostars, we post-process the simulations using HYPERION, a state-of-the-art Monte Carlo radiative transfer code. We find that the ORION and HYPERION dust temperatures typically agree within a factor of two. We compare synthetic SEDs of embedded protostars for a range of evolutionary times, simulation resolutions, aperture sizes, and viewing angles. We demonstrate that complex, asymmetric gas morphology leads to a variety of classifications for individual objects as a function of viewing angle. We derive best-fit source parameters for each SED through comparison with a pre-computed grid of radiative transfer models. While the SED models correctly identify the evolutionary stage of the synthetic sources as embedded protostars, we show that the disk and stellar parameters can be very discrepant from the simulated values, which is expected since the disk and central source are obscured by the protostellar envelope. Parameters such as the stellar accretion rate, stellar mass, and disk mass show better agreement, but can still deviate significantly, and the agreement may in some cases be artificially good due to the limited range of parameters in the set of model SEDs. 
Lack of correlation between the model and simulation properties in many individual instances cautions against overinterpreting properties inferred from SEDs for unresolved protostellar

  19. Extracting accurate and precise topography from LROC narrow angle camera stereo observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. At this date about 2% of the lunar surface is imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our

  20. A conceptual model describing the fate of sulfadiazine and its metabolites observed in manure-amended soils.

    PubMed

    Zarfl, Christiane; Klasmeier, Jörg; Matthies, Michael

    2009-10-01

Sulfadiazine (SDZ) belongs to the chemical class of sulfonamides, one of the most important groups of antibiotics applied in animal husbandry in Europe. These antibiotics end up in the soil after manure from treated animals is applied as fertilizer. They can inhibit soil microbial functions and enhance the spread of resistance genes among soil microorganisms. In order to assess the exposure of soil microorganisms to SDZ, a conceptual kinetic model for the prediction of temporally resolved antibiotic concentrations in soil was developed. The model includes transformation reactions, reversible sequestration, and the formation of non-extractable residues (NER) from SDZ and its main metabolites N(4)-acetyl-sulfadiazine (N-ac-SDZ) and 4-hydroxy-sulfadiazine (OH-SDZ). The optimum model structure and the rate constants of the kinetics of SDZ and its metabolites were determined by fitting different model alternatives to sequential extraction data of a manure-amended Cambisol soil. N-ac-SDZ is degraded to SDZ with a half-life of 4 d, whereas OH-SDZ is not degraded. Although, based on the available data, the hydroxylation of SDZ seems to be negligible, it is still included in the model structure because this process has been observed in recent studies. Sequestration into a residual fraction has similar kinetics for SDZ, N-ac-SDZ, and OH-SDZ, and is one order of magnitude faster than the reverse translocation. The irreversible formation of NER is restricted to SDZ and OH-SDZ. The model shows good agreement when applied to extraction data measured independently for a Luvisol soil. The combination of sequential extraction data and the conceptual kinetic model enables us to gain further insight into the long-term fate of, and exposure to, sulfonamides in soil.
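A simplified sketch of such a first-order kinetic network: only the N-ac-SDZ to SDZ step uses the reported 4 d half-life; all other rate constants are hypothetical placeholders, and for brevity NER formation is reduced to the SDZ pool only:

```python
import math

# Four-pool first-order model: N-ac-SDZ -> SDZ -> (reversible) sequestered,
# plus irreversible SDZ -> NER. Units: amounts are arbitrary, rates in 1/d.
k_deac = math.log(2) / 4.0   # N-ac-SDZ -> SDZ, from the reported 4 d half-life
k_seq = 0.2                  # SDZ -> sequestered (hypothetical value)
k_rel = 0.02                 # sequestered -> SDZ; ~10x slower, as reported
k_ner = 0.05                 # SDZ -> non-extractable residues (hypothetical)

def simulate(n_ac0, sdz0, days, dt=0.01):
    """Euler integration of the four-pool model; returns final pool sizes."""
    n_ac, sdz, seq, ner = n_ac0, sdz0, 0.0, 0.0
    for _ in range(int(days / dt)):
        d_nac = -k_deac * n_ac
        d_sdz = k_deac * n_ac - (k_seq + k_ner) * sdz + k_rel * seq
        d_seq = k_seq * sdz - k_rel * seq
        d_ner = k_ner * sdz
        n_ac += d_nac * dt
        sdz += d_sdz * dt
        seq += d_seq * dt
        ner += d_ner * dt
    return n_ac, sdz, seq, ner
```

Because every term is a transfer between pools, total mass is conserved, which is a useful sanity check when fitting such a model to sequential extraction data.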

  1. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  2. When continuous observations just won't do: developing accurate and efficient sampling strategies for the laying hen.

    PubMed

    Daigle, Courtney L; Siegford, Janice M

    2014-03-01

Continuous observation is the most accurate way to determine animals' actual time budget and can provide a 'gold standard' representation of resource use, behavior frequency, and duration. Continuous observation is useful for capturing behaviors that are of short duration or occur infrequently. However, collecting continuous data is labor intensive and time consuming, making multiple-individual or long-term data collection difficult. Six non-cage laying hens were video recorded for 15 h, and behavioral data collected every 2 s were compared with data collected using scan sampling intervals of 5, 10, 15, 30, and 60 min and subsamples of 2-s observations performed for 10 min every 30 min, 15 min every 1 h, 30 min every 1.5 h, and 15 min every 2 h. Three statistical approaches were used to provide a comprehensive analysis of the quality of the data obtained via the different sampling methods. General linear mixed models identified how the time budget from the sampling techniques differed from continuous observation. Correlation analysis identified how strongly results from the sampling techniques were associated with those from continuous observation. Regression analysis identified how well results from the sampling techniques predicted those from continuous observation, whether their magnitude changed, and whether a sampling technique was biased. Static behaviors were well represented with scan and time sampling techniques, while dynamic behaviors were best represented with time sampling techniques. Methods for identifying an appropriate sampling strategy based upon the type of behavior of interest are outlined, and results for non-caged laying hens are presented.
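The comparison of a scan-sampled time budget against the continuous "gold standard" can be sketched with synthetic data. Bout lengths and behavior names below are hypothetical; the study itself used six hens, a richer ethogram, and formal mixed-model statistics.

```python
import random

random.seed(42)

# Synthetic 15 h record scored every 2 s (27,000 ticks) with two behaviors:
# long "rest" bouts (static) and brief "dustbathe" events (dynamic).
ticks = []
state = "rest"
while len(ticks) < 27000:
    dur = random.randint(300, 1200) if state == "rest" else random.randint(5, 30)
    ticks.extend([state] * dur)
    state = "dustbathe" if state == "rest" else "rest"
ticks = ticks[:27000]

def time_budget(seq):
    """Proportion of observations spent in each behavior."""
    return {b: seq.count(b) / len(seq) for b in set(seq)}

continuous = time_budget(ticks)          # the 'gold standard'
scan_5min = time_budget(ticks[::150])    # scan sample every 5 min (150 ticks of 2 s)
```

For the long-duration static behavior the scan estimate lands close to the continuous value, while rare short events are the ones a coarse scan interval tends to misrepresent, which mirrors the abstract's conclusion.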

  3. X-ray and microwave emissions from the July 19, 2012 solar flare: Highly accurate observations and kinetic models

    NASA Astrophysics Data System (ADS)

    Gritsyk, P. A.; Somov, B. V.

    2016-08-01

    The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is lower than the observed one by several times. Allowance for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ˜5 × 1010 erg cm-2 s-1, which exceeds the values typical of the thick-target model without a reverse current by a factor of ˜5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.

  4. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies only occupy a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7, MODIS), the accurate determination of the hotspot's size and temperature is, however, problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. For an individual target pixel, the expected decrease of the hotspot pixel area with distance was confirmed at a relatively constant wire temperature of around 600 °C. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary most remotely sensed volcanic hotspots fall below. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
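The Dual-Band un-mixing itself can be sketched in a few lines: with the background temperature assumed known, the two mixed-pixel radiance equations R_i = p·B_i(T_h) + (1−p)·B_i(T_b) are solved for the hotspot fraction p and temperature T_h. The wavelengths, temperatures, and bisection bracket below are illustrative choices, not the actual instrument bands of the experiment.

```python
import math

C1 = 1.191042e-16  # first radiation constant for spectral radiance (W m^2 sr^-1)
C2 = 1.4387769e-2  # second radiation constant (m K)

def planck(lam, temp):
    """Spectral radiance at wavelength lam (m) and temperature temp (K)."""
    return C1 / (lam ** 5 * (math.exp(C2 / (lam * temp)) - 1.0))

def dual_band(r1, r2, lam1, lam2, t_bg, lo=500.0, hi=1500.0):
    """Solve r_i = p*B_i(T_h) + (1-p)*B_i(t_bg) for p and T_h by bisection
    on T_h (noise-free, two-component pixel)."""
    def frac(t_h):
        return (r1 - planck(lam1, t_bg)) / (planck(lam1, t_h) - planck(lam1, t_bg))
    def residual(t_h):
        p = frac(t_h)
        return p * planck(lam2, t_h) + (1.0 - p) * planck(lam2, t_bg) - r2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:   # residual decreases monotonically with T_h here
            lo = mid
        else:
            hi = mid
    t_h = 0.5 * (lo + hi)
    return frac(t_h), t_h

# Synthetic pixel: a 2 % hotspot at 873 K against a 300 K background, "seen"
# in illustrative MIR (3.9 um) and TIR (11 um) bands.
LAM1, LAM2, T_BG = 3.9e-6, 11.0e-6, 300.0
P_TRUE, T_TRUE = 0.02, 873.0
R1 = P_TRUE * planck(LAM1, T_TRUE) + (1 - P_TRUE) * planck(LAM1, T_BG)
R2 = P_TRUE * planck(LAM2, T_TRUE) + (1 - P_TRUE) * planck(LAM2, T_BG)
p_est, t_est = dual_band(R1, R2, LAM1, LAM2, T_BG)
```

In this noise-free case the inversion is exact; the experiment's point is that with real sensor noise the recovery degrades once the hotspot falls below a few percent of the pixel area.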

  5. Towards a standard framework to describe behaviours in the common-sloth (Bradypus variegatus Schinz, 1825): novel interactions data observed in distinct fragments of the Atlantic forest, Brazil.

    PubMed

    Silva, S M; Clozato, C L; Moraes-Barros, N; Morgante, J S

    2013-08-01

The common three-toed sloth is a widespread species, but locating and observing its individuals is greatly hindered by its biological features. Its camouflaged pelage, slow and quiet movements, and strictly arboreal habits have resulted in the publication of sparse, fragmented and unstandardized information on common sloth behaviour. Thus, herein we propose an updated, standardized framework of behavioural categories for the study of the species. Furthermore, we describe two never before reported interaction behaviours: a probable mating/courtship ritual between a male and a female, and an apparent recognition behaviour between two males. Finally, we highlight the contribution of short-duration fieldwork to the ethological study of this elusive species.

  6. Simple Waveforms, Simply Described

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2008-01-01

Since the first Lazarus Project calculations, it has been frequently noted that binary black hole merger waveforms are 'simple.' In this talk we examine some of the simple features of coalescence and merger waveforms from a variety of binary configurations. We suggest an interpretation of the waveforms in terms of an implicit rotating source. This allows a coherent description of both the inspiral waveforms, derivable from post-Newtonian (PN) calculations, and the numerically determined merger-ringdown. We focus particularly on similarities in the features of the various multipolar waveform components generated by various systems. The late-time phase evolution of most of these waveform components is accurately described with a simple analytic fit. We also discuss apparent relationships among phase and amplitude evolution. Taken together with PN information, the features we describe can provide an approximate analytic description of full coalescence waveforms, complementary to other analytic waveform approaches.

  7. Importance of Accurate Liquid Water Path for Estimation of Solar Radiation in Warm Boundary Layer Clouds: An Observational Study

    SciTech Connect

    Sengupta, Manajit; Clothiaux, Eugene E.; Ackerman, Thomas P.; Kato, Seiji; Min, Qilong

    2003-09-15

A one-year observational study of overcast boundary layer stratus at the U.S. Department of Energy Atmospheric Radiation Measurement Program Southern Great Plains site illustrates that surface radiation is primarily sensitive to cloud liquid water path, with cloud drop effective radius having a secondary influence. The mean, median and standard deviation of cloud liquid water path and cloud drop effective radius for the dataset are 0.120 mm, 0.101 mm, 0.108 mm, and 7.38 µm, 7.13 µm, 2.39 µm, respectively. Radiative transfer calculations demonstrate that cloud optical depth and cloud normalized forcing are respectively three and six times as sensitive to liquid water path variations as they are to effective radius variations, when the observed range of each of those variables is considered. Overall, there is a 79% correlation between observed and computed surface fluxes when using a fixed effective radius of 7.5 µm and observed liquid water paths in the calculations. One conclusion from this study is that measurement of the indirect aerosol effect will be problematic at the site, as variations in cloud liquid water path will most likely mask effects of variations in particle size.
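The factor-of-three sensitivity can be rationalized with the standard relation τ ≈ 3·LWP/(2·ρ_w·r_e): since τ scales as LWP/r_e, the ratio of the fractional variabilities of the two inputs controls their relative influence. The sketch below is a back-of-envelope check using the reported statistics, not the paper's full radiative transfer calculation.

```python
RHO_W = 1.0e6  # density of liquid water, g m^-3

def optical_depth(lwp_mm, re_um):
    """tau = 3*LWP / (2*rho_w*r_e) for a vertically uniform cloud
    (textbook relation for cloud optical depth)."""
    lwp_g_m2 = lwp_mm * 1.0e3  # 1 mm of liquid water path = 1000 g m^-2
    re_m = re_um * 1.0e-6
    return 3.0 * lwp_g_m2 / (2.0 * RHO_W * re_m)

# Fractional variability of each input, from the reported statistics
# (LWP 0.120 +/- 0.108 mm; r_e 7.38 +/- 2.39 um):
rel_lwp = 0.108 / 0.120
rel_re = 2.39 / 7.38
sensitivity_ratio = rel_lwp / rel_re  # tau ~ LWP / r_e, so this is ~ 3
```

With the mean values, τ comes out around 24, a plausible number for overcast stratus, and the variability ratio is about 2.8, consistent with the stated factor of three.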

  8. Relationship between bone mineral density and syndrome types described in traditional chinese medicine in chronic obstructive pulmonary disease: a preliminary clinical observation.

    PubMed

    Wang, Gang; Li, Ting-Qian; Mao, Bing; Wang, Lei; Wang, Lin; Wang, Zeng-Li; Chang, Jing; Xiong, Ze-Yu; Yang, Ding-Zhuo

    2005-01-01

Osteoporosis is a common finding following chronic obstructive pulmonary disease (COPD), but there are few reports on the relationship between bone mineral density (BMD) and the syndrome types described in traditional Chinese medicine (TCM) in patients with COPD. A cross-sectional medical survey was used in this study. Twenty-six male patients with COPD and 26 age-matched male healthy subjects were recruited. The TCM symptom questionnaire survey was administered, and the COPD patients were then divided into two subgroups: type of deficiency of the lung and spleen (TDLS) and type of deficiency of the lung, spleen and kidney (TDLSK). BMD of the lumbar spine (L2-4), non-dominant femoral neck (Neck), Ward's triangle (Ward's), and greater trochanter (Troch) were measured by dual-energy x-ray absorptiometry. In addition, other bone turnover markers were also examined. The results showed that BMD was significantly lower in TDLSK than in TDLS patients (p < 0.05), and BMD in TDLS patients without symptoms of kidney-vacuity showed a decreasing trend from healthy subjects to TDLS patients. Furthermore, there was a higher incidence of osteoporosis in patients with TDLSK than in those with TDLS (p < 0.05, OR > 2.0). Therefore, the data suggest that: (1) BMD might be a marker more sensitive than symptoms for the diagnosis of kidney-vacuity in COPD patients; (2) deficiency of the kidney may be the key factor in bone mineral loss; and (3) invigorating the kidney should be performed in advance, during the TDLS phase, in COPD patients.

  9. A New Coarse-Grained Model for E. coli Cytoplasm: Accurate Calculation of the Diffusion Coefficient of Proteins and Observation of Anomalous Diffusion

    PubMed Central

    Hasnain, Sabeeha; McClendon, Christopher L.; Hsu, Monica T.; Jacobson, Matthew P.; Bandyopadhyay, Pradipta

    2014-01-01

    A new coarse-grained model of the E. coli cytoplasm is developed by describing the proteins of the cytoplasm as flexible units consisting of one or more spheres that follow Brownian dynamics (BD), with hydrodynamic interactions (HI) accounted for by a mean-field approach. Extensive BD simulations were performed to calculate the diffusion coefficients of three different proteins in the cellular environment. The results are in close agreement with experimental or previously simulated values, where available. Control simulations without HI showed that use of HI is essential to obtain accurate diffusion coefficients. Anomalous diffusion inside the crowded cellular medium was investigated with Fractional Brownian motion analysis, and found to be present in this model. By running a series of control simulations in which various forces were removed systematically, it was found that repulsive interactions (volume exclusion) are the main cause for anomalous diffusion, with a secondary contribution from HI. PMID:25180859
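The anomalous-diffusion analysis mentioned above rests on the scaling of the mean-squared displacement, MSD(τ) ∝ τ^α, with α = 1 for normal diffusion and α < 1 for subdiffusion. The sketch below estimates α for an ordinary (uncrowded, HI-free) Brownian trajectory as a baseline; the paper's crowded-cytoplasm simulations find α < 1. Trajectory length and units are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ordinary 3-D Brownian trajectory: cumulative sum of Gaussian steps.
steps = rng.normal(0.0, 1.0, (200000, 3))
traj = np.cumsum(steps, axis=0)

def msd_exponent(traj, lags):
    """Time-averaged MSD at each lag; the log-log slope is the exponent alpha."""
    msd = [np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope

alpha = msd_exponent(traj, list(range(1, 101)))
```

For this free random walk α should come out very close to 1; running the same analysis on a trajectory from a crowded simulation is how a subdiffusive exponent would be detected.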

  10. CLARREO Cornerstone of the Earth Observing System: Measuring Decadal Change Through Accurate Emitted Infrared and Reflected Solar Spectra and Radio Occultation

    NASA Technical Reports Server (NTRS)

    Sandford, Stephen P.

    2010-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is one of four Tier 1 missions recommended by the recent NRC Decadal Survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to provide accurate, broadly acknowledged climate records that are used to enable validated long-term climate projections that become the foundation for informed decisions on mitigation and adaptation policies that address the effects of climate change on society. The CLARREO mission accomplishes this critical objective through rigorous SI traceable decadal change observations that are sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. These same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. For the first time CLARREO will make highly accurate, global, SI-traceable decadal change observations sensitive to the most critical, but least understood, climate forcings, responses, and feedbacks. The CLARREO breakthrough is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. The required accuracy levels are determined so that climate trend signals can be detected against a background of naturally occurring variability. Climate system natural variability therefore determines what level of accuracy is overkill, and what level is critical to obtain. In this sense, the CLARREO mission requirements are considered optimal from a science value perspective. The accuracy for decadal change traceability to SI standards includes uncertainties associated with instrument calibration, satellite orbit sampling, and analysis methods. Unlike most space missions, the CLARREO requirements are driven not by the instantaneous accuracy of the measurements, but by accuracy in

  11. Towards an Accurate Alignment of the VLBI Frame and the Future Gaia Optical Frame: Global VLBI Imaging Observations of a Sample of Candidate Sources for this Alignment

    NASA Astrophysics Data System (ADS)

    Bourda, G.; Collioud, A.; Charlot, P.; Porcas, R.; Garrington, S.

    2012-12-01

The space astrometry mission Gaia will construct a dense optical QSO-based celestial reference frame. For consistency between optical and radio positions, it will be important to align the Gaia and VLBI frames with the highest accuracy. However, the number of quasars that are bright at optical wavelengths (for the best position accuracy with Gaia), that have a compact core (to be detectable on VLBI scales), and that do not exhibit complex structures (to ensure a good astrometric quality) was found to be limited. It was then realized that the densification of the list of such objects was necessary. Therefore, we initiated a multi-step VLBI observational project, dedicated to finding additional suitable radio sources for aligning the two frames. The sample consists of ~450 optically bright, weak extragalactic radio sources, which have been selected by cross-correlating optical and radio catalogs. The initial observations, aimed at checking whether these sources are detectable with VLBI, and conducted with the European VLBI Network (EVN) in 2007, showed an excellent ~90% detection rate. The second step, dedicated to identifying the most point-like sources of the sample by imaging their VLBI structures, was initiated in 2008. Approximately 25% of the detected targets were observed with the Global VLBI array (EVN+VLBA; Very Long Baseline Array) during a pilot imaging experiment, revealing that approximately 50% of them are point-like sources on VLBI scales. The rest of the sources were observed during three additional imaging experiments in March 2010, November 2010, and March 2011. In this paper, we present the results of these imaging campaigns and report plans for the final stage of the project, which will be dedicated to accurately measuring the VLBI position of the most point-like sources.

  12. Describe Your Favorite Teacher.

    ERIC Educational Resources Information Center

    Dill, Isaac; Dill, Vicky

    1993-01-01

    A third grader describes Ms. Gonzalez, his favorite teacher, who left to accept a more lucrative teaching assignment. Ms. Gonzalez' butterflies unit covered everything from songs about social butterflies to paintings of butterfly wings, anatomy studies, and student haiku poems and biographies. Students studied biology by growing popcorn plants…

  13. How Mathematics Describes Life

    NASA Astrophysics Data System (ADS)

    Teklu, Abraham

    2017-01-01

    The circle of life is something we have all heard of from somewhere, but we don't usually try to calculate it. For some time we have been working on analyzing a predator-prey model to better understand how mathematics can describe life, in particular the interaction between two different species. The model we are analyzing is called the Holling-Tanner model, and it cannot be solved analytically. The Holling-Tanner model is a very common model in population dynamics because it is a simple descriptor of how predators and prey interact. The model is a system of two differential equations. The model is not specific to any particular set of species and so it can describe predator-prey species ranging from lions and zebras to white blood cells and infections. One thing all these systems have in common are critical points. A critical point is a value for both populations that keeps both populations constant. It is important because at this point the differential equations are equal to zero. For this model there are two critical points, a predator free critical point and a coexistence critical point. Most of the analysis we did is on the coexistence critical point because the predator free critical point is always unstable and frankly less interesting than the coexistence critical point. What we did is consider two regimes for the differential equations, large B and small B. B, A, and C are parameters in the differential equations that control the system where B measures how responsive the predators are to change in the population, A represents predation of the prey, and C represents the satiation point of the prey population. For the large B case we were able to approximate the system of differential equations by a single scalar equation. For the small B case we were able to predict the limit cycle. The limit cycle is a process of the predator and prey populations growing and shrinking periodically. 
The model has a limit cycle in the regime of small B, which we solved for.
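The coexistence critical point discussed above can be located numerically. The sketch below uses one common textbook parameterization of the Holling-Tanner system (the abstract's A, B, C correspond to a different scaling, and all values here are illustrative): at coexistence the predator equation forces y = x/h, and substituting into the prey equation leaves a single root to bisect for.

```python
# Holling-Tanner predator-prey model, one common form:
#   dx/dt = r*x*(1 - x/K) - q*x*y/(x + a)   (prey)
#   dy/dt = s*y*(1 - h*y/x)                 (predator)
R_, K_, Q_, A_, S_, H_ = 1.0, 7.0, 1.0, 1.0, 0.2, 1.0

def fx(x, y):
    return R_ * x * (1.0 - x / K_) - Q_ * x * y / (x + A_)

def fy(x, y):
    return S_ * y * (1.0 - H_ * y / x)

def coexistence_point():
    """fy = 0 gives y = x/h; substitute into fx = 0 and bisect for x.
    g is strictly decreasing on (0, K), so bisection converges."""
    g = lambda x: R_ * (1.0 - x / K_) - Q_ * (x / H_) / (x + A_)
    lo, hi = 1e-9, K_
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return x, x / H_
```

Linearizing the system at this point (the Jacobian's trace and determinant) is then what decides stability and, in the small-B regime the abstract describes, the birth of the limit cycle.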

  14. New Described Dermatological Disorders

    PubMed Central

    Cevirgen Cemil, Bengu; Keseroglu, Havva Ozge; Kaya Akis, Havva

    2014-01-01

Many advances in dermatology have been made in recent years. In the present review article, newly described disorders from the last six years are presented in detail. We divided these reports into different sections, including syndromes, autoinflammatory diseases, tumors, and unclassified diseases. Syndromes included are “circumferential skin creases Kunze type” and “unusual type of pachyonychia congenita or a new syndrome”; autoinflammatory diseases include “chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature (CANDLE) syndrome,” “pyoderma gangrenosum, acne, and hidradenitis suppurativa (PASH) syndrome,” and “pyogenic arthritis, pyoderma gangrenosum, acne, and hidradenitis suppurativa (PAPASH) syndrome”; tumors include “acquired reactive digital fibroma,” “onychocytic matricoma and onychocytic carcinoma,” “infundibulocystic nail bed squamous cell carcinoma,” and “acral histiocytic nodules”; unclassified disorders include “saurian papulosis,” “symmetrical acrokeratoderma,” “confetti-like macular atrophy,” “skin spicules,” and “erythema papulosa semicircularis recidivans.” PMID:25243162

  15. THE HYPERFINE STRUCTURE OF THE ROTATIONAL SPECTRUM OF HDO AND ITS EXTENSION TO THE THz REGION: ACCURATE REST FREQUENCIES AND SPECTROSCOPIC PARAMETERS FOR ASTROPHYSICAL OBSERVATIONS

    SciTech Connect

    Cazzoli, Gabriele; Lattanzi, Valerio; Puzzarini, Cristina; Alonso, José Luis; Gauss, Jürgen

    2015-06-10

    The rotational spectrum of the mono-deuterated isotopologue of water, HD{sup 16}O, has been investigated in the millimeter- and submillimeter-wave frequency regions, up to 1.6 THz. The Lamb-dip technique has been exploited to obtain sub-Doppler resolution and to resolve the hyperfine (hf) structure due to the deuterium and hydrogen nuclei, thus enabling the accurate determination of the corresponding hf parameters. Their experimental determination has been supported by high-level quantum-chemical calculations. The Lamb-dip measurements have been supplemented by Doppler-limited measurements (weak high-J and high-frequency transitions) in order to extend the predictive capability of the available spectroscopic constants. The possibility of resolving hf splittings in astronomical spectra has been discussed.

  16. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
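A one-hidden-layer, eight-node network of the kind described can be sketched from scratch. The target function below is a synthetic stand-in for a compact tracer-tracer relation (not HALOE data), and plain gradient descent is used instead of the paper's Quickprop; sizes, seeds, and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic smooth 1-D "correlation curve" to fit.
x = rng.uniform(-1.0, 1.0, (256, 1))
y = np.tanh(2.0 * x) + 0.3 * x ** 2

# One hidden layer with 8 tanh nodes, linear output.
w1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
w2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(inp):
    hidden = np.tanh(inp @ w1 + b1)
    return hidden, hidden @ w2 + b2

def mse():
    return float(np.mean((forward(x)[1] - y) ** 2))

loss_initial = mse()
lr = 0.05
for _ in range(2000):
    hidden, pred = forward(x)
    err = (pred - y) / len(x)            # gradient of 0.5*MSE w.r.t. predictions
    gw2, gb2 = hidden.T @ err, err.sum(0)
    dhid = err @ w2.T * (1.0 - hidden ** 2)   # backprop through tanh
    gw1, gb1 = x.T @ dhid, dhid.sum(0)
    w2 -= lr * gw2; b2 -= lr * gb2
    w1 -= lr * gw1; b1 -= lr * gb1
```

Even this minimal full-batch training drives the fit error down sharply, which is the property the paper exploits: a compact network can stand in for a whole family of correlation curves.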

  17. Challenges in describing ribosome dynamics

    NASA Astrophysics Data System (ADS)

    Nguyen, Kien; Whitford, Paul Charles

    2017-04-01

    For decades, protein folding and functional dynamics have been described in terms of diffusive motion across an underlying energy landscape. With continued advances in structural biology and high-performance computing, the field is positioned to extend these approaches to large biomolecular assemblies. Through the application of energy landscape techniques to the ribosome, one may work towards establishing a comprehensive description of the dynamics, which will bridge theoretical concepts and experimental observations. In this perspective, we discuss a few of the challenges that will need to be addressed as we extend the application of landscape principles to the ribosome.

  18. Some properties of negative cloud-to-ground flashes from observations of a local thunderstorm based on accurate-stroke-count studies

    NASA Astrophysics Data System (ADS)

    Zhu, Baoyou; Ma, Ming; Xu, Weiwei; Ma, Dong

    2015-12-01

Properties of negative cloud-to-ground (CG) lightning flashes, in terms of the number of strokes per flash, inter-stroke intervals and the relative intensity of subsequent and first strokes, were derived from accurate-stroke-count studies of all 1085 negative flashes from a local thunderstorm. The percentage of single-stroke flashes and the stroke multiplicity evolved significantly over the whole life cycle of the studied thunderstorm. The occurrence probability of negative CG flashes decreased exponentially with the increasing number of strokes per flash. About 30.5% of negative CG flashes contained only one stroke, and the number of strokes per flash averaged 3.3. In a subset of 753 negative multiple-stroke flashes, about 41.4% contained at least one subsequent stroke stronger than the corresponding first stroke. Subsequent strokes tended to decrease in strength with their order, and the ratio of subsequent to first stroke peaks had a geometric mean value of 0.52. Interestingly, negative CG flashes of higher multiplicity tended to have stronger initial strokes. The 2525 inter-stroke intervals showed a roughly log-normal distribution with a geometric mean value of 62 ms. For CG flashes of a given multiplicity, geometric mean inter-stroke intervals tended to decrease with the increasing number of strokes per flash, while intervals associated with higher-order strokes tended to be larger than those associated with low-order strokes.
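For log-normally distributed intervals such as these, the geometric mean (the exponential of the mean log) is the natural summary statistic and coincides with the distribution's median. The sketch below uses synthetic samples whose median matches the reported 62 ms; the spread σ is a guess, and only the sample size mirrors the 2525 measured intervals.

```python
import math
import random

random.seed(1)

# Synthetic inter-stroke intervals (ms) from a log-normal distribution.
MU, SIGMA = math.log(62.0), 0.8
intervals = [random.lognormvariate(MU, SIGMA) for _ in range(2525)]

def geometric_mean(values):
    """exp of the mean log: the natural statistic for log-normal data."""
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

On real data, comparing the geometric mean across stroke orders and multiplicities is exactly the kind of analysis reported in the abstract.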

  19. Masses of the components of SB2 binaries observed with Gaia - III. Accurate SB2 orbits for 10 binaries and masses of HIP 87895

    NASA Astrophysics Data System (ADS)

    Kiefer, F.; Halbwachs, J.-L.; Arenou, F.; Pourbaix, D.; Famaey, B.; Guillout, P.; Lebreton, Y.; Nebot Gómez-Morán, A.; Mazeh, T.; Salomon, J.-B.; Soubiran, C.; Tal-Or, L.

    2016-05-01

    In anticipation of the Gaia astrometric mission, a large sample of spectroscopic binaries has been observed since 2010 with the Spectrographe pour l'Observation des PHénomènes des Intérieurs Stellaires et des Exoplanètes spectrograph at the Haute-Provence Observatory. Our aim is to derive the orbital elements of double-lined spectroscopic binaries (SB2s) with an accuracy sufficient to finally obtain the masses of the components with relative errors as small as 1 per cent when the astrometric measurements of Gaia are taken into account. In this paper, we present the results from five years of observations of 10 SB2 systems with periods ranging from 37 to 881 d. Using the TODMOR algorithm, we computed radial velocities from the spectra, and then derived the orbital elements of these binary systems. The minimum masses of the components are then obtained with an accuracy better than 1.2 per cent for the 10 binaries. Combining the radial velocities with existing interferometric measurements, we derived the masses of the primary and secondary components of HIP 87895 with an accuracy of 0.98 and 1.2 per cent, respectively.

  20. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  1. Accomplishments of the MUSICA project to provide accurate, long-term, global and high-resolution observations of tropospheric {H2O,δD} pairs - a review

    NASA Astrophysics Data System (ADS)

    Schneider, Matthias; Wiegele, Andreas; Barthlott, Sabine; González, Yenny; Christner, Emanuel; Dyroff, Christoph; García, Omaira E.; Hase, Frank; Blumenstock, Thomas; Sepúlveda, Eliezer; Mengistu Tsidu, Gizaw; Takele Kenea, Samuel; Rodríguez, Sergio; Andrey, Javier

    2016-07-01

In the lower/middle troposphere, {H2O,δD} pairs are good proxies for moisture pathways; however, their observation, in particular when using remote sensing techniques, is challenging. The project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water) addresses this challenge by integrating remote sensing with in situ measurement techniques. The aim is to retrieve calibrated tropospheric {H2O,δD} pairs from the middle infrared spectra measured from the ground by FTIR (Fourier transform infrared) spectrometers of the NDACC (Network for the Detection of Atmospheric Composition Change) and the thermal nadir spectra measured by IASI (Infrared Atmospheric Sounding Interferometer) aboard the MetOp satellites. In this paper, we present the final MUSICA products, and discuss the characteristics and potential of the NDACC/FTIR and MetOp/IASI {H2O,δD} data pairs. First, we briefly summarize the particularities of an {H2O,δD} pair retrieval. Second, we show that the remote sensing data of the final product version are absolutely calibrated with respect to H2O and δD in situ profile references measured in the subtropics, between 0 and 7 km. Third, we reveal that the {H2O,δD} pair distributions obtained from the different remote sensors are consistent and allow distinct lower/middle tropospheric moisture pathways to be identified in agreement with multi-year in situ references. Fourth, we document the possibilities of the NDACC/FTIR instruments for climatological studies (due to long-term monitoring) and of the MetOp/IASI sensors for observing diurnal signals on a quasi-global scale and with high horizontal resolution. Fifth, we discuss the risk of misinterpreting {H2O,δD} pair distributions due to incomplete processing of the remote sensing products.

  2. Model describes subsea control dynamics

    SciTech Connect

    Not Available

    1988-02-01

    A mathematical model of the hydraulic control systems for subsea completions and their umbilicals has been developed and applied successfully to Jabiru and Challis field production projects in the Timor Sea. The model overcomes the limitations of conventional linear steady state models and yields for the hydraulic system an accurate description of its dynamic response, including the valve shut-in times and the pressure transients. Results of numerical simulations based on the model are in good agreement with measurements of the dynamic response of the tree valves and umbilicals made during land testing.

  3. A Fibre-Reinforced Poroviscoelastic Model Accurately Describes the Biomechanical Behaviour of the Rat Achilles Tendon

    PubMed Central

    Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna

    2015-01-01

    Background Computational models of the Achilles tendon can help in understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon's biomechanical response. However, available models of the Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the experimentally observed biomechanical behaviour of the rat Achilles tendon. Materials and Methods We developed a new material model of the Achilles tendon which considers the tendon's main constituents, namely water, the proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computation of large deformations of the tendon, and the collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment, and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between the experimental force data and the model output. Results and Conclusions All specimen models were successfully fitted to the experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant, softer tendon behaviour at reduced strain rates, and a stiffer, more brittle response at higher strain rates. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour, with larger strains producing slower relaxation rates than smaller strain levels. Our simulations showed that the collagen fibres are the main load-bearing component of the Achilles tendon during tensile loading, and that their orientation plays an important role in the tendon's viscoelastic response. In conclusion, this model can capture the repetitive loading and unloading behaviour of intact, healthy Achilles tendons, a critical first step towards understanding tendon homeostasis and function, as this biomechanical response changes in diseased tendons. PMID:26030436
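The calibration step, minimising the RMS error between experimental force data and model output, can be sketched with a toy relaxation model. The exponential form and all numbers below are assumed stand-ins for illustration, not the paper's fibre-reinforced poroviscoelastic model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in model: single-exponential force relaxation.
def model_force(t, F0, F_inf, tau):
    return F_inf + (F0 - F_inf) * np.exp(-t / tau)

# Synthetic "experimental" data with assumed true parameters and noise.
t = np.linspace(0.0, 10.0, 200)
true_params = (12.0, 4.0, 2.5)
rng = np.random.default_rng(0)
f_exp = model_force(t, *true_params) + rng.normal(0.0, 0.05, t.size)

# Objective: root-mean-squared error between model output and data.
def rms(params):
    return np.sqrt(np.mean((model_force(t, *params) - f_exp) ** 2))

fit = minimize(rms, x0=(10.0, 3.0, 1.0), method="Nelder-Mead")
F0_hat, F_inf_hat, tau_hat = fit.x  # recovered close to (12, 4, 2.5)
```

The same pattern (parameter vector, forward model, RMS objective, optimiser) scales to specimen-specific finite element models, with the forward model replaced by an FE simulation.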

  4. Five Describing Factors of Dyslexia

    ERIC Educational Resources Information Center

    Tamboer, Peter; Vorst, Harrie C. M.; Oort, Frans J.

    2016-01-01

    Two subtypes of dyslexia (phonological, visual) have been under debate in various studies. However, the number of symptoms of dyslexia described in the literature exceeds the number of subtypes, and underlying relations remain unclear. We investigated underlying cognitive features of dyslexia with exploratory and confirmatory factor analyses. A…

  5. Utilizing prospective sequence analysis of SHH, ZIC2, SIX3 and TGIF in holoprosencephaly probands to describe the parameters limiting the observed frequency of mutant gene×gene interactions.

    PubMed

    Roessler, Erich; Vélez, Jorge I; Zhou, Nan; Muenke, Maximilian

    2012-04-01

    Clinical molecular diagnostic centers routinely screen SHH, ZIC2, SIX3 and TGIF for mutations that can help to explain holoprosencephaly and related brain malformations. Here we report a prospective Sanger sequence analysis of 189 unrelated probands referred to our diagnostic lab for genetic testing. We identified 28 novel unique mutations in this group (15%) and no instances of deleterious mutations in two genes in the same subject. Our result extends that of other diagnostic centers and suggests that among the aggregate 475 prospectively sequenced holoprosencephaly probands there is negligible evidence for direct gene-gene interactions among these tested genes. We model the predictions of the observed mutation frequency in the context of the hypothesis that gene×gene interactions are a prerequisite for forebrain malformations, i.e. the "multiple-hit" hypothesis. We conclude that such a direct interaction would be expected to be rare and that more subtle genetic and environmental interactions are a better explanation for the clinically observed inter- and intra-familial variability.
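The expectation modelling alluded to above can be sketched with a simple independence calculation: under the null of no required gene×gene interaction, how many double mutants would be expected among 475 probands? The per-gene carrier rates below are assumed for illustration only, chosen to sum to roughly the 15% overall single-mutation yield reported:

```python
import math

# Assumed (illustrative) per-gene carrier rates, summing to ~15%.
rates = {"SHH": 0.06, "ZIC2": 0.05, "SIX3": 0.03, "TGIF": 0.01}
n_probands = 475

# Under independence: probability of mutations in zero, one, or >= 2 genes.
p_none = math.prod(1 - p for p in rates.values())
p_one = sum(
    p * math.prod(1 - q for other, q in rates.items() if other != gene)
    for gene, p in rates.items()
)
p_two_plus = 1.0 - p_none - p_one
expected_doubles = n_probands * p_two_plus
```

Comparing such an expectation against the observed count of double mutants is the essence of testing the "multiple-hit" hypothesis.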

  6. How to describe disordered structures

    PubMed Central

    Nishio, Kengo; Miyazaki, Takehide

    2016-01-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way. PMID:27064833

  7. How to describe disordered structures

    NASA Astrophysics Data System (ADS)

    Nishio, Kengo; Miyazaki, Takehide

    2016-04-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way.

  8. Evaluation of Geographic Indices Describing Health Care Utilization

    PubMed Central

    Park, Jong Heon

    2017-01-01

    Objectives The accurate measurement of geographic patterns of health care utilization is a prerequisite for the study of geographic variations in health care utilization. While several measures have been developed to measure how accurately geographic units reflect the health care utilization patterns of residents, they have been only applied to hospitalization and need further evaluation. This study aimed to evaluate geographic indices describing health care utilization. Methods We measured the utilization rate and four health care utilization indices (localization index, outflow index, inflow index, and net patient flow) for eight major procedures (coronary artery bypass graft surgery, percutaneous transluminal coronary angioplasty, surgery after hip fracture, knee replacement surgery, caesarean sections, hysterectomy, computed tomography scans, and magnetic resonance imaging scans) according to three levels of geographic units in Korea. Data were obtained from the National Health Insurance database in Korea. We evaluated the associations among the health care utilization indices and the utilization rates. Results In higher-level geographic units, the localization index tended to be high, while the inflow index and outflow index were lower. The indices showed different patterns depending on the procedure. A strong negative correlation between the localization index and the outflow index was observed for all procedures. Net patient flow showed a moderate positive correlation with the localization index and the inflow index. Conclusions Health care utilization indices can be used as a proxy to describe the utilization pattern of a procedure in a geographic unit. PMID:28173689
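The four indices can be sketched from a patient-flow matrix using their conventional definitions (localization = share of residents treated within their own unit, outflow/inflow its complements from the residence and provision sides, net patient flow = care provided relative to care consumed). The counts below are hypothetical:

```python
import numpy as np

# Hypothetical patient-flow matrix:
# rows = region of residence, columns = region of treatment.
flows = np.array([[80, 15, 5],
                  [10, 60, 30],
                  [5, 10, 85]])

residents = flows.sum(axis=1)  # utilisation by each region's residents
provided = flows.sum(axis=0)   # utilisation delivered within each region
local = np.diag(flows)         # residents treated in their own region

localization = local / residents         # share of residents treated locally
outflow = 1.0 - localization             # share of residents treated elsewhere
inflow = 1.0 - local / provided          # share of local care given to outsiders
net_patient_flow = provided / residents  # >1: net importer of patients
```

The strong negative correlation between localization and outflow reported in the paper is visible in the algebra: with these definitions they are exact complements within each region.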

  9. Observation

    ERIC Educational Resources Information Center

    Helfrich, Shannon

    2016-01-01

    Helfrich addresses two perspectives from which to think about observation in the classroom: that of the teacher observing her classroom, her group, and its needs, and that of the outside observer coming into the classroom. Offering advice from her own experience, she encourages and defends both. Do not be afraid of the disruption of outside…

  10. Observations

    ERIC Educational Resources Information Center

    Joosten, Albert Max

    2016-01-01

    Joosten begins his article by telling us that love and knowledge together are the foundation for our work with children. This combination is at the heart of our observation. With this as the foundation, he goes on to offer practical advice to aid our practice of observation. He offers a "List of Objects of Observation" to help guide our…

  11. Observation

    ERIC Educational Resources Information Center

    Kripalani, Lakshmi A.

    2016-01-01

    The adult who is inexperienced in the art of observation may, even with the best intentions, react to a child's behavior in a way that hinders instead of helping the child's development. Kripalani outlines the need for training and practice in observation in order to "understand the needs of the children and...to understand how to remove…

  12. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
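The meaning of "order of accuracy" can be demonstrated with two textbook central-difference stencils (illustrative only; these are not the paper's specific computational-aeroacoustics algorithms). Halving the grid spacing cuts the error by a factor of 2^p for a p-th-order scheme:

```python
import numpy as np

def d1_2nd(f, x, h):
    # 2nd-order central difference for f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_4th(f, x, h):
    # 4th-order central difference for f'(x)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x0, exact = 0.3, np.cos(0.3)   # test on f = sin, f' = cos
err2 = [abs(d1_2nd(np.sin, x0, h) - exact) for h in (0.1, 0.05)]
err4 = [abs(d1_4th(np.sin, x0, h) - exact) for h in (0.1, 0.05)]

ratio2 = err2[0] / err2[1]  # ~4: halving h quarters the error (order 2)
ratio4 = err4[0] / err4[1]  # ~16: the higher-order scheme converges faster
```

At eleventh order, as in the paper's examples, the corresponding ratio would be ~2^11, which is why so few points per wavelength suffice for long propagation distances.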

  13. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
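The effect of a monotonicity constraint is easy to demonstrate with SciPy's PCHIP interpolant, used here as an accessible stand-in for the paper's algorithms (which additionally preserve uniform third- and fourth-order accuracy). On monotone step-like data, an unconstrained cubic spline rings past the data range while the monotone interpolant does not:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone step-like data.
x = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([0., 0., 0., 1., 1., 1.])

xs = np.linspace(0.0, 5.0, 501)
spline_vals = CubicSpline(x, y)(xs)      # unconstrained cubic spline
pchip_vals = PchipInterpolator(x, y)(xs) # monotone piecewise cubic

spline_overshoot = spline_vals.max() - 1.0  # > 0: ringing past the data
pchip_overshoot = pchip_vals.max() - 1.0    # 0: stays within the data range
```

The accuracy loss near strict extrema that the abstract mentions is the price PCHIP-style limiting pays; the paper's contribution is relaxing the constraint geometrically (via the median function) to recover full accuracy.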

  14. Quantum formalism to describe binocular rivalry.

    PubMed

    Manousakis, Efstratios

    2009-11-01

    On the basis of the general character and operation of the process of perception, a formalism is sought to mathematically describe the subjective or abstract/mental process of perception. It is shown that the formalism of orthodox quantum theory of measurement, where the observer plays a key role, is a broader mathematical foundation which can be adopted to describe the dynamics of the subjective experience. The mathematical formalism describes the psychophysical dynamics of the subjective or cognitive experience as communicated to us by the subject. Subsequently, the formalism is used to describe simple perception processes and, in particular, to describe the probability distribution of dominance duration obtained from the testimony of subjects experiencing binocular rivalry. Using this theory and parameters based on known values of neuronal oscillation frequencies and firing rates, the calculated probability distribution of dominance duration of rival states in binocular rivalry under various conditions is found to be in good agreement with available experimental data. This theory naturally explains an observed marked increase in dominance duration in binocular rivalry upon periodic interruption of stimulus and yields testable predictions for the distribution of perceptual alternation in time.
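For context, dominance durations in binocular rivalry are classically well fitted by a gamma distribution, and it is against this kind of empirical distribution that any model is evaluated. The sketch below is a conventional stochastic description, not the paper's quantum formalism, with assumed shape and scale parameters:

```python
import numpy as np

# Conventional gamma-distributed dominance durations (assumed parameters,
# in seconds); NOT the paper's quantum-formalism model.
rng = np.random.default_rng(1)
shape, scale = 3.5, 0.6
durations = rng.gamma(shape, scale, size=10_000)

mean_dominance = durations.mean()      # ~ shape * scale = 2.1 s
cv = durations.std() / mean_dominance  # gamma CV = 1 / sqrt(shape)
```

A model such as the paper's must reproduce both the mean duration and the characteristic shape (unimodal, right-skewed) of this distribution under various stimulus conditions.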

  15. CRITICAL ELEMENTS IN DESCRIBING AND UNDERSTANDING OUR NATION'S AQUATIC RESOURCES

    EPA Science Inventory

    Despite spending $115 billion per year on environmental actions in the United States, we have only a limited ability to describe the effectiveness of these expenditures. Moreover, after decades of such investments, we cannot accurately describe status and trends in the nation's a...

  16. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    EPA Pesticide Factsheets

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad

  17. Challenges in describing nuclear reactions outcomes at near-barrier energies

    NASA Astrophysics Data System (ADS)

    Dasgupta, M.; Simpson, E. C.; Kalkal, S.; Cook, K. J.; Carter, I. P.; Hinde, D. J.; Luong, D. H.

    2017-01-01

    The properties of light nuclei such as 6Li, 7Li, 9Be and 12C, and their reaction outcomes, are known to be strongly influenced by their underlying α-cluster structure. Reaction models do not yet exist that allow accurate predictions of outcomes following a collision of these nuclei with another nucleus. As a result, reaction models within GEANT and nuclear fusion models do not accurately describe measured products or cross sections. Recent measurements at the Australian National University have shown new reaction modes that lead to breakup of 6Li and 7Li into lighter clusters, presenting a further challenge to current models. The new observations and subsequent model developments will impact on accurate predictions of the reaction outcomes of 12C, a three-α-cluster nucleus, which is used in heavy-ion therapy.

  18. Accurate spectral color measurements

    NASA Astrophysics Data System (ADS)

    Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.

    1999-08-01

    Surface color measurement is important in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements, a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and can thus distinguish millions of colors. This 0.5 unit difference should be the goal for precise color measurements. The limit is not a problem if we only want to measure the color difference between two samples, but if we also want exact color coordinate values, accuracy problems arise: the values reported by two instruments can be astonishingly different. The accuracy of an instrument used in color measurement may be degraded by various errors such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular-included and specular-excluded modes. Correction formulas should therefore be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise between measuring time, conditions and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements, and show what accuracy demands a good colorimeter should meet.
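The "0.5 CIELAB units" threshold refers to the CIE76 colour difference, the Euclidean distance in CIELAB space. A minimal sketch, with two hypothetical L*a*b* readings of the same tile on two instruments:

```python
import math

# CIE76 colour difference: Euclidean distance in CIELAB.
def delta_e_76(lab1, lab2):
    return math.dist(lab1, lab2)

# Hypothetical readings of one tile on two instruments (assumed values).
reading_a = (52.0, 12.5, -8.0)
reading_b = (52.3, 12.1, -7.8)
dE = delta_e_76(reading_a, reading_b)  # ~0.54, right at the visibility limit
```

An inter-instrument disagreement of this size is exactly the situation the abstract describes: each instrument may be internally precise, yet their absolute coordinates differ by a just-perceptible amount.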

  19. [Describing language of spectra and rough set].

    PubMed

    Qiu, Bo; Hu, Zhan-yi; Zhao, Yong-heng

    2002-06-01

    Traditionally, spectra in astronomy have been analyzed by experience, and until now there has been no suitable theoretical framework for describing spectra, perhaps owing to the small spectral datasets astronomers could obtain with earlier instruments. With the rapid development of telescopes, especially LAMOST, a large telescope that can collect more than 20,000 spectra in a single observing night, spectral datasets are growing very fast. Faced with these voluminous datasets, the traditional spectrum-processing approach based simply on experience becomes unsuitable. In this paper, we develop a new language, the describing language of spectra (DLS), to describe spectra of celestial bodies by defining BEs (basic elements). Based on DLS, we introduce the method of RSDA (rough set and data analysis), a data-mining technique. Using RSDA we extract rules for stellar spectra, and this experiment can be regarded as an application of DLS.

  20. [Who really first described lesser blood circulation?].

    PubMed

    Masić, Izet; Dilić, Mirza

    2007-01-01

    More than 740 years ago, Ibn al-Nafis (1210-1288), professor and director of the Al Mansouri Hospital in Cairo, described the lesser (pulmonary) blood circulation in his work on the pulse. His name appears frequently in popular web search engines, especially in English, yet most references to Ibn al-Nafis are in Arabic or Turkish, although his discovery is of worldwide importance. Masić (1993) is among the few who have emphasized this event in indexed journals, a point also debated by authors from Great Britain and the USA in the respected journal Annals of Internal Medicine. Most citations instead credit two later "describers" or "discoverers" of the pulmonary circulation: Michael Servetus (1511-1553), physician and theologian, and William Harvey (1578-1657), who described the circulatory system in his work "Exercitatio anatomica de motu cordis et sanguinis in animalibus", published in 1628. For his scientific work, Ibn al-Nafis has been called the "Second Avicenna". Over the centuries some of his works were translated into Latin, and some were published as reprints in Arabic. Professor Fuat Sezgin of Frankfurt published a compendium of Ibn al-Nafis's papers in 1997, and Masić (1997) published a monograph on him. The importance of Ibn al-Nafis's epochal discovery lies in the fact that it was based solely on deductive reasoning: his description of the lesser circulation did not arise from observation of corpses during dissection. It is known that he paid no heed to Galen's theories of the blood circulation. His prophetic sentence reads: "If I did not know that my works will last ten thousand years after me, I would not have written them." Sapienti sat.

  1. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images, which uses a cross-correlation of the fixed pattern that exists in all raw IUE images, is described.
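The core idea of registration by cross-correlation can be shown on a toy image: the peak of the circular cross-correlation between a reference frame and a shifted copy recovers the displacement. This is a generic sketch, not the IUE pipeline:

```python
import numpy as np

# Toy example: recover a known integer shift via FFT cross-correlation.
rng = np.random.default_rng(2)
ref = rng.random((64, 64))                           # "fixed pattern"
shifted = np.roll(ref, shift=(3, -5), axis=(0, 1))   # displaced frame

# Circular cross-correlation via FFT; its argmax locates the displacement.
xc = np.fft.ifft2(np.fft.fft2(shifted) * np.conj(np.fft.fft2(ref))).real
dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
dy = dy - 64 if dy > 32 else dy  # wrap indices to signed shifts
dx = dx - 64 if dx > 32 else dx
# (dy, dx) recovers (3, -5)
```

Real registration additionally handles sub-pixel shifts and spatially varying distortion, but the correlation peak is the same basic measurement.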

  2. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
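The extrapolation step itself is generic and easy to sketch: combining a second-order central-difference estimate at spacings h and h/2 cancels the leading error term and yields a fourth-order result. A toy example on f = sin (not the paper's Schrödinger solver):

```python
import numpy as np

# 2nd-order central difference for f''(x).
def d2(f, x, h):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

f, x0, h = np.sin, 0.7, 0.1
coarse = d2(f, x0, h)
fine = d2(f, x0, h / 2)
extrap = (4.0 * fine - coarse) / 3.0  # Richardson combination

exact = -np.sin(x0)
# |extrap - exact| is orders of magnitude below |fine - exact|
```

The weights (4, -1)/3 follow from the h^2 leading error of the stencil; repeated extrapolation over further halvings cancels successive error terms, which is the mechanism behind the crude-mesh expectation values mentioned in the abstract.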

  3. The importance and attainment of accurate absolute radiometric calibration

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1984-01-01

    The importance of accurate absolute radiometric calibration is discussed by reference to the needs of those wishing to validate or use models describing the interaction of electromagnetic radiation with the atmosphere and earth surface features. The in-flight calibration methods used for the Landsat Thematic Mapper (TM) and the Systeme Probatoire d'Observation de la Terre, Haute Resolution visible (SPOT/HRV) systems are described and their limitations discussed. The questionable stability of in-flight absolute calibration methods suggests the use of a radiative transfer program to predict the apparent radiance, at the entrance pupil of the sensor, of a ground site of measured reflectance imaged through a well characterized atmosphere. The uncertainties of such a method are discussed.

  4. Methods for describing illumination colour uniformities

    NASA Astrophysics Data System (ADS)

    Rotscholl, Ingo; Trampert, Klaus; Herrmann, Franziska; Neumann, Cornelius

    2015-02-01

    Optimizing angular or spatial colour homogeneity has become an important task in many general lighting applications and first requires a valid description of illumination colour homogeneity. We analyse several frequently used methods for describing colour distributions, in theory and with measurement data, and explain why information about chromaticity coordinates, correlated colour temperature and global chromaticity coordinate distances is not sufficient for describing the perceived colour homogeneity of light distributions. We present local chromaticity coordinate distances as an extensible and easily implementable method for describing colour homogeneity distributions that is adaptable to the field of view through only one intuitive, physiologically meaningful parameter.

  5. How Do Children Describe Spatial Relationships?

    ERIC Educational Resources Information Center

    Cox, M. V.; Richardson, J. Ryder

    1985-01-01

    Describes a study of children's production of locative prepositions in order to test H. Clark's hypotheses regarding the acquisition of spatial terms. Subjects were required to describe the spatial arrangement of two balls arranged in each of three spatial dimensions. (SED)

  6. Systematically describing gross lesions in corals

    USGS Publications Warehouse

    Work, T.; Aeby, G.

    2006-01-01

    Many coral diseases are characterized based on gross descriptions, and, given the lack or difficulty of applying existing laboratory tools to understanding the causes of coral diseases, most new diseases will continue to be described based on their appearance in the field. Unfortunately, many existing descriptions of coral disease are ambiguous or open to subjective interpretation, making comparisons between oceans problematic. One reason for this is that the process of describing lesions is often confused with that of assigning causality for the lesion. However, causality is usually not obtainable in the field and requires additional laboratory tests. Because a concise and objective morphologic description provides the foundation for a case definition of any disease, there is a need for a consistent and standardized process for describing lesions of corals that focuses on morphology. We provide a framework to systematically describe and name diseases in corals involving 4 steps: (1) naming the disease, (2) describing the lesion, (3) formulating a morphologic diagnosis and (4) formulating an etiologic diagnosis. This process focuses field investigators on describing what they see and separates the process of describing a lesion from that of inferring causality, the latter being more appropriately done using laboratory techniques.

  7. Venus general atmosphere circulation described by Pioneer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The predominant weather pattern for Venus is described. Wind directions and wind velocities are given. Possible driving forces of the winds are presented and include solar heating, planetary rotation, and the greenhouse effect.

  8. Recently described neoplasms of the sinonasal tract.

    PubMed

    Bishop, Justin A

    2016-03-01

    Surgical pathology of the sinonasal region (i.e., nasal cavity and the paranasal sinuses) is notoriously difficult, due in part to the remarkable diversity of neoplasms that may be encountered in this area. In addition, a number of neoplasms have been only recently described in the sinonasal tract, further compounding the difficulty for pathologists who are not yet familiar with them. This manuscript will review the clinicopathologic features of some of the recently described sinonasal tumor types: NUT midline carcinoma, HPV-related carcinoma with adenoid cystic-like features, SMARCB1 (INI-1) deficient sinonasal carcinoma, biphenotypic sinonasal sarcoma, and adamantinoma-like Ewing family tumor.

  9. Physical Fields Described By Maxwell's Equations

    SciTech Connect

    Ahmetaj, Skender; Veseli, Ahmet; Jashari, Gani

    2007-04-23

    Fields that satisfy Maxwell's equations of motion are analyzed. The investigation carried out in this work shows that the free electromagnetic field, the massless spinor Dirac field, the massive spinor Dirac field, and some other fields are described by the same variational formulation. The conditions for a field to be described by Maxwell's equations of motion are given, along with some solutions of these conditions. The question arises as to which physical objects are described by the same or analogous equations of physics.

  10. Is the Water Heating Curve as Described?

    ERIC Educational Resources Information Center

    Riveros, H. G.; Oliva, A. I.

    2008-01-01

    We analysed the heating curve of water which is described in textbooks. An experiment combined with some simple heat transfer calculations is discussed. The theoretical behaviour can be altered by changing the conditions under which the experiment is modelled. By identifying and controlling the different parameters involved during the heating…

  11. USING TRACERS TO DESCRIBE NAPL HETEROGENEITY

    EPA Science Inventory

    Tracers are frequently used to estimate both the average travel time for water flow through the tracer-swept volume and the NAPL saturation. The same data can be used to develop a statistical distribution describing the hydraulic conductivity in the swept volume and a possible distri...
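The average travel time is conventionally estimated by temporal-moment analysis of the tracer breakthrough curve: the first moment of concentration versus time, normalised by the zeroth moment. A sketch on a synthetic pulse (the curve shape and parameters are invented):

```python
import numpy as np

# Synthetic breakthrough curve: gamma-shaped pulse with true mean
# travel time 2 * 4 = 8 h (assumed values for illustration).
t = np.linspace(0.0, 50.0, 2001)  # time, hours
dt = t[1] - t[0]
c = t * np.exp(-t / 4.0)          # tracer concentration vs time

# Riemann-sum temporal moments.
m0 = np.sum(c) * dt                          # zeroth moment (recovered mass)
mean_travel_time = np.sum(t * c) * dt / m0   # first normalised moment, ~8 h
```

Higher normalised moments characterise the spread of the breakthrough curve, which is what links tracer data to a statistical distribution of hydraulic conductivity in the swept volume.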

  12. Describing Technological Paradigm Transitions: A Methodological Exploration.

    ERIC Educational Resources Information Center

    Wallace, Danny P.; Van Fleet, Connie

    1997-01-01

    Presents a humorous treatment of the "sessio taurino" (or humanistic inquiry) technique for describing changes in technological models. The fundamental tool of "sessio taurino" is a loosely-structured event known as the session, which is of indeterminate length, involves a flexible number of participants, and utilizes a…

  13. Using Percentages to Describe and Calculate Change

    ERIC Educational Resources Information Center

    Price, Beth; Steinle, Vicki; Stacey, Kaye; Gvozdenko, Eugene

    2014-01-01

    This study reports on the use of formative, diagnostic online assessments for the topic percentages. Two new item formats (drag-drop and slider) are described. About one-third of the school students (Years 7 to 9) could, using a slider, estimate "80% more than" a given length, in contrast with over two-thirds who could estimate "90%…

  14. LiveDescribe: Can Amateur Describers Create High-Quality Audio Description?

    ERIC Educational Resources Information Center

    Branje, Carmen J.; Fels, Deborah I.

    2012-01-01

    Introduction: The study presented here evaluated the usability of the audio description software LiveDescribe and explored the acceptance rates of audio description created by amateur describers who used LiveDescribe to facilitate the creation of their descriptions. Methods: Twelve amateur describers with little or no previous experience with…

  15. Describing Spirituality at the End of Life.

    PubMed

    Stephenson, Pam Shockey; Berry, Devon M

    2015-09-01

    Spirituality is salient to persons nearing the end of life (EOL). Unfortunately, researchers have not been able to agree on a universal definition of spirituality reducing the effectiveness of spiritual research. To advance spiritual knowledge and build an evidence base, researchers must develop creative ways to describe spirituality as it cannot be explicitly defined. A literature review was conducted to determine the common attributes that comprise the essence of spirituality, thereby creating a common ground on which to base spiritual research. Forty original research articles (2002 to 2012) focusing on EOL and including spiritual definitions/descriptions were reviewed. Analysis identified five attributes that most commonly described the essence of spirituality, including meaning, beliefs, connecting, self-transcendence, and value.

  16. [Open submucosal dissection: first case described].

    PubMed

    Portanova, Michel; Vesco, Eduardo; Morales, Domingo

    2007-01-01

    Endoscopic submucosal dissection is a new treatment, primarily for the management of early gastric cancer; it is also a good option for large benign lesions when an en bloc ("en una pieza") resection needs to be performed. However, this technique requires not only a gastroenterologist with proven technical skill, but also special devices that are not necessarily available in our country. The present paper describes the case of a patient with a large hyperplastic polyp located in the upper third of the stomach who, owing to the size and characteristics of the lesion, underwent an open endoscopic submucosal dissection to resect it. To our knowledge, this is the first case in the medical literature describing the use of this technique during open surgery.

  17. Generating and Describing Affective Eye Behaviors

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Zheng

    The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from the AU-coded facial expression database and real-time eye movement data (pupil size, blink rate, and saccade). A rule-based approach that generates primary (joyful, sad, angry, afraid, disgusted, and surprised) and intermediate emotions (emotions that can be represented as the mixture of two primary emotions) using MPEG-4 FAPs (facial animation parameters) is introduced. In addition, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language) that enables authors to describe and generate emotional eye movement of virtual agents is proposed.

  18. Describing response-event relations: Babel revisited

    PubMed Central

    Lattal, Kennon A.; Poling, Alan D.

    1981-01-01

    The terms used to describe the relations among the three components of contingencies of reinforcement and punishment include many with multiple meanings and imprecise denotation. In particular, usage of the term “contingency” and its variants and acceptance of unsubstantiated functional, rather than procedural, descriptions of response-event relations are especially troublesome in the behavior analysis literature. Clarity seems best served by restricting the term “contingency” to its generic usage and by utilizing procedural descriptions of response-event relations. PMID:22478546

  19. Is an eclipse described in the Odyssey?

    PubMed Central

    Baikouzis, Constantino; Magnasco, Marcelo O.

    2008-01-01

    Plutarch and Heraclitus believed a certain passage in the 20th book of the Odyssey (“Theoclymenus's prophecy”) to be a poetic description of a total solar eclipse. In the late 1920s, Schoch and Neugebauer computed that the solar eclipse of 16 April 1178 B.C.E. was total over the Ionian Islands and was the only suitable eclipse in more than a century to agree with classical estimates of the decade-earlier sack of Troy around 1192–1184 B.C.E. However, much skepticism remains about whether the verses refer to this, or any, eclipse. To contribute to the issue independently of the disputed eclipse reference, we analyze other astronomical references in the Epic, without assuming the existence of an eclipse, and search for dates matching the astronomical phenomena we believe they describe. We use three overt astronomical references in the epic: to Boötes and the Pleiades, Venus, and the New Moon; we supplement them with a conjectural identification of Hermes's trip to Ogygia as relating to the motion of planet Mercury. Performing an exhaustive search of all possible dates in the span 1250–1115 B.C., we looked to match these phenomena in the order and manner that the text describes. In that period, a single date closely matches our references: 16 April 1178 B.C.E. We speculate that these references, plus the disputed eclipse reference, may refer to that specific eclipse. PMID:18577587

  20. Describing Ammonite shape using Fourier analysis

    NASA Astrophysics Data System (ADS)

    El Hariri, Khadija; Bachnou, Ali

    2004-06-01

    A number of geometrical methods for comparing shapes have been developed recently. This paper explores two approaches for analyzing the morphological variation of invertebrate fossil characteristics such as rib pattern and whorl section shape: (1) landmark analysis (Procrustes methods), and (2) mathematical modeling by Fourier analysis. The morphometric analysis was applied to a faunal sequence of Graphoceratidae (Ammonitina) collected in the central High Atlas. In the first stage of analysis, we used landmarks to describe shapes. This calculation is done with the "Procrustes" program, whose results generate phenetic trees with a typically morphological significance and whose nodes convey degrees of morphological similarity among the different taxa analyzed. In the second stage of describing ammonite shape, a new approach offers a valuable morphological descriptor by modeling the whorl section: the section is transcribed as an equation whose descriptive variables provide the data needed for a principal components analysis. Factorial planes then correspond to a morphological space within which the analyzed individuals are distributed. In this way, it is possible to determine the groups whose whorl section morphologies show similarities. These two morphometric techniques offer a valuable tool for the analysis and comparison of morphologies for both rib shape and whorl section. This allows one not only to analyze morphological diversity in Graphoceratidae with more reliability, but also to highlight the most important convergences among the analyzed taxa.
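    The whorl-section modeling above can be illustrated with a minimal sketch: sample the outline's radius against polar angle about the centroid and take the leading Fourier harmonics as shape descriptors. The function name and the radius-versus-angle formulation are illustrative assumptions, not the paper's actual model:

    ```python
    import numpy as np

    def fourier_shape_descriptors(x, y, n_harmonics=8):
        """Fourier descriptors of a closed 2-D outline (e.g. a whorl
        section): harmonic amplitudes of the radius sampled against the
        polar angle about the centroid. Illustrative sketch only."""
        cx, cy = x.mean(), y.mean()             # centroid of the outline
        theta = np.arctan2(y - cy, x - cx)      # polar angle of each point
        r = np.hypot(x - cx, y - cy)            # radius of each point
        order = np.argsort(theta)               # traverse the outline by angle
        theta, r = theta[order], r[order]
        n = len(r)
        coeffs = np.array([
            ((2.0 / n) * np.sum(r * np.cos(k * theta)),
             (2.0 / n) * np.sum(r * np.sin(k * theta)))
            for k in range(1, n_harmonics + 1)
        ])
        return coeffs                           # shape (n_harmonics, 2)

    # A circle has no harmonic content beyond its mean radius, so its
    # coefficients are numerically zero; an ammonite whorl would not be.
    t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
    coeffs = fourier_shape_descriptors(np.cos(t), np.sin(t))
    ```

    The resulting coefficient vectors can then feed the principal components analysis described in the abstract.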

  1. Kinetic approach for describing biological systems

    NASA Astrophysics Data System (ADS)

    Aristov, V. V.; Ilyin, O. V.

    2016-11-01

    We consider a biological structure as an open nonequilibrium system whose properties can be described on the basis of a kinetic approach, with the help of appropriate kinetic equations. This approach allows us, in principle, to estimate size scales and to connect these values to the inner characteristics of the processes of kinetic interaction and advection. One can compare the results with empirical data on these characteristics for biosystems, in particular mammals, and also for parts of such systems, say the sizes of green leaves. The meaning of the nonequilibrium entropy as a measure of the complexity of bio-organisms is discussed. Besides the estimates for biosystems on a global scale, possible methods to describe restricted regions (associated, e.g., with living cells) as nonequilibrium open structures with specific boundaries are also discussed. A new one-dimensional boundary problem is formulated and solved for kinetic equations with membrane-like boundary conditions. Non-classical transport properties in the system are found.

  2. Is an eclipse described in the Odyssey?

    PubMed

    Baikouzis, Constantino; Magnasco, Marcelo O

    2008-07-01

    Plutarch and Heraclitus believed a certain passage in the 20th book of the Odyssey ("Theoclymenus's prophecy") to be a poetic description of a total solar eclipse. In the late 1920s, Schoch and Neugebauer computed that the solar eclipse of 16 April 1178 B.C.E. was total over the Ionian Islands and was the only suitable eclipse in more than a century to agree with classical estimates of the decade-earlier sack of Troy around 1192-1184 B.C.E. However, much skepticism remains about whether the verses refer to this, or any, eclipse. To contribute to the issue independently of the disputed eclipse reference, we analyze other astronomical references in the Epic, without assuming the existence of an eclipse, and search for dates matching the astronomical phenomena we believe they describe. We use three overt astronomical references in the epic: to Boötes and the Pleiades, Venus, and the New Moon; we supplement them with a conjectural identification of Hermes's trip to Ogygia as relating to the motion of planet Mercury. Performing an exhaustive search of all possible dates in the span 1250-1115 B.C., we looked to match these phenomena in the order and manner that the text describes. In that period, a single date closely matches our references: 16 April 1178 B.C.E. We speculate that these references, plus the disputed eclipse reference, may refer to that specific eclipse.

  3. Describing Story Evolution from Dynamic Information Streams

    SciTech Connect

    Rose, Stuart J.; Butner, R. Scott; Cowley, Wendy E.; Gregory, Michelle L.; Walker, Julia

    2009-10-12

    Sources of streaming information, such as news syndicates, publish information continuously. Information portals and news aggregators list the latest information from around the world, enabling information consumers to easily identify events in the past 24 hours. The volume and velocity of these streams cause information from prior days to vanish quickly despite its utility in providing an informative context for interpreting new information. Few capabilities exist to support an individual attempting to identify or understand trends and changes in streaming information over time. The burden of retaining prior information and integrating it with the new is left to the skills, determination, and discipline of each individual. In this paper we present a visual analytics system for linking essential content from information streams over time into dynamic stories that develop and change over multiple days. We describe particular challenges to the analysis of streaming information and explore visual representations for showing story change and evolution over time.

  4. Does Guru Granth Sahib describe depression?

    PubMed Central

    Kalra, Gurvinder; Bhui, Kamaldeep; Bhugra, Dinesh

    2013-01-01

    Sikhism is a relatively young religion, with Guru Granth Sahib as its key religious text. This text describes emotions in everyday life, such as happiness, sadness, anger, and hatred, and also more serious mental health issues such as depression and psychosis. There are references to the causation of these emotional disturbances and also ways to get out of them. We studied both the Gurmukhi version and the English translation of the Guru Granth Sahib to understand what it had to say about depression, its phenomenology, and religious prescriptions for recovery. We discuss these descriptions in this paper and interpret their meaning within the context of clinical depression. Such knowledge is important, as explicit descriptions of depression and sadness can help encourage culturally appropriate assessment and treatment, as well as promote public health through education. PMID:23858254

  5. Stimulated recall interviews for describing pragmatic epistemology

    NASA Astrophysics Data System (ADS)

    Shubert, Christopher W.; Meredith, Dawn C.

    2015-12-01

    Students' epistemologies affect how and what they learn: do they believe physics is a list of equations, or a coherent and sensible description of the physical world? In order to study these epistemologies as part of curricular assessment, we adopt the resources framework, which posits that students have many productive epistemological resources that can be brought to bear as they learn physics. In previous studies, these epistemologies have been either inferred from behavior in learning contexts or probed through surveys or interviews outside of the learning context. We argue that stimulated recall interviews provide a contextually and interpretively valid method to access students' epistemologies that complement existing methods. We develop a stimulated recall interview methodology to assess a curricular intervention and find evidence that epistemological resources aptly describe student epistemologies.

  6. Accurate vessel width measurement from fundus photographs: a new concept.

    PubMed Central

    Rassam, S M; Patel, V; Brinchmann-Hansen, O; Engvold, O; Kohner, E M

    1994-01-01

    Accurate determination of retinal vessel width is important in the study of the haemodynamic changes that accompany various physiological and pathological states. Currently the width at the half height of the transmittance and densitometry profiles is used as a measure of retinal vessel width. A consistent phenomenon of two 'kick points' on the slopes of the transmittance and densitometry profiles near the base has been observed. In this study, mathematical models have been formulated to describe the characteristic curves of the transmittance and the densitometry profiles. They show the kick points to be coincident with the edges of the blood column; the horizontal distance across the kick points therefore indicates the actual blood column width. To evaluate this hypothesis, blood was infused through two lengths of plastic tubing of known diameters and photographed. In comparison with the known diameters, the half-height method underestimated the blood column width by 7.33% and 6.46%, while the kick point method slightly overestimated it, by 1.40% and 0.34%. These techniques were applied to monochromatic fundus photographs. In comparison with the kick point method, the half-height method underestimated the blood column width in veins by 16.67% and in arteries by 15.86%. The characteristics of the kick points and their practicality are discussed. The kick point method may provide the most accurate measurement of vessel width possible from these profiles. PMID:8110693
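    The half-height measure compared above is straightforward to compute from a sampled cross-section. A minimal sketch follows; the function name and the linear-interpolation details are assumptions, and the paper's kick-point method (which instead locates the slope inflections near the profile base) is not implemented here:

    ```python
    import numpy as np

    def half_height_width(profile, step=1.0):
        """Width of a vessel densitometry profile at half its height.
        'profile' is a 1-D intensity cross-section; 'step' is the pixel
        spacing. Linear interpolation locates the two half-height
        crossings on either side of the peak."""
        base = min(profile[0], profile[-1])          # background level
        half = base + (np.max(profile) - base) / 2.0 # half-height threshold
        above = np.nonzero(profile >= half)[0]
        i, j = above[0], above[-1]                   # first/last samples above threshold
        # interpolate the left and right crossings between neighbouring samples
        left = i - (profile[i] - half) / (profile[i] - profile[i - 1])
        right = j + (profile[j] - half) / (profile[j] - profile[j + 1])
        return (right - left) * step

    # Example: a symmetric triangular profile of height 4 crosses its
    # half height (2) at samples 2 and 6, giving a width of 4 pixels.
    profile = np.array([0.0, 1, 2, 3, 4, 3, 2, 1, 0])
    width = half_height_width(profile)
    ```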

  7. On numerically accurate finite element solutions in the fully plastic range

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  8. Can the genetic code be mathematically described?

    PubMed

    Gonzalez, Diego L

    2004-04-01

    From a mathematical point of view, the genetic code is a surjective mapping between the set of the 64 possible three-base codons and the set of 21 elements composed of the 20 amino acids plus the Stop signal. Redundancy and degeneracy therefore follow. In analogy with the genetic code, non-power integer-number representations are also surjective mappings between sets of different cardinality and, as such, also redundant. However, neither the non-power arithmetics studied so far nor other alternative redundant representations are able to match the actual degeneracy of the genetic code. In this paper we develop a slightly more general framework that leads to the following surprising results: i) the degeneracy of the genetic code is mathematically described; ii) a new symmetry is uncovered within this degeneracy; iii) by assigning a binary string to each of the codons, their classification into definite parity classes according to the corresponding sequence of bases is made possible. This last result is particularly appealing in connection with the fact that parity coding is the basis of the simplest strategies devised for error correction in man-made digital data transmission systems.
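    The surjective 64-to-21 mapping and its degeneracy are easy to tabulate directly. The sketch below uses the standard codon table; the compact single-string encoding (bases cycling T, C, A, G) is an illustrative convention, not taken from the paper:

    ```python
    # The standard genetic code as a surjective map from the 64 codons
    # onto the 20 amino acids plus Stop ('*'), and its degeneracy classes.
    from itertools import product
    from collections import Counter

    # 64 single-letter amino acids, ordered with bases cycling T, C, A, G
    # in the first, second, and third codon positions.
    AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
             "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
    CODONS = ["".join(c) for c in product("TCAG", repeat=3)]
    CODE = dict(zip(CODONS, AMINO))          # surjective map: 64 -> 21

    degeneracy = Counter(CODE.values())      # codons per amino acid
    ```

    The resulting counts exhibit the familiar degeneracy pattern: leucine, serine, and arginine each have six codons, tryptophan and methionine one, and Stop three, summing to 64.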

  9. Describing transport across complex biological interfaces

    NASA Astrophysics Data System (ADS)

    Lervik, A.; Kjelstrup, S.

    2013-05-01

    It has long been known that proteins are capable of transporting ions against a gradient in the chemical potential, using the energy available from a chemical reaction. This is called active transport. A well-studied example is Ca2+ transport by means of hydrolysis of adenosine triphosphate (ATP) at the surface of the Ca2+-ATPase in the sarcoplasmic reticulum. The cycle of events is known to be reversible, and has recently also been associated with a characteristic, and also reversible, heat production. We use the case of the Ca2+-ATPase to present and discuss central theoretical approaches to describing active transport, with focus on two schools of development, namely the kinetic and the thermodynamic schools. Among the kinetic descriptions, Hill's diagram method gives the most sophisticated description, reducing to the common Post-Albers scheme for simple enzyme kinetic reactions. Among the thermodynamic approaches, we review the now classical approach of Katchalsky and Curran, and its extension to proper pathways by Caplan and Essig, before the most recent development based on mesoscopic theory is outlined. The mesoscopic approach gives a non-linear theory compatible with Hill's most general method when the active transport is isothermal. We show how the old question of scalar-vector coupling is resolved using the rules of non-equilibrium thermodynamics for interfaces. Thermal driving forces can then also be accounted for. The essential physical concepts behind all methods are presented, and advantages and deficiencies are pointed out. Emphasis is placed on the connection to experiments.

  10. Canada issues booklet describing acid rain

    NASA Astrophysics Data System (ADS)

    A booklet recently released by Environment Canada describes acid rain in terms easily understood by the general public. Although Acid Rain — The Facts tends somewhat to give the Canadian side of this intercountry controversy, it nevertheless presents some very interesting, simple statistics of interest to people in either the U.S. or Canada. Copies of the booklet can be obtained from Inquiry Environment Canada, Ottawa, Ontario K1A OH3, Canada, tel. 613-997-2800. The booklet points out that acid rain is caused by emissions of sulfur dioxide (SO2) and nitrogen oxides (NOx). Once released into the atmosphere, these substances can be carried long distances by prevailing winds and return to Earth as acidic rain, snow, fog, or dust. The main sources of SO2 emissions in North America are coal-fired power generating stations and nonferrous ore smelters. The main sources of NOx emissions are vehicles and fuel combustion. From economic and environmental viewpoints, Canada believes acid rain is one of the most serious problems presently facing the country: it has increased the acidity of more than 20% of Canada's 300,000 lakes to the point that aquatic life is depleted; it is increasing the acidity of soil water and shallow groundwater, causing declines in forest growth and waterfowl populations; and it is eating away at buildings and monuments. Acid rain is endangering fisheries, tourism, agriculture, and forest resources in an area of 2.6 million km2 (one million square miles) of eastern Canada; these resources account for about 8% of Canada's gross national product.

  11. Chemometric model for describing Greek traditional sausages.

    PubMed

    Papadima, S N; Arvanitoyannis, I; Bloukas, J G; Fournitzis, G C

    1999-03-01

    Chemical, physical, microbiological and sensory analyses were performed on 31 samples of Greek traditional sausages. The following attributes were recorded: fat 15.49-56.86%, moisture 21.92-65.40%, protein 14.73-26.74%, sodium chloride 2.36-4.13%, nitrites 0.0-3.26 ppm, mean nitrates 38.19 ppm, TBA value 0.42-5.33 mg malonaldehyde/kg, pH 4.74-6.74, water activity (a(w)) 0.88-0.97, firmness 0-64 Zwick units, lightness (L(*)) 25.03-35.37, redness (a(*)) 2.55-11.42, yellowness 4.42-12.96, aerobic plate count 5.48-9.32 cfu/g, lactic acid bacteria (LAB) 5.26-9.08 cfu/g, micrococci/staphylococci 4.11-6.91 cfu/g and Gram (-) bacteria 1.78-6.15 cfu/g. Mean sensory scores ranged from 3.14 to 3.54 on a 5-point hedonic scale. Two statistical analysis programmes (Praxitele and SPSS) were used for characterising and assessing the properties of the sausages. The first two principal components (PC1-2) derived by SPSS (50.5% of variance) describe the variance more satisfactorily than the corresponding PC1-2 and PC1-3 obtained by Praxitele (40.4% of variance). High consumer preference was strongly related to satisfactory appearance and strong taste, high LAB count, medium fat content, medium firmness and lightness (L(*)(surface)). Extreme attribute values (high or low) for firmness, moisture and fat content, low salt content and low taste were related to low consumer preference.
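    The principal components computation underlying such a chemometric model can be sketched as follows, assuming standardised attributes and an SVD-based implementation. The function name and return values are illustrative; neither Praxitele's nor SPSS's internals are reproduced:

    ```python
    import numpy as np

    def principal_components(X):
        """Principal components of a samples-by-attributes matrix X
        (e.g. 31 sausages x 17 measured attributes). Each attribute is
        standardised first, so components reflect correlation structure.
        Returns (explained_variance_ratio, scores)."""
        Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise each attribute
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        var_ratio = s**2 / np.sum(s**2)            # variance explained per PC
        scores = Z @ Vt.T                          # sample coordinates on the PCs
        return var_ratio, scores

    # Example with random data shaped like the study (31 samples x 17 attributes).
    rng = np.random.default_rng(0)
    var_ratio, scores = principal_components(rng.normal(size=(31, 17)))
    ```

    Plotting the first two columns of `scores` gives the factorial plane on which sample groupings, such as the preference clusters described above, can be inspected.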

  12. Using Metaphorical Models for Describing Glaciers

    NASA Astrophysics Data System (ADS)

    Felzmann, Dirk

    2014-11-01

    To date, there has been only little conceptual change research regarding conceptions about glaciers. This study used the theoretical background of embodied cognition to reconstruct different metaphorical concepts with respect to the structure of a glacier. Applying the Model of Educational Reconstruction, the conceptions of students and scientists regarding glaciers were analysed. Students' conceptions were the result of teaching experiments whereby students received instruction about glaciers and ice ages and were then interviewed about their understandings. Scientists' conceptions were based on analyses of textbooks. Accordingly, four conceptual metaphors regarding the concept of a glacier were reconstructed: a glacier is a body of ice; a glacier is a container; a glacier is a reflexive body; and a glacier is a flow. Students and scientists differ with respect to the contexts in which they apply each conceptual metaphor. It was observed, however, that students vacillate among the various conceptual metaphors as they solve tasks. While the subject context of a task activates a specific conceptual metaphor, students were able, within the discussion of the solution, to adapt their conception by changing the conceptual metaphor. Educational strategies for teaching students about glaciers require specific language to activate the appropriate conceptual metaphors, and explicit reflection regarding the various conceptual metaphors.

  13. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate and can be considered extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data, where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical in smooth regions, and yield high resolution at discontinuities.
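    The use of the median function to simplify a monotonicity constraint can be sketched as follows. The specific limiter shown (a monotonized-central-type slope limiter built from median and minmod, using the identity median(a, b, c) = a + minmod(b - a, c - a)) is an illustrative choice, not necessarily the paper's exact constraint:

    ```python
    import numpy as np

    def minmod(x, y):
        """Zero if x and y differ in sign, else the smaller in magnitude."""
        return (np.sign(x) + np.sign(y)) / 2.0 * np.minimum(np.abs(x), np.abs(y))

    def median3(a, b, c):
        """Median of three values via median(a,b,c) = a + minmod(b-a, c-a),
        the identity that simplifies coding of monotonicity constraints."""
        return a + minmod(b - a, c - a)

    def limited_slopes(u):
        """Monotonicity-limited cell slopes for piecewise-linear
        reconstruction of cell averages u (interior cells only)."""
        du = np.diff(u)                         # one-sided differences
        central = 0.5 * (du[:-1] + du[1:])      # central slope in interior cells
        # limit the central slope against twice the smaller one-sided difference;
        # the slope is zeroed at extrema, preserving monotonicity
        return median3(0.0, central, 2.0 * minmod(du[:-1], du[1:]))

    slopes = limited_slopes(np.arange(6.0))     # linear data: slopes stay exactly 1
    extremum = limited_slopes(np.array([0.0, 1.0, 0.0]))  # extremum: slope zeroed
    ```

    On smooth monotone data the limiter returns the unmodified central slope, which is what preserves uniform second-order accuracy away from extrema.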

  14. Describing dengue epidemics: Insights from simple mechanistic models

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2012-09-01

    We present a set of nested models to be applied to dengue fever epidemiology. We perform a qualitative study in order to show how much complexity we really need to add to epidemiological models to be able to describe the fluctuations observed in empirical dengue hemorrhagic fever incidence data, offering a promising perspective on the inference of parameter values from dengue case notifications.

  15. Accurate tracking of high dynamic vehicles with translated GPS

    NASA Astrophysics Data System (ADS)

    Blankshain, Kenneth M.

    The GPS concept and the translator processing system (TPS), which were developed for accurate and cost-effective tracking of various types of high dynamic expendable vehicles, are described. A technique used by the TPS to accomplish very accurate high dynamic tracking is presented. Automatic frequency control and fast Fourier transform processes are combined to track 100 g acceleration and 100 g/s jerk with a 1-sigma velocity measurement error of less than 1 ft/sec.
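    Combining FFT processing with frequency tracking rests on estimating a tone's frequency from the FFT peak, which an automatic frequency control loop then refines. A minimal sketch follows; the function name and windowing choice are assumptions, and this is not the TPS algorithm itself:

    ```python
    import numpy as np

    def fft_frequency_estimate(signal, fs):
        """Coarse frequency (e.g. Doppler) estimate of a sampled tone
        from the peak of its windowed FFT magnitude spectrum."""
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return freqs[np.argmax(spectrum)]       # bin with the strongest tone

    # Example: recover a 1.25 kHz tone sampled at 10 kHz from 1024 samples.
    fs, f0 = 10_000.0, 1_250.0
    t = np.arange(1024) / fs
    f_hat = fft_frequency_estimate(np.sin(2 * np.pi * f0 * t), fs)
    ```

    The coarse estimate is quantized to the FFT bin spacing (fs/N); a tracking loop interpolates between bins and steers out the residual as the frequency changes.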

  16. Nomenclature proposal to describe vocal fold motion impairment.

    PubMed

    Rosen, Clark A; Mau, Ted; Remacle, Marc; Hess, Markus; Eckel, Hans E; Young, VyVy N; Hantzakos, Anastasios; Yung, Katherine C; Dikkers, Frederik G

    2016-08-01

    The terms used to describe vocal fold motion impairment are confusing and not standardized. This results in failures to communicate accurately and in major limitations on interpreting research studies involving vocal fold impairment. We propose a standard nomenclature for reporting vocal fold impairment. The overarching terms vocal fold immobility and hypomobility are rigorously defined, including assessment techniques and inclusion and exclusion criteria for determining vocal fold immobility and hypomobility. In addition, criteria for use of the following terms have been outlined in detail: vocal fold paralysis; vocal fold paresis; vocal fold immobility/hypomobility associated with mechanical impairment of the crico-arytenoid joint; and vocal fold immobility/hypomobility related to laryngeal malignant disease. This represents the first rigorously defined vocal fold motion impairment nomenclature system, providing detailed definitions of the terms vocal fold paralysis and vocal fold paresis.

  17. The remarkable ability of turbulence model equations to describe transition

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper demonstrates how well the k-omega turbulence model describes the nonlinear growth of flow instabilities from laminar flow into the turbulent flow regime. Viscous modifications are proposed for the k-omega model that yield close agreement with measurements and with direct numerical simulation results for channel and pipe flow. These modifications permit prediction of subtle sublayer details such as maximum dissipation at the surface, k ~ y^2 as y approaches 0, and the sharp peak value of k near the surface. With two transition-specific closure coefficients, the model equations accurately predict transition for an incompressible flat-plate boundary layer. The analysis also shows why the k-epsilon model is so difficult to use for predicting transition.

  18. Problems in publishing accurate color in IEEE journals.

    PubMed

    Vrhel, Michael J; Trussell, H J

    2002-01-01

    To demonstrate the performance of color image processing algorithms, it is desirable to be able to accurately display color images in archival publications. In poster presentations, authors have substantial control of the printing process, although little control of the illumination. For journal publication, authors must rely on professional intermediaries (printers) to accurately reproduce their results. Our previous work describes requirements for accurately rendering images using one's own equipment. This paper discusses the problems of dealing with intermediaries and offers suggestions for improved communication and rendering.

  19. Accurate ab Initio Spin Densities.

    PubMed

    Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus

    2012-06-12

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740].

  20. Obtaining accurate translations from expressed sequence tags.

    PubMed

    Wasmuth, James; Blaxter, Mark

    2009-01-01

    The genomes of an increasing number of species are being investigated through the generation of expressed sequence tags (ESTs). However, ESTs are prone to sequencing errors and typically define incomplete transcripts, making downstream annotation difficult. Annotation would be greatly improved with robust polypeptide translations. Many current solutions for EST translation require a large number of full-length gene sequences for training purposes, a resource that is not available for the majority of EST projects. As part of our ongoing EST programs investigating these "neglected" genomes, we have developed a polypeptide prediction pipeline, prot4EST. It incorporates freely available software to produce final translations that are more accurate than those derived from any single method. We describe how this integrated approach goes a long way to overcoming the deficit in training data.

  1. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and the equipment requirements for extending the effective range of Fresnel diffraction systems are also described.

  2. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is conceptually very simple, yet complicated in practice. Presently, distances to nearby galaxies are known only to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it because of its morphology, its non-uniform reddening, and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies and thereby help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique to images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a

  3. TURTLE IN SPACE DESCRIBES NEW HUBBLE IMAGE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    NASA's Hubble Space Telescope has shown us that the shrouds of gas surrounding dying, sunlike stars (called planetary nebulae) come in a variety of strange shapes, from an 'hourglass' to a 'butterfly' to a 'stingray.' With this image of NGC 6210, the Hubble telescope has added another bizarre form to the rogues' gallery of planetary nebulae: a turtle swallowing a seashell. Giving this dying star such a weird name is less of a challenge than trying to figure out how dying stars create these unusual shapes. The larger image shows the entire nebula; the inset picture captures the complicated structure surrounding the dying star. The remarkable features of this nebula are the numerous holes in the inner shells with jets of material streaming from them. These jets produce column-shaped features that are mirrored in the opposite direction. The multiple shells of material ejected by the dying star give this planetary nebula its odd form. In the 'full nebula' image, the brighter central region looks like a 'nautilus shell'; the fainter outer structure (colored red) a 'tortoise.' The dying star is the white dot in the center. Both pictures are composite images based on observations taken Aug. 6, 1997 with the telescope's Wide Field and Planetary Camera 2. Material flung off by this central star is streaming out of holes it punched in the nautilus shell. At least four jets of material can be seen in the 'full nebula' image: a pair near 6 and 12 o'clock and another near 2 and 8 o'clock. In each pair, the jets are directly opposite each other, exemplifying their 'bipolar' nature. The jets are thought to be driven by a 'fast wind' - material propelled by radiation from the hot central star. In the inner 'nautilus' shell, bright rims outline the escape holes created by this 'wind,' such as the one at 2 o'clock. This same 'wind' appears to give rise to the prominent outer jet in the same direction. The hole in the inner shell acts like a hose nozzle, directing the flow of

  5. Automated determination of parameters describing power spectra of micrograph images in electron microscopy.

    PubMed

    Huang, Zhong; Baldwin, Philip R; Mullapudi, Srinivas; Penczek, Pawel A

    2003-01-01

The current theory of image formation in electron microscopy has been semi-quantitatively successful in describing data. The theory involves parameters due to the transfer function of the microscope (defocus, spherical aberration constant, and amplitude contrast ratio) as well as parameters used to describe the background and attenuation of the signal. We present empirical evidence that at least one of the features of this model has not been well characterized. Namely, the spectrum of the noise background is not accurately described by a Gaussian and an associated "B-factor"; this becomes apparent when one studies high-quality far-from-focus data. In order to keep both our analysis and conclusions free from any innate bias, we have approached these questions by developing an automated fitting algorithm. The most important features of this routine, not currently found in the literature, are (i) a process for determining the cutoff frequency below which observations and the currently adopted model are not in accord, (ii) a method for determining the resolution beyond which no more signal is expected to exist, and (iii) a parameter, with units of spatial frequency, that characterizes which frequencies mainly contribute to the signal. Whereas no general relation is seen to exist between either of the first two quantities and the defocus, a simple empirical relationship approximately relates all three.
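The transfer-function parameters named above (defocus, spherical aberration constant, amplitude contrast ratio) enter through the standard phase-contrast transfer function. A minimal illustrative sketch, not the authors' fitting code (the units and sign convention here are assumptions):

```python
import math

def ctf(s, defocus, cs, wavelength, amp):
    """Phase-contrast transfer function at spatial frequency s (1/A).
    defocus, cs, and wavelength in Angstroms; amp = amplitude contrast
    ratio. Sign conventions vary between packages; this uses the common
    gamma = -pi*lambda*dz*s^2 + (pi/2)*Cs*lambda^3*s^4 phase term."""
    gamma = (-math.pi * wavelength * defocus * s ** 2
             + 0.5 * math.pi * cs * wavelength ** 3 * s ** 4)
    return -(math.sqrt(1.0 - amp * amp) * math.sin(gamma)
             + amp * math.cos(gamma))
```

At s = 0 the function reduces to -amp, and its magnitude never exceeds 1; a fitted background and envelope would multiply this oscillating term in a full power-spectrum model.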

  6. Apoplexia uteri: a rarely described post-mortem finding.

    PubMed

    Beggan, C; Jaber, K; Leader, M

    2013-08-01

    We present a case of apoplexia uteri, a rarely described condition of haemorrhagic necrosis in an atrophic endometrium and myometrium associated with terminal stress. This entity is well recognised in older literature but few recent publications have addressed this condition. It is thought to occur in association with hypoperfusion with passive hyperaemia and reperfusion injury. This case serves to highlight this rarely encountered entity as a possible cause of haemorrhage in an atrophic endometrium in the 'perimortem' period. Incidental findings are occasionally observed in the course of forensic autopsy practice and knowledge of rarely encountered entities, such as that described in this case, is essential to prevent diagnostic uncertainty and misdiagnosis.

  7. Spatial-filter models to describe IC lithographic behavior

    NASA Astrophysics Data System (ADS)

    Stirniman, John P.; Rieger, Michael L.

    1997-07-01

Proximity correction systems require an accurate, fast way to predict how a pattern configuration will transfer to the wafer. In this paper we present an efficient method for modeling the pattern transfer process based on Dennis Gabor's 'theory of communication'. This method is based on a 'convolution form' where any 2D transfer process can be modeled with a set of linear, 2D spatial filters, even when the transfer process is non-linear. We will show that this form is a general case from which other well-known process simulation models can be derived. Furthermore, we will demonstrate that the convolution form can be used to model observed phenomena, even when the physical mechanisms involved are unknown.
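The 'convolution form' can be sketched minimally as a weighted sum of linear 2D convolutions of the mask pattern with a set of spatial-filter kernels. The kernel and weight below are illustrative placeholders, not fitted process models:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (pure Python, for illustration)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    acc += image[y + j][x + i] * kernel[j][i]
            row.append(acc)
        out.append(row)
    return out

def transfer(mask, kernels, weights):
    """Model the pattern transfer as a weighted sum of 2D convolutions
    -- the 'convolution form'."""
    outputs = [convolve2d(mask, k) for k in kernels]
    h, w = len(outputs[0]), len(outputs[0][0])
    return [[sum(wt * o[y][x] for wt, o in zip(weights, outputs))
             for x in range(w)] for y in range(h)]
```

With a single uniform averaging kernel this reproduces a simple blur; a real proximity model would use several kernels fitted to the process.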

  8. Conjugated Molecules Described by a One-Dimensional Dirac Equation.

    PubMed

    Ernzerhof, Matthias; Goyer, Francois

    2010-06-08

    Starting from the Hückel Hamiltonian of conjugated hydrocarbon chains (ethylene, allyl radical, butadiene, pentadienyl radical, hexatriene, etc.), we perform a simple unitary transformation and obtain a Dirac matrix Hamiltonian. Thus already small molecules are described exactly in terms of a discrete Dirac equation, the continuum limit of which yields a one-dimensional Dirac Hamiltonian. Augmenting this Hamiltonian with specially adapted boundary conditions, we find that all the orbitals of the unsaturated hydrocarbon chains are reproduced by the continuous Dirac equation. However, only orbital energies close to the highest occupied molecular orbital/lowest unoccupied molecular orbital energy are accurately predicted by the Dirac equation. Since it is known that a continuous Dirac equation describes the electronic structure of graphene around the Fermi energy, our findings answer the question to what extent this peculiar electronic structure is already developed in small molecules containing a delocalized π-electron system. We illustrate how the electronic structure of small polyenes carries over to a certain class of rectangular graphene sheets and eventually to graphene itself. Thus the peculiar electronic structure of graphene extends to a large degree to the smallest unsaturated molecule (ethylene).
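For a linear chain of N conjugated carbons, the Hückel orbital energies have the standard closed form E_k = α + 2β cos(kπ/(N+1)). A small sketch (the α and β values are illustrative) shows the spectrum is symmetric about α, the particle-hole symmetry that carries over to the Dirac description near the Fermi energy:

```python
import math

def huckel_energies(n, alpha=0.0, beta=-1.0):
    """Hückel orbital energies of a linear chain of n conjugated carbons:
    E_k = alpha + 2*beta*cos(k*pi/(n+1)), k = 1..n (standard closed form).
    alpha and beta here are illustrative, in units of |beta|."""
    return [alpha + 2.0 * beta * math.cos(k * math.pi / (n + 1))
            for k in range(1, n + 1)]
```

For ethylene (n = 2) this gives the familiar α ± β pair, and for any n the levels pair off symmetrically about α.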

  9. Accurate radio positions with the Tidbinbilla interferometer

    NASA Technical Reports Server (NTRS)

    Batty, M. J.; Gulkis, S.; Jauncey, D. L.; Rayner, P. T.

    1979-01-01

    The Tidbinbilla interferometer (Batty et al., 1977) is designed specifically to provide accurate radio position measurements of compact radio sources in the Southern Hemisphere with high sensitivity. The interferometer uses the 26-m and 64-m antennas of the Deep Space Network at Tidbinbilla, near Canberra. The two antennas are separated by 200 m on a north-south baseline. By utilizing the existing antennas and the low-noise traveling-wave masers at 2.29 GHz, it has been possible to produce a high-sensitivity instrument with a minimum of capital expenditure. The north-south baseline ensures that a good range of UV coverage is obtained, so that sources lying in the declination range between about -80 and +30 deg may be observed with nearly orthogonal projected baselines of no less than about 1000 lambda. The instrument also provides high-accuracy flux density measurements for compact radio sources.

  10. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes around 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  11. ELODIE: A spectrograph for accurate radial velocity measurements.

    NASA Astrophysics Data System (ADS)

    Baranne, A.; Queloz, D.; Mayor, M.; Adrianzyk, G.; Knispel, G.; Kohler, D.; Lacroix, D.; Meunier, J.-P.; Rimbaud, G.; Vin, A.

    1996-10-01

The fibre-fed echelle spectrograph of Observatoire de Haute-Provence, ELODIE, is presented. This instrument has been in operation since the end of 1993 on the 1.93 m telescope. ELODIE is designed as an updated version of the cross-correlation spectrometer CORAVEL, to perform very accurate radial velocity measurements such as needed in the search, by Doppler shift, for brown dwarfs or giant planets orbiting around nearby stars. In one single exposure a spectrum at a resolution of 42000 (λ/Δλ) ranging from 3906 Å to 6811 Å is recorded on a 1024×1024 CCD. This performance is achieved by using a tan θ = 4 echelle grating and a combination of a prism and a grism as cross-disperser. An automatic on-line data treatment reduces all the ELODIE echelle spectra and computes cross-correlation functions. The instrument design and the data reduction algorithms are described in this paper. The efficiency and accuracy of the instrument and its long-term instrumental stability allow us to measure radial velocities with an accuracy better than 15 m/s for stars up to 9th magnitude in less than 30 minutes of exposure time. Observations of 16th-magnitude stars are also possible, with velocities measured at about 1 km/s accuracy. For classic spectroscopic studies (S/N>100) 9th-magnitude stars can be observed in one hour of exposure time.

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  17. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  18. A Self-Instructional Device for Conditioning Accurate Prosody.

    ERIC Educational Resources Information Center

    Buiten, Roger; Lane, Harlan

    1965-01-01

    A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…

  19. Foresight begins with FMEA. Delivering accurate risk assessments.

    PubMed

    Passey, R D

    1999-03-01

    If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.

  20. Preparing Rapid, Accurate Construction Cost Estimates with a Personal Computer.

    ERIC Educational Resources Information Center

    Gerstel, Sanford M.

    1986-01-01

    An inexpensive and rapid method for preparing accurate cost estimates of construction projects in a university setting, using a personal computer, purchased software, and one estimator, is described. The case against defined estimates, the rapid estimating system, and adjusting standard unit costs are discussed. (MLW)

  1. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    SciTech Connect

    Kearny, C.H.

    2001-11-20

The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify the

  2. Design of aquifer remediation systems: (1) describing hydraulic structure and NAPL architecture using tracers.

    PubMed

    Enfield, Carl G; Wood, A Lynn; Espinoza, Felipe P; Brooks, Michael C; Annable, Michael; Rao, P S C

    2005-12-01

Aquifer heterogeneity (structure) and NAPL distribution (architecture) are described based on tracer data. An inverse modelling approach estimates the hydraulic structure and NAPL architecture using a Lagrangian stochastic model, in which the hydraulic structure is described by one or more populations of lognormally distributed travel times and the NAPL architecture is selected from eight possible assumed distributions. Optimization of the model parameters for each tested realization is based on the minimization of the sum of the squared residuals between the log of measured tracer data and model predictions for the same temporal observation. For a given NAPL architecture the error is reduced with each added population. Model selection was based on a fitness measure that penalized models for increasing complexity. The technique is demonstrated under a range of hydrologic and contaminant settings using data from three small field-scale tracer tests: the first implementation at an LNAPL site using a line-drive flow pattern, the second at a DNAPL site with an inverted five-spot flow pattern, and the third at the same DNAPL site using a vertical circulation flow pattern. The Lagrangian model was capable of accurately duplicating experimentally derived tracer breakthrough curves, with a correlation coefficient of 0.97 or better. Furthermore, the model estimate of the NAPL volume is similar to the estimates based on moment analysis of field data.
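A minimal sketch of the travel-time idea (not the authors' inverse-modelling code): a tracer breakthrough curve modeled as a weighted mixture of lognormal travel-time populations:

```python
import math

def lognormal_pdf(t, mu, sigma):
    """Lognormal travel-time density."""
    if t <= 0.0:
        return 0.0
    z = (math.log(t) - mu) / sigma
    return math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2.0 * math.pi))

def breakthrough(t, populations):
    """Tracer breakthrough at time t as a weighted mixture of lognormal
    travel-time populations: [(fraction, mu, sigma), ...]; the fractions
    should sum to 1. Parameter values are illustrative assumptions."""
    return sum(f * lognormal_pdf(t, mu, s) for f, mu, s in populations)
```

Fitting such a mixture to observed breakthrough data (e.g., by least squares on the log of the curve, with extra populations penalized for complexity) is the inverse step the abstract describes.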

  3. Noninvasive hemoglobin monitoring: how accurate is enough?

    PubMed

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E

    2013-10-01

    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.

  4. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate determination of the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m. and an uncertainty of 71 p.p.m. PMID:26465085
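The inversion behind Doppler-broadening thermometry can be sketched as follows, assuming a purely Gaussian Doppler contribution with 1/e half-width Δν = ν0·sqrt(2·kB·T/m)/c; the function below inverts a measured width for Boltzmann's constant (the numbers in the usage are illustrative, not the paper's data):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def boltzmann_from_doppler(dnu, nu0, temperature, mass):
    """Infer kB from a measured Doppler 1/e half-width dnu (Hz) of a
    line at frequency nu0 (Hz), given the gas temperature (K) and the
    absorber mass (kg). Assumes a purely Gaussian Doppler profile."""
    v = dnu * C / nu0  # 1/e thermal speed along the line of sight
    return mass * v * v / (2.0 * temperature)
```

In practice the experiment must deconvolve the non-Gaussian parts of the lineshape first, which is exactly where the reported Voigt-profile breakdown matters.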

  5. Accurate upper body rehabilitation system using kinect.

    PubMed

    Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit

    2016-08-01

The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards of accuracy when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range-of-motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in range-of-motion (ROM) angle, enabling more accurate measurement of upper limb exercises.

  6. Memory conformity affects inaccurate memories more than accurate memories.

    PubMed

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  7. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
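The idea can be illustrated on a one-variable example (not the paper's beam case): the axial displacement u = PL/(EA) of a rod has sensitivity du/dA = -u/A; solving that sensitivity relation as a differential equation gives the closed-form DEB approximation u(A) = u0·A0/A, which for this example happens to be exact, while the linear Taylor series is not:

```python
def exact(p, length, e, a):
    """Axial tip displacement of a rod: u = P*L/(E*A)."""
    return p * length / (e * a)

def deb_approx(u0, a0, a):
    """DEB approximation: integrate du/dA = -u/A from (a0, u0),
    giving u(A) = u0 * A0 / A."""
    return u0 * a0 / a

def taylor_approx(u0, a0, a):
    """First-order Taylor series about a0 using du/dA = -u0/a0."""
    return u0 + (-u0 / a0) * (a - a0)
```

For a 30% increase in cross-sectional area, the DEB form tracks the exact answer while the Taylor estimate undershoots, mirroring the cantilever-beam comparison in the abstract.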

  8. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  9. The MATPHOT Algorithm for Accurate and Precise Stellar Photometry and Astrometry Using Discrete Point Spread Functions

    NASA Astrophysics Data System (ADS)

    Mighell, K. J.

    2004-12-01

    I describe the key features of my MATPHOT algorithm for accurate and precise stellar photometry and astrometry using discrete Point Spread Functions. A discrete Point Spread Function (PSF) is a sampled version of a continuous two-dimensional PSF. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS image file. The MATPHOT algorithm shifts discrete PSFs within an observational model using a 21-pixel-wide damped sinc function and position partial derivatives are computed using a five-point numerical differentiation formula. The MATPHOT algorithm achieves accurate and precise stellar photometry and astrometry of undersampled CCD observations by using supersampled discrete PSFs that are sampled 2, 3, or more times more finely than the observational data. I have written a C-language computer program called MPD which is based on the current implementation of the MATPHOT algorithm; all source code and documentation for MPD and support software is freely available at the following website: http://www.noao.edu/staff/mighell/matphot . I demonstrate the use of MPD and present a detailed MATPHOT analysis of simulated James Webb Space Telescope observations which demonstrates that millipixel relative astrometry and millimag photometric accuracy is achievable with very complicated space-based discrete PSFs. This work was supported by a grant from the National Aeronautics and Space Administration (NASA), Interagency Order No. S-13811-G, which was awarded by the Applied Information Systems Research (AISR) Program of NASA's Science Mission Directorate.
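A 1-D sketch of the damped-sinc shifting step (the 21-pixel width follows the abstract; the Gaussian damping width used here is an assumption for illustration, not MATPHOT's exact kernel):

```python
import math

def damped_sinc_kernel(frac, taps=21, damp=3.25):
    """21-tap damped-sinc interpolation kernel for a fractional pixel
    shift `frac`; the damping width `damp` is an illustrative choice."""
    half = taps // 2
    kern = []
    for i in range(-half, half + 1):
        x = i - frac
        if abs(x) < 1e-12:
            w = 1.0
        else:
            w = math.sin(math.pi * x) / (math.pi * x)
        kern.append(w * math.exp(-(x / damp) ** 2))
    s = sum(kern)
    return [k / s for k in kern]  # normalize to preserve flux

def shift_1d(signal, frac):
    """Shift a 1-D sampled profile by a sub-pixel amount by convolving
    with the damped-sinc kernel (edges truncated for simplicity)."""
    kern = damped_sinc_kernel(frac)
    half = len(kern) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = i + (j - half)
            if 0 <= idx < n:
                acc += w * signal[idx]
        out.append(acc)
    return out
```

A zero shift reproduces the input exactly, and for peaks away from the edges the normalized kernel conserves total flux, the property that matters for photometry.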

  10. Specific Heat Anomalies in Solids Described by a Multilevel Model

    NASA Astrophysics Data System (ADS)

    Souza, Mariano de; Paupitz, Ricardo; Seridonio, Antonio; Lagos, Roberto E.

    2016-04-01

    In the field of condensed matter physics, specific heat measurements can be considered a pivotal experimental technique for characterizing the fundamental excitations involved in a certain phase transition. Indeed, phase transitions involving spin (de Souza et al. Phys. B Condens. Matter 404, 494 (2009) and Manna et al. Phys. Rev. Lett. 104, 016403 (2010)), charge (Pregelj et al. Phys. Rev. B 82, 144438 (2010)), lattice (phonons) (Jesche et al. Phys. Rev. B 81, 134525 (2010)) and orbital degrees of freedom, the interplay between ferromagnetism and superconductivity (Jesche et al. Phys. Rev. B 86, 020501 (2012)), Schottky-like anomalies in doped compounds (Lagos et al. Phys. C Supercond. 309, 170 (1998)), electronic levels in finite correlated systems (Macedo and Lagos J. Magn. Magn. Mater. 226, 105 (2001)), among other features, can be captured by means of high-resolution calorimetry. Furthermore, the entropy change associated with a first-order phase transition, no matter its nature, can be directly obtained by integrating C(T)/T over the temperature range of interest. Here, we report on a detailed analysis of the two-peak specific heat anomalies observed in several materials. Employing a simple multilevel model, varying the spacing between the energy levels Δ_i = (E_i - E_0) and the degeneracy g_i of each energy level, we derive the required conditions for the appearance of such anomalies. Our findings indicate that a ratio of Δ_2/Δ_1 ≈ 10 between the energy levels and a high degeneracy of one of the energy levels define the two-peak regime in the specific heat. Our approach accurately matches recent experimental results. Furthermore, using a mean-field approach, we calculate the specific heat of a degenerate Schottky-like system undergoing a ferromagnetic (FM) phase transition. Our results reveal that as the degeneracy is increased the Schottky maximum in the specific heat becomes narrow while the peak
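
    The multilevel model lends itself to a compact numerical sketch: all thermodynamics follows from the partition function Z = Σ_i g_i exp(-E_i/T). The code below (illustrative only, with k_B = 1; not the authors' implementation) computes C(T) from the canonical energy-fluctuation formula.

```python
import numpy as np

def specific_heat(T, g, E):
    """Specific heat (k_B = 1) of a multilevel system with level energies E_i
    and degeneracies g_i, via the fluctuation formula
    C = (<E^2> - <E>^2) / T^2. A minimal sketch of a multilevel model."""
    T = np.atleast_1d(np.asarray(T, float))
    g = np.asarray(g, float)[:, None]
    E = np.asarray(E, float)[:, None]
    w = g * np.exp(-E / T)           # Boltzmann weights, (level, temperature)
    Z = w.sum(axis=0)                # partition function
    E1 = (w * E).sum(axis=0) / Z     # <E>
    E2 = (w * E ** 2).sum(axis=0) / Z  # <E^2>
    return (E2 - E1 ** 2) / T ** 2
```

    A two-level system with g = [1, 1] and E = [0, 1] reproduces the classic Schottky maximum C_max ≈ 0.44 k_B near T ≈ 0.42 Δ; adding, for example, a third level with Δ_2/Δ_1 = 10 and degeneracy 30 produces a two-peak curve of the kind analysed above.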

  11. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  12. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    NASA Technical Reports Server (NTRS)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  13. Judgements about the relation between force and trajectory variables in verbally described ballistic projectile motion.

    PubMed

    White, Peter A

    2013-01-01

    How accurate are explicit judgements about familiar forms of object motion, and how are they made? Participants judged the relations between force exerted in kicking a soccer ball and variables that define the trajectory of the ball: launch angle, maximum height attained, and maximum distance reached. Judgements tended to conform to a simple heuristic that judged force tends to increase as maximum height and maximum distance increase, with launch angle not being influential. Support was also found for the converse prediction, that judged maximum height and distance tend to increase as the amount of force described in the kick increases. The observed judgemental tendencies did not resemble the objective relations, in which force is a function of interactions between the trajectory variables. This adds to a body of research indicating that practical knowledge based on experiences of actions on objects is not available to the processes that generate judgements in higher cognition and that such judgements are generated by simple rules that do not capture the objective interactions between the physical variables.

  14. An alternating renewal process describes the buildup of perceptual segregation

    PubMed Central

    Steele, Sara A.; Tranchina, Daniel; Rinzel, John

    2015-01-01

    For some ambiguous scenes perceptual conflict arises between integration and segregation. Initially, all stimulus features seem integrated. Then abruptly, perhaps after a few seconds, a segregated percept emerges. For example, segregation of acoustic features into streams may require several seconds. In behavioral experiments, when a subject's reports of stream segregation are averaged over repeated trials, one obtains a buildup function, a smooth time course for segregation probability. The buildup function has been said to reflect an underlying mechanism of evidence accumulation or adaptation. During long duration stimuli perception may alternate between integration and segregation. We present a statistical model based on an alternating renewal process (ARP) that generates buildup functions without an accumulative process. In our model, perception alternates during a trial between different groupings, as in perceptual bistability, with random and independent dominance durations sampled from different percept-specific probability distributions. Using this theory, we describe the short-term dynamics of buildup observed on short trials in terms of the long-term statistics of percept durations for the two alternating perceptual organizations. Our statistical-dynamics model describes well the buildup functions and alternations in simulations of pseudo-mechanistic neuronal network models with percept-selective populations competing through mutual inhibition. Even though the competition model can show history dependence through slow adaptation, our statistical switching model, that neglects history, predicts well the buildup function. We propose that accumulation is not a necessary feature to produce buildup. Generally, if alternations between two states exhibit independent durations with stationary statistics then the associated buildup function can be described by the statistical dynamics of an ARP. PMID:25620927
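
    The ARP idea can be sketched directly: simulate trials that start integrated and alternate between percepts with independently drawn dominance durations, then average the reported state across trials. The sketch below is a minimal Monte-Carlo illustration of that idea, not the authors' code; the duration distributions passed in are assumptions.

```python
import numpy as np

def buildup_function(t_grid, draw_integrated, draw_segregated,
                     n_trials=2000, rng=None):
    """Monte-Carlo buildup function for an alternating renewal process:
    each trial starts in the integrated state and alternates, with
    dominance durations drawn independently from the two distributions."""
    if rng is None:
        rng = np.random.default_rng(0)
    p = np.zeros(len(t_grid))
    for _ in range(n_trials):
        switches, t, segregated = [], 0.0, False
        while t < t_grid[-1]:
            t += draw_segregated(rng) if segregated else draw_integrated(rng)
            switches.append(t)
            segregated = not segregated
        # segregated at time tau iff an odd number of switches occurred by tau
        n_switches = np.searchsorted(switches, t_grid, side="right")
        p += (n_switches % 2 == 1)
    return p / n_trials
```

    With, say, exponential dominance durations of equal mean for both percepts, the buildup rises from 0 toward the steady-state segregation probability 0.5, with no accumulative process anywhere in the simulation.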

  15. Asphere, O asphere, how shall we describe thee?

    NASA Astrophysics Data System (ADS)

    Forbes, G. W.; Brophy, C. P.

    2008-09-01

    Two key criteria govern the characterization of nominal shapes for aspheric optical surfaces. An efficient representation describes the spectrum of relevant shapes to the required accuracy by using the fewest decimal digits in the associated coefficients. Also, a representation is more effective if it can, in some way, facilitate other processes, such as optical design, tolerancing, or direct human interpretation. With the development of better tools for their design, metrology, and fabrication, aspheric optics are becoming ever more pervasive. As part of this trend, aspheric departures of up to a thousand microns or more must be characterized to almost nanometre precision. For all but the simplest of shapes, this is not as easy as it might sound. Efficiency is therefore increasingly important. Further, metrology tools continue to be one of the weaker links in the cost-effective production of aspheric optics. Interferometry particularly struggles to deal with steep slopes in aspheric departure. Such observations motivated the ideas described in what follows for modifying the conventional description of rotationally symmetric aspheres to use orthogonal bases that boost efficiency. The new representations can facilitate surface tolerancing as well as the design of aspheres with cost-effective metrology options. These ideas enable the description of aspheric shapes in terms of decompositions that not only deliver improved efficiency and effectiveness, but that are also shown to admit direct interpretations. While it's neither poetry nor a cure-all, an old blight can be relieved.

  16. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    PubMed Central

    Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.

    2015-01-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results

  17. The RMT method for describing many-electron atoms in intense short laser pulses

    NASA Astrophysics Data System (ADS)

    Lysaght, M. A.; Moore, L. R.; Nikolopoulos, L. A. A.; Parker, J. S.; van der Hart, H. W.; Taylor, K. T.

    2012-11-01

    We describe how we have developed an ab initio R-Matrix incorporating Time (RMT) method to provide an accurate description of the single ionization of a general many-electron atom exposed to short intense laser pulses. The new method implements the "division-of-space" concept central to R-matrix theory and takes over the sophisticated time-propagation algorithms of the HELIUM code. We have tested the accuracy of the new method by calculating multiphoton ionization rates of He and Ne and have found excellent agreement with other highly accurate and well-established methods.

  18. On the terminology for describing the length-force relationship and its changes in airway smooth muscle.

    PubMed

    Bai, Tony R; Bates, Jason H T; Brusasco, Vito; Camoretti-Mercado, Blanca; Chitano, Pasquale; Deng, Lin Hong; Dowell, Maria; Fabry, Ben; Ford, Lincoln E; Fredberg, Jeffrey J; Gerthoffer, William T; Gilbert, Susan H; Gunst, Susan J; Hai, Chi-Ming; Halayko, Andrew J; Hirst, Stuart J; James, Alan L; Janssen, Luke J; Jones, Keith A; King, Greg G; Lakser, Oren J; Lambert, Rodney K; Lauzon, Anne-Marie; Lutchen, Kenneth R; Maksym, Geoffrey N; Meiss, Richard A; Mijailovich, Srboljub M; Mitchell, Howard W; Mitchell, Richard W; Mitzner, Wayne; Murphy, Thomas M; Paré, Peter D; Schellenberg, R Robert; Seow, Chun Y; Sieck, Gary C; Smith, Paul G; Smolensky, Alex V; Solway, Julian; Stephens, Newman L; Stewart, Alastair G; Tang, Dale D; Wang, Lu

    2004-12-01

    The observation that the length-force relationship in airway smooth muscle can be shifted along the length axis by accommodating the muscle at different lengths has stimulated great interest. In light of the recent understanding of the dynamic nature of length-force relationship, many of our concepts regarding smooth muscle mechanical properties, including the notion that the muscle possesses a unique optimal length that correlates to maximal force generation, are likely to be incorrect. To facilitate accurate and efficient communication among scientists interested in the function of airway smooth muscle, a revised and collectively accepted nomenclature describing the adaptive and dynamic nature of the length-force relationship will be invaluable. Setting aside the issue of underlying mechanism, the purpose of this article is to define terminology that will aid investigators in describing observed phenomena. In particular, we recommend that the term "optimal length" (or any other term implying a unique length that correlates with maximal force generation) for airway smooth muscle be avoided. Instead, the in situ length or an arbitrary but clearly defined reference length should be used. We propose the usage of "length adaptation" to describe the phenomenon whereby the length-force curve of a muscle shifts along the length axis due to accommodation of the muscle at different lengths. We also discuss frequently used terms that do not have commonly accepted definitions that should be used cautiously.

  19. Accurate energy levels for singly ionized platinum (Pt II)

    NASA Technical Reports Server (NTRS)

    Reader, Joseph; Acquista, Nicolo; Sansonetti, Craig J.; Engleman, Rolf, Jr.

    1988-01-01

    New observations of the spectrum of Pt II have been made with hollow-cathode lamps. The region from 1032 to 4101 A was observed photographically with a 10.7-m normal-incidence spectrograph. The region from 2245 to 5223 A was observed with a Fourier-transform spectrometer. Wavelength measurements were made for 558 lines. The uncertainties vary from 0.0005 to 0.004 A. From these measurements and three parity-forbidden transitions in the infrared, accurate values were determined for 28 even and 72 odd energy levels of Pt II.

  20. Photoacoustic computed tomography without accurate ultrasonic transducer responses

    NASA Astrophysics Data System (ADS)

    Sheng, Qiwei; Wang, Kun; Xia, Jun; Zhu, Liren; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Subsequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems to one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  1. On the Accurate Prediction of CME Arrival At the Earth

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Hess, Phillip

    2016-07-01

    We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out 3-D measurements by superimposing geometries onto the ejecta and the sheath separately. These measurements are then used to constrain a Drag-Based Model, which is improved by including a height dependence of the drag coefficient. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare them with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours and an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours with an RMS error within an hour. By using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitations and implications of our prediction method will be discussed.
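
    A drag-based model of the kind mentioned above can be sketched with a simple integrator. The governing equation dv/dt = -γ(r)(v - w)|v - w| is the standard DBM form; the height-dependent γ(r) and every number below are illustrative assumptions, not the authors' calibrated values.

```python
import numpy as np

def dbm_arrival(r0_km, v0_kms, w_kms, gamma_fn, r_target_km, dt_s=60.0):
    """Integrate the drag-based model dv/dt = -gamma(r)*(v - w)*|v - w|
    until the front reaches r_target_km; gamma_fn(r) allows a
    height-dependent drag coefficient (functional form assumed here)."""
    r, v, t = float(r0_km), float(v0_kms), 0.0
    while r < r_target_km:
        v += -gamma_fn(r) * (v - w_kms) * abs(v - w_kms) * dt_s
        r += v * dt_s
        t += dt_s
    return t, v

# Example: a fast front launched at 20 solar radii into a 400 km/s wind,
# with a drag coefficient that decays with heliocentric distance.
RS, AU = 6.96e5, 1.496e8                         # km
gamma = lambda r: 2e-7 * (20 * RS / r) ** 0.5    # km^-1, illustrative
t_arr, v_arr = dbm_arrival(20 * RS, 1000.0, 400.0, gamma, AU)
```

    Drag only relaxes the front speed toward the ambient wind speed, so the arrival speed always lies between w and the launch speed.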

  2. An accurate geometric distance to the compact binary SS Cygni vindicates accretion disc theory.

    PubMed

    Miller-Jones, J C A; Sivakoff, G R; Knigge, C; Körding, E G; Templeton, M; Waagen, E O

    2013-05-24

    Dwarf novae are white dwarfs accreting matter from a nearby red dwarf companion. Their regular outbursts are explained by a thermal-viscous instability in the accretion disc, described by the disc instability model that has since been successfully extended to other accreting systems. However, the prototypical dwarf nova, SS Cygni, presents a major challenge to our understanding of accretion disc theory. At the distance of 159 ± 12 parsecs measured by the Hubble Space Telescope, it is too luminous to be undergoing the observed regular outbursts. Using very long baseline interferometric radio observations, we report an accurate, model-independent distance to SS Cygni that places the source substantially closer at 114 ± 2 parsecs. This reconciles the source behavior with our understanding of accretion disc theory in accreting compact objects.

  3. A six-parameter space to describe galaxy diversification

    NASA Astrophysics Data System (ADS)

    Fraix-Burnet, D.; Chattopadhyay, T.; Chattopadhyay, A. K.; Davoust, E.; Thuillard, M.

    2012-09-01

    Context. The diversification of galaxies is caused by transforming events such as accretion, interaction, or mergers. These explain the formation and evolution of galaxies, which can now be described by many observables. Multivariate analyses are the obvious tools to tackle the available datasets and understand the differences between different kinds of objects. However, depending on the method used, redundancies, incompatibilities, or subjective choices of the parameters can diminish the usefulness of these analyses. The behaviour of the available parameters should be analysed before any objective reduction in the dimensionality and any subsequent clustering analyses can be undertaken, especially in an evolutionary context. Aims: We study a sample of 424 early-type galaxies described by 25 parameters, 10 of which are Lick indices, to identify the most discriminant parameters and construct an evolutionary classification of these objects. Methods: Four independent statistical methods are used to investigate the discriminant properties of the observables and the partitioning of the 424 galaxies: principal component analysis, K-means cluster analysis, minimum contradiction analysis, and Cladistics. Results: The methods agree in terms of six parameters: central velocity dispersion, disc-to-bulge ratio, effective surface brightness, metallicity, and the line indices NaD and OIII. The partitioning found using these six parameters, when projected onto the fundamental plane, looks very similar to the partitioning obtained previously for a totally different sample and based only on the parameters of the fundamental plane. Two additional groups are identified here, and we are able to provide some more constraints on the assembly history of galaxies within each group thanks to the larger number of parameters. We also identify another "fundamental plane" with the absolute K magnitude, the linear diameter, and the Lick index Hβ. We confirm that the Mg b vs. velocity dispersion

  4. Male powerlifting performance described from the viewpoint of complex systems.

    PubMed

    García-Manso, J M; Martín-González, J M; Da Silva-Grigoletto, M E; Vaamonde, D; Benito, P; Calderón, J

    2008-04-07

    This paper reflects on the factors that condition performance in powerlifting and proposes that the result-generating process is inadequately described by the allometric equations commonly used. We analysed the scores of 1812 lifters belonging to all body mass categories, examining the changes in the results achieved in each weight category and by each competitor. Current performance-predicting methods take into account biological variables while paying no heed to other features of competition. Performance in male powerlifting (as in other strength sports) behaves as a self-organised system with non-linear interactions between its components. Thus, multiple internal and external elements must condition changes in a competitor's score, the most important being body mass, body size, the number of practitioners, and the concurrency of favourable factors in one individual. Each category was observed to behave in a specific way at the elite level, according to the circumstances of the individuals who constitute the main elements of the competitive system in that category. In powerlifting, official weight categories are generally organised in three different groups: light (<52.0 to <60 kg), medium (<67.5 to <90.0 kg) and heavy (<100 to >125 kg) lifter categories, each of them with specific allometric exponents. The exponent should be revised periodically, especially with regard to the internal dynamics of the category, and adjusted according to possible changes affecting competition.
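
    For reference, the classical allometric normalization that the paper argues is inadequate takes the form score = total / body_mass^b. A minimal sketch, using the textbook geometric-similarity exponent b = 2/3 purely as a placeholder (the paper's point is precisely that a single exponent does not fit all categories):

```python
def allometric_score(total_kg, body_mass_kg, b=2.0 / 3.0):
    """Allometrically normalized lifting total, score = total / mass**b.
    b = 2/3 is the geometric-similarity value, used here for illustration;
    category-specific exponents are argued for in the text."""
    return total_kg / body_mass_kg ** b
```

    Under this normalization a 560 kg total at 75 kg body mass outranks a 600 kg total at 100 kg, which is the kind of cross-category comparison such formulas are meant to enable.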

  5. Inference of random walk models to describe leukocyte migration

    NASA Astrophysics Data System (ADS)

    Jones, Phoebe J. M.; Sim, Aaron; Taylor, Harriet B.; Bugeon, Laurence; Dallman, Magaret J.; Pereira, Bernard; Stumpf, Michael P. H.; Liepe, Juliane

    2015-12-01

    While the majority of cells in an organism are static and remain relatively immobile in their tissue, migrating cells occur commonly during developmental processes and are crucial for a functioning immune response. The mode of migration has been described in terms of various types of random walks. To understand the details of the migratory behaviour we rely on mathematical models and their calibration to experimental data. Here we propose an approximate Bayesian inference scheme to calibrate a class of random walk models characterized by a specific, parametric particle re-orientation mechanism to observed trajectory data. We elaborate the concept of transition matrices (TMs) to detect random walk patterns and determine a statistic to quantify these TM to make them applicable for inference schemes. We apply the developed pipeline to in vivo trajectory data of macrophages and neutrophils, extracted from zebrafish that had undergone tail transection. We find that macrophage and neutrophils exhibit very distinct biased persistent random walk patterns, where the strengths of the persistence and bias are spatio-temporally regulated. Furthermore, the movement of macrophages is far less persistent than that of neutrophils in response to wounding.
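
    A biased persistent random walk of the class referred to above can be sketched as follows. The steering rule and all parameters are illustrative assumptions, not the model inferred in the paper: the heading carries over between steps (persistence, controlled by the turning noise) and is partially corrected toward a fixed target direction each step (bias).

```python
import numpy as np

def biased_persistent_walk(n_steps, sigma, bias, rng):
    """2-D unit-step random walk with persistence (small turning noise
    sigma keeps the heading) and bias (a fraction `bias` of the angular
    offset to the +x target direction is corrected each step)."""
    heading = rng.uniform(-np.pi, np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        # signed angular offset from the current heading to the +x direction
        offset = (-heading + np.pi) % (2 * np.pi) - np.pi
        heading += bias * offset + rng.normal(0.0, sigma)
        pos[i + 1] = pos[i] + (np.cos(heading), np.sin(heading))
    return pos
```

    Setting bias to zero recovers an unbiased persistent walk; inference schemes of the kind described above would then try to recover sigma and bias from observed trajectories.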

  6. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  7. Accurate nuclear radii and binding energies from a chiral interaction

    DOE PAGES

    Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; ...

    2015-05-01

    With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  8. Accurate nuclear radii and binding energies from a chiral interaction

    SciTech Connect

    Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold

    2015-05-01

    With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  9. Efficient and accurate computation of the incomplete Airy functions

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
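
    The incomplete Airy functions themselves depend on the integration endpoint and are beyond a short sketch, but the same convergent-series-plus-asymptotics strategy is easy to illustrate on the complete Airy function Ai(x), whose Maclaurin series follows term by term from y'' = xy (the constants below are the standard values of Ai(0) and -Ai'(0); this is an analogue for illustration, not the paper's series).

```python
def airy_ai_series(x, n_terms=40):
    """Convergent Maclaurin series for the complete Airy function
    Ai(x) = c1*f(x) - c2*g(x), where f and g are the two power-series
    solutions of y'' = x*y."""
    c1 = 0.3550280538878172   # Ai(0)   = 3**(-2/3) / Gamma(2/3)
    c2 = 0.2588194037928068   # -Ai'(0) = 3**(-1/3) / Gamma(1/3)
    f_term, g_term = 1.0, x
    f, g = f_term, g_term
    for k in range(n_terms):
        # term recurrences from y'' = x*y: a_{n+2}(n+2)(n+1) = a_{n-1}
        f_term *= x ** 3 / ((3 * k + 2) * (3 * k + 3))
        g_term *= x ** 3 / ((3 * k + 3) * (3 * k + 4))
        f += f_term
        g += g_term
    return c1 * f - c2 * g
```

    The series converges for all x but needs more terms as |x| grows, which is exactly why a large-argument asymptotic expansion is paired with it in practice.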

  10. Six rules for accurate effective forecasting.

    PubMed

    Saffo, Paul

    2007-01-01

    The primary goal of forecasting is to identify the full range of possibilities facing a company, society, or the world at large. In this article, Saffo demythologizes the forecasting process to help executives become sophisticated and participative consumers of forecasts, rather than passive absorbers. He illustrates how to use forecasts to at once broaden understanding of possibilities and narrow the decision space within which one must exercise intuition. The events of 9/11, for example, were a much bigger surprise than they should have been. After all, airliners flown into monuments were the stuff of Tom Clancy novels in the 1990s, and everyone knew that terrorists had a very personal antipathy toward the World Trade Center. So why was 9/11 such a surprise? What can executives do to avoid being blind-sided by other such wild cards, be they radical shifts in markets or the seemingly sudden emergence of disruptive technologies? In describing what forecasters are trying to achieve, Saffo outlines six simple, commonsense rules that smart managers should observe as they embark on a voyage of discovery with professional forecasters. Map a cone of uncertainty, he advises, look for the S curve, embrace the things that don't fit, hold strong opinions weakly, look back twice as far as you look forward, and know when not to make a forecast.

  11. Accurate attitude determination of the LACE satellite

    NASA Technical Reports Server (NTRS)

    Miglin, M. F.; Campion, R. E.; Lemos, P. J.; Tran, T.

    1993-01-01

    The Low-power Atmospheric Compensation Experiment (LACE) satellite, launched in February 1990 by the Naval Research Laboratory, uses a magnetic damper on a gravity gradient boom and a momentum wheel with its axis perpendicular to the plane of the orbit to stabilize and maintain its attitude. Satellite attitude is determined using three types of sensors: a conical Earth scanner, a set of sun sensors, and a magnetometer. The Ultraviolet Plume Instrument (UVPI), on board LACE, consists of two intensified CCD cameras and a gimballed pointing mirror. The primary purpose of the UVPI is to image rocket plumes from space in the ultraviolet and visible wavelengths. Secondary objectives include imaging stars, atmospheric phenomena, and ground targets. The problem facing the UVPI experimenters is that the sensitivity of the LACE satellite attitude sensors is not always adequate to correctly point the UVPI cameras. Our solution is to point the UVPI cameras at known targets and use the information thus gained to improve attitude measurements. This paper describes the three methods developed to determine improved attitude values using the UVPI for both real-time operations and post-observation analysis.

  12. Optical Chopper Assembly for the Mars Observer

    NASA Technical Reports Server (NTRS)

    Allen, Terry

    1993-01-01

    This paper describes the Honeywell-developed Optical Chopper Assembly (OCA), a component of Mars Observer spacecraft's Pressure Modulator Infrared Radiometer (PMIRR) science experiment, which will map the Martian atmosphere during 1993 to 1995. The OCA is unique because of its constant accurate rotational speed, low electrical power consumption, and long-life requirements. These strict and demanding requirements were achieved by use of a number of novel approaches.

  13. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  14. Towards a Density Functional Theory Exchange-Correlation Functional able to describe localization/delocalization

    NASA Astrophysics Data System (ADS)

    Mattsson, Ann E.; Wills, John M.

    2013-03-01

    The inability to computationally describe the physics governing the properties of actinides and their alloys is the poster child of failure of existing Density Functional Theory exchange-correlation functionals. The intricate competition between localization and delocalization of the electrons, present in these materials, exposes the limitations of functionals only designed to properly describe one or the other situation. We will discuss the manifestation of this competition in real materials and propositions on how to construct a functional able to accurately describe properties of these materials. In addition we will discuss both the importance of using the Dirac equation to describe the relativistic effects in these materials, and the connection to the physics of transition metal oxides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  15. The allele distribution in next-generation sequencing data sets is accurately described as the result of a stochastic branching process.

    PubMed

    Heinrich, Verena; Stange, Jens; Dickhaus, Thorsten; Imkeller, Peter; Krüger, Ulrike; Bauer, Sebastian; Mundlos, Stefan; Robinson, Peter N; Hecht, Jochen; Krawitz, Peter M

    2012-03-01

    With the availability of next-generation sequencing (NGS) technology, it is expected that sequence variants may be called on a genomic scale. Here, we demonstrate that a deeper understanding of the distribution of the variant call frequencies at heterozygous loci in NGS data sets is a prerequisite for sensitive variant detection. We model the crucial steps in an NGS protocol as a stochastic branching process and derive a mathematical framework for the expected distribution of alleles at heterozygous loci before measurement, that is, before sequencing. We confirm our theoretical results by analyzing technical replicates of human exome data and demonstrate that the variance of allele frequencies at heterozygous loci is higher than expected from a simple binomial distribution. Due to this high variance, mutation callers relying on binomially distributed priors are less sensitive for heterozygous variants that deviate strongly from the expected mean frequency. Our results also indicate that error rates can be reduced to a greater degree by technical replicates than by increasing sequencing depth.
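    The overdispersion the authors report can be illustrated with a toy branching-process simulation (a sketch only; the amplification parameters and read depth below are made up for illustration, not taken from the paper):

```python
import random

def pcr_branching(n0, cycles, eff):
    # Galton-Watson branching: each molecule duplicates with
    # probability `eff` in each amplification cycle
    n = n0
    for _ in range(cycles):
        n += sum(1 for _ in range(n) if random.random() < eff)
    return n

def allele_fraction(depth=100, n0=10, cycles=10, eff=0.6):
    a = pcr_branching(n0, cycles, eff)  # amplified copies of allele A
    b = pcr_branching(n0, cycles, eff)  # amplified copies of allele B
    # sequencing step: draw `depth` reads from the amplified pool
    k = sum(1 for _ in range(depth) if random.random() < a / (a + b))
    return k / depth

random.seed(1)
fracs = [allele_fraction() for _ in range(500)]
mean = sum(fracs) / len(fracs)
var = sum((f - mean) ** 2 for f in fracs) / len(fracs)
binomial_var = 0.5 * 0.5 / 100  # variance if reads were plain binomial draws
print(mean, var, binomial_var)  # empirical variance exceeds the binomial value
```

    The stochastic amplification before sequencing inflates the variance of the observed allele fraction beyond the binomial value, which is the effect that degrades callers with binomial priors.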

  16. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  17. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
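    The stated temperature and pressure dependence of the rate constants can be sketched with a generic kinetic form (illustrative only; the functional form and every number below are assumptions, not the fitted values of the Instituto Mexicano del Petroleo model):

```python
import math

R = 8.314  # J/(mol*K)

def rate_constant(T, P, k0, Ea, alpha, P0=1.0):
    """Hypothetical kinetic form: Arrhenius in temperature with a
    power-law pressure correction, k = k0*exp(-Ea/(R*T))*(P/P0)**alpha."""
    return k0 * math.exp(-Ea / (R * T)) * (P / P0) ** alpha

# Illustrative numbers only (not fitted reformer kinetics):
k_low = rate_constant(T=750.0, P=10.0, k0=1e6, Ea=1.2e5, alpha=-0.5)
k_high = rate_constant(T=790.0, P=10.0, k0=1e6, Ea=1.2e5, alpha=-0.5)
print(k_high / k_low)  # higher-severity temperature raises the rate
```

    Making each reaction's rate constant an explicit function of both T and P is what lets such a model extrapolate to the higher-severity conditions mentioned in the abstract.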

  18. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
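    The first- or second-order temperature equations mentioned above can be illustrated by fitting a quadratic through three calibration points (the temperatures and relative-power values below are hypothetical, not the paper's measurements):

```python
def quadratic_through(points):
    """Lagrange form of the quadratic through three (T, value) points."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

# Hypothetical calibration: relative radiant power of a red AlInGaP LED
# at three junction temperatures (values are illustrative, not measured).
rel_power = quadratic_through([(25.0, 1.00), (50.0, 0.88), (85.0, 0.70)])
print(rel_power(65.0))  # predicted relative power between calibration points
```

    A feedback controller would apply such a per-channel correction at the measured junction temperature before solving for the RGB drive currents that hold the target chromaticity.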

  19. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  20. An Accurate, Simplified Model of Intrabeam Scattering

    SciTech Connect

    Bane, Karl LF

    2002-05-23

    Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS) we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η_{x,y}²/β_{x,y} has been replaced by H_{x,y}) asymptotically approaches the result of Bjorken-Mtingwa.

  1. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  2. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.

  3. A new method for describing soil detachment by a single waterdrop impact

    NASA Astrophysics Data System (ADS)

    Ryżak, Magdalena; Bieganowski, Andrzej

    2013-04-01

    Soil is one of the elements that determine the water cycle due to its retention ability; it is also a landscape-shaping element and the basis of agricultural production. It is a limited and non-renewable element of the geographical environment at a certain stage of Earth's history, and should therefore be protected. One of the physical processes of soil degradation is water erosion. In the first phase, there is detachment of particles eroded from the surface, i.e. splash. Depending on the energy and intensity of precipitation and the terrain features, this can lead to runoff in the next stage, and in extreme cases to rainwash of soil. Methods used previously in studies of splash were mainly based on weight measurements of the collected soil material that had splashed. This requires treatment of the total material collected, as the mass of soil displaced by the impact of a single drop is so small that it is not measurable even with a very accurate balance. In the proposed measurement method, the splashed soil material was collected on filter paper, allowing determination of the distance over which the displacement of the particles occurred, followed by analysis under the microscope of the soil material displaced at a given distance. As a result of the measurements, the relationships between the following parameters were determined: the distances of splash; the surface areas of splash tracks in relation to distance; the surface area of the solid phase transported over a given distance; and the ratio of the solid phase to the splash track area in relation to distance. Differences were observed between the results obtained both for soils of different granulometric composition and for the same soil with varying humidity. The use of optical methods in the analysis of microscopic images gave new opportunities to describe the initial phase of water erosion - splash. It facilitates analysis of splash (in the laboratory) caused by a single drop of

  4. DETAIL OF PLAQUE DESCRIBING LION SCULPTURES BY ROLAND HINTON PERRY, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL OF PLAQUE DESCRIBING LION SCULPTURES BY ROLAND HINTON PERRY, NORTHWEST ABUTMENT - Connecticut Avenue Bridge, Spans Rock Creek & Potomac Parkway at Connecticut Avenue, Washington, District of Columbia, DC

  5. Library preparation for highly accurate population sequencing of RNA viruses

    PubMed Central

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
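    The consensus step at the heart of CirSeq can be sketched as a majority vote across the tandem copies within one read (a toy illustration; the real pipeline aligns the copies and handles indels and quality scores):

```python
from collections import Counter

def circseq_consensus(read, repeat_len):
    """Majority-vote consensus across the tandem repeats within one read.
    Errors occur independently in each copy, so the per-base consensus
    error rate falls roughly as the per-copy error rate to the nth power."""
    copies = [read[i:i + repeat_len] for i in range(0, len(read), repeat_len)]
    copies = [c for c in copies if len(c) == repeat_len]
    return "".join(Counter(bases).most_common(1)[0][0] for bases in zip(*copies))

# Three tandem copies of the template ACGTACGT, each with one sequencing error:
read = "ACGTACGT" "ACCTACGT" "ACGTACGA"
print(circseq_consensus(read, 8))  # -> ACGTACGT
```

    This redundancy is why the resulting error rate can fall below the variant frequencies of an RNA virus population, enabling ultra-rare variant detection.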

  6. Accurate determination of the sedimentation flux of concentrated suspensions

    NASA Astrophysics Data System (ADS)

    Martin, J.; Rakotomalala, N.; Salin, D.

    1995-10-01

    Flow rate jumps are used to generate propagating concentration variations in a counterflow stabilized suspension (a liquid fluidized bed). An acoustic technique is used to measure accurately the resulting concentration profiles through the bed. Depending on the experimental conditions, we have observed self-sharpening, or/and self-spreading concentration fronts. Our data are analyzed in the framework of Kynch's theory, providing an accurate determination of the sedimentation flux [CU(C); U(C) is the hindered sedimentation velocity of the suspension] and its derivatives in the concentration range 30%-60%. In the vicinity of the packing concentration, controlling the flow rate has allowed us to increase the maximum packing up to 60%.
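    The role of the flux curve in Kynch's theory can be sketched with a standard hindered-settling law (the Richardson-Zaki form below is a stand-in for the experimentally measured U(C), not the authors' result):

```python
def hindered_velocity(c, u0=1.0, n=4.65):
    """Richardson-Zaki hindered settling law, U(C) = U0*(1-C)**n,
    used here only as an illustrative flux model."""
    return u0 * (1.0 - c) ** n

def flux(c, u0=1.0, n=4.65):
    # Kynch sedimentation flux C*U(C)
    return c * hindered_velocity(c, u0, n)

# In Kynch's theory a concentration front between c1 and c2 travels at the
# chord slope of the flux curve, v = (f(c2) - f(c1)) / (c2 - c1).
c1, c2 = 0.30, 0.50
front_speed = (flux(c2) - flux(c1)) / (c2 - c1)
print(front_speed)
```

    Fitting the measured concentration profiles to such chord slopes is what allows the flux CU(C) and its derivatives to be extracted over the 30%-60% range.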

  7. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  8. Interacting with image hierarchies for fast and accurate object segmentation

    NASA Astrophysics Data System (ADS)

    Beard, David V.; Eberly, David H.; Hemminger, Bradley M.; Pizer, Stephen M.; Faith, R. E.; Kurak, Charles; Livingston, Mark

    1994-05-01

    Object definition is an increasingly important area of medical image research. Accurate and fairly rapid object definition is essential for measuring the size and, perhaps more importantly, the change in size of anatomical objects such as kidneys and tumors. Rapid and fairly accurate object definition is essential for 3D real-time visualization including both surgery planning and radiation oncology treatment planning. One approach to object definition involves the use of 3D image hierarchies, such as Eberly's Ridge Flow. However, the image hierarchy segmentation approach requires user interaction in selecting regions and subtrees. Further, visualizing and comprehending the anatomy and the selected portions of the hierarchy can be problematic. In this paper we will describe the Magic Crayon tool which allows a user to define rapidly and accurately various anatomical objects by interacting with image hierarchies such as those generated with Eberly's Ridge Flow algorithm as well as other 3D image hierarchies. Preliminary results suggest that fairly complex anatomical objects can be segmented in under a minute with sufficient accuracy for 3D surgery planning, 3D radiation oncology treatment planning, and similar applications. Potential modifications to the approach for improved accuracy are summarized.

  9. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
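    The core idea, solving the sensitivity equation as a differential equation rather than truncating a Taylor series, can be shown on a toy case (the cantilever deflection relation below is a textbook illustration, not the paper's actual formulation):

```python
# For a cantilever tip deflection delta proportional to 1/I with I ~ h**3,
# the sensitivity equation is d(delta)/dh = -3*delta/h. Solving that ODE in
# closed form gives delta(h) = delta0*(h0/h)**3, whereas the linear Taylor
# series gives delta0*(1 - 3*(h - h0)/h0).

def deb_approx(delta0, h0, h):
    return delta0 * (h0 / h) ** 3

def taylor_approx(delta0, h0, h):
    return delta0 * (1.0 - 3.0 * (h - h0) / h0)

delta0, h0, h = 1.0, 10.0, 12.0   # a 20% height increase
exact = delta0 * (h0 / h) ** 3    # in this toy case the DEB form is exact
print(deb_approx(delta0, h0, h), taylor_approx(delta0, h0, h), exact)
```

    For this deliberately simple response the DEB closed form reproduces the exact answer while the linear Taylor approximation errs noticeably at a 20% perturbation, which mirrors the comparison reported in the abstract.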

  10. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  11. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  12. Continuous and discrete describing function analysis of the LST system

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.; Yackel, R. A.

    1973-01-01

    A describing function of the control moment gyros (CMG) frictional nonlinearity is derived using the analytic torque equation. Computer simulation of the simplified Large Space Telescope (LST) system with the analytic torque expression is discussed along with the transfer functions of the sampled-data LST system, and the discrete describing function of the CMG frictional nonlinearity.

  13. Oral Reading Observation System Observer's Training Manual.

    ERIC Educational Resources Information Center

    Brady, Mary Ella; And Others

    A self-instructional program for use by teachers of the handicapped, this training manual was developed to teach accurate coding with the Oral Reading Observation System (OROS), an observation system designed to code teacher-pupil verbal interaction during oral reading instruction. The body of the manual is organized to correspond to the nine…

  14. Sinusoidal input describing function for hysteresis followed by elementary backlash

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.

    1976-01-01

    The author proposes a new sinusoidal input describing function which accounts for the serial combination of hysteresis followed by elementary backlash in a single nonlinear element. The output of the hysteresis element drives the elementary backlash element. Various analytical forms of the describing function are given, depending on the a/A ratio, where a is the half width of the hysteresis band or backlash gap, and A is the amplitude of the assumed input sinusoid, and on the value of the parameter representing the fraction of a attributed to the backlash characteristic. The negative inverse describing function is plotted on a gain-phase plot, and it is seen that a relatively small amount of backlash leads to domination of the backlash character in the describing function. The extent of the region of the gain-phase plane covered by the describing function is such as to guarantee some form of limit cycle behavior in most closed-loop systems.
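    A describing function of this kind can be evaluated numerically by extracting the first harmonic of the nonlinearity's steady-state response to a sinusoid (a sketch for the backlash element alone, using the common friction-controlled idealization; Ringland's combined hysteresis-plus-backlash element is more involved):

```python
import math

def backlash_output(x, gap):
    """Quasi-static friction-controlled backlash with half-gap `gap`:
    the output holds its value until the input re-engages it on
    either side of the gap."""
    y, out = 0.0, []
    for xi in x:
        if xi - y > gap:
            y = xi - gap
        elif xi - y < -gap:
            y = xi + gap
        out.append(y)
    return out

def describing_function(amp, gap, n=4000):
    """First-harmonic gain and phase of the response to amp*sin(wt)."""
    t = [2 * math.pi * k / n for k in range(2 * n)]   # two input cycles
    y = backlash_output([amp * math.sin(tk) for tk in t], gap)
    y, t = y[n:], t[n:]                               # keep the settled cycle
    a1 = 2 / n * sum(yk * math.cos(tk) for yk, tk in zip(y, t))
    b1 = 2 / n * sum(yk * math.sin(tk) for yk, tk in zip(y, t))
    mag = math.hypot(a1, b1) / amp
    phase = math.degrees(math.atan2(a1, b1))          # relative to the input
    return mag, phase

mag, phase = describing_function(amp=1.0, gap=0.25)
print(mag, phase)  # gain below unity and a phase lag introduced by the gap
```

    The phase lag produced by the gap is what drives the negative inverse describing function into the region of the gain-phase plane that guarantees limit-cycle behavior in most closed loops.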

  15. A Fully Implicit Time Accurate Method for Hypersonic Combustion: Application to Shock-induced Combustion Instability

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Radhakrishnan, Krishnan

    1994-01-01

    A new fully implicit, time accurate algorithm suitable for chemically reacting, viscous flows in the transonic-to-hypersonic regime is described. The method is based on a class of Total Variation Diminishing (TVD) schemes and uses successive Gauss-Seidel relaxation sweeps. The inversion of large matrices is avoided by partitioning the system into reacting and nonreacting parts, but still maintaining a fully coupled interaction. As a result, the matrices that have to be inverted are of the same size as those obtained with the commonly used point implicit methods. In this paper we illustrate the applicability of the new algorithm to hypervelocity unsteady combustion applications. We present a series of numerical simulations of the periodic combustion instabilities observed in ballistic-range experiments of blunt projectiles flying at subdetonative speeds through hydrogen-air mixtures. The computed frequencies of oscillation are in excellent agreement with experimental data.

  16. Accurate vessel segmentation with constrained B-snake.

    PubMed

    Yuanzhi Cheng; Xin Hu; Ji Wang; Yadong Wang; Tamura, Shinichi

    2015-08-01

    We describe an active contour framework with accurate shape and size constraints on the vessel cross-sectional planes to produce the vessel segmentation. It starts with a multiscale vessel axis tracing in a 3D computed tomography (CT) data set, followed by vessel boundary delineation on the cross-sectional planes derived from the extracted axis. The vessel boundary surface is deformed under constrained movements on the cross sections and is voxelized to produce the final vascular segmentation. The novelty of this paper lies in the accurate contour point detection of thin vessels based on the CT scanning model, in the efficient implementation of missing contour points in the problematic regions and in the active contour model with accurate shape and size constraints. The main advantage of our framework is that it avoids disconnected and incomplete segmentation of the vessels in the problematic regions that contain touching vessels (vessels in close proximity to each other), diseased portions (pathologic structure attached to a vessel), and thin vessels. It is particularly suitable for accurate segmentation of thin and low contrast vessels. Our method is evaluated and demonstrated on CT data sets from our partner site, and its results are compared with three related methods. Our method is also tested on two publicly available databases and its results are compared with the recently published method. The applicability of the proposed method to some challenging clinical problems, the segmentation of the vessels in the problematic regions, is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate vessel boundaries with a level of variability similar to that obtained manually.

  17. Describing Willow Flycatcher habitats: scale perspectives and gender differences

    USGS Publications Warehouse

    Sedgwick, James A.; Knopf, Fritz L.

    1992-01-01

    We compared habitat characteristics of nest sites (female-selected sites) and song perch sites (male-selected sites) with those of sites unused by Willow Flycatchers (Empidonax traillii) at three different scales of vegetation measurement: (1) microplot (central willow [Salix spp.] bush and four adjacent bushes); (2) mesoplot (0.07 ha); and, (3) macroplot (flycatcher territory size). Willow Flycatchers exhibited vegetation preferences at all three scales. Nest sites were distinguished by high willow density and low variability in willow patch size and bush height. Song perch sites were characterized by large central shrubs, low central shrub vigor, and high variability in shrub size. Unused sites were characterized by greater distances between willows and willow patches, less willow coverage, and a smaller riparian zone width than either nest or song perch sites. At all scales, nest sites were situated farther from unused sites in multivariate habitat space than were song perch sites, suggesting (1) a correspondence among scales in their ability to describe Willow Flycatcher habitat, and (2) females are more discriminating in habitat selection than males. Microhabitat differences between male-selected (song perch) and female-selected (nest) sites were evident at the two smaller scales; at the finest scale, the segregation in habitat space between male-selected and female-selected sites was greater than that between male-selected and unused sites. Differences between song perch and nest sites were not apparent at the scale of flycatcher territory size, possibly due to inclusion of (1) both nest and song perch sites, (2) defended, but unused habitat, and/or (3) habitat outside of the territory, in larger scale analyses. The differences between nest and song perch sites at the finer scales reflect their different functions (e.g., nest concealment and microclimatic requirements vs. 
advertising and territorial defense, respectively), and suggest that the exclusive use

  18. Pharmacokinetic population study to describe cefepime lung concentrations.

    PubMed

    Breilh, D; Saux, M C; Delaisement, C; Fratta, A; Ducint, D; Velly, J F; Couraud, L

    2001-01-01

    The pharmacokinetics of cefepime, administered as 2 g bid over 3 days to achieve steady state, were studied in plasma and lung tissue in 16 patients (15 male, one female) undergoing lung surgery for bronchial epithelioma. The aims of this study were firstly to quantify cefepime lung diffusion by comparing cefepime lung concentrations with cefepime serum concentrations, and secondly to estimate population pharmacokinetic parameters of cefepime in lung tissue using NONMEM. The mean characteristics of the patients were: age, 60 years (range, 51-69 years); weight, 73 kg (range, 62-87 kg); and creatinine clearance, 77 ml/min (range, 62-92 ml/min). Both serum sample (two per patient) and lung sample (one per patient) cefepime concentrations were analysed by HPLC with UV detection. Five groups were made according to the time of sampling after the fifth (last) cefepime intravenous infusion: 0.5 h (n=2), 2 h (n=5), 4 h (n=3), 8 h (n=3) and 12 h (n=3). The cefepime concentration ratio between lung and serum was calculated for each group, and statistical analysis showed no significant difference between groups. The mean concentration ratio between lung and serum was 101% (range, 70-130%). To explain this observation, a two-compartment pharmacokinetic model with a population approach was used to describe the pharmacokinetic parameters of cefepime both in lung and in serum. Serum was assimilated to the central compartment and lung was the peripheral compartment. NONMEM was used to estimate the mean and the variance of the pharmacokinetic parameters. Central volume of distribution (V(d)), steady-state volume of distribution (V(ss)), central clearance (CL) and the transfer constants (K(cp)) from serum to lung and (K(pc)) from lung to serum were estimated. The central elimination half-life t(1/2beta) was extrapolated from the elimination constant beta.
Results were: V(d)= 15.62 +/- 2.56 l, V(ss)= 17.58 +/- 2.58 l, CL = 3.65 +/- 1.25 l/h, beta = 0.234 h(-1), t(1/2beta)= 2.96 hours, K(cp)= 12
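
    The two-compartment structure described above (serum as the central compartment, lung as the peripheral one) can be sketched numerically. This is a minimal illustration, not the study's NONMEM analysis: V(d) and CL follow the reported population means, while K(cp) and K(pc) are placeholders, since the reported transfer constants are truncated above.

```python
# Illustrative two-compartment model (serum = central, lung = peripheral),
# integrated with forward Euler after an IV bolus dose. V(d) and CL follow
# the reported means; k_cp and k_pc are placeholder transfer constants.

def simulate_two_compartment(dose_mg=2000.0, vd_l=15.62, cl_l_per_h=3.65,
                             k_cp=12.0, k_pc=10.0, t_end_h=12.0, dt_h=1e-4):
    """Return (times, serum concentrations) over one dosing interval."""
    k_el = cl_l_per_h / vd_l            # first-order elimination from serum
    a_c, a_p = dose_mg, 0.0             # drug amounts (mg) in each compartment
    t = 0.0
    times, central = [], []
    while t <= t_end_h:
        times.append(t)
        central.append(a_c / vd_l)      # serum concentration (mg/l)
        da_c = (-k_el * a_c - k_cp * a_c + k_pc * a_p) * dt_h
        da_p = (k_cp * a_c - k_pc * a_p) * dt_h
        a_c, a_p = a_c + da_c, a_p + da_p
        t += dt_h
    return times, central
```

    With comparable transfer constants, the two compartments equilibrate quickly, which is qualitatively consistent with the near-100% lung-to-serum ratio reported above.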

  19. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed in the saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating it as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken at the axis of the solid cylindrical housing using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple correction based on a first- or second-order inertia model. The comparison demonstrates that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperatures is possible thanks to the thermometer's low inertia and the fast space marching method used to solve the inverse heat conduction problem.
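
    For a first-order sensor with time constant tau, the inertia correction mentioned above amounts to T_fluid ≈ T_indicated + tau * dT_indicated/dt. A minimal sketch, with an illustrative time constant and a step from ambient to boiling temperature (the values are assumptions, not the paper's):

```python
import math

# Simulate a first-order sensor with time constant tau responding to a step
# from ambient (20 °C) to boiling water (100 °C), then reconstruct the fluid
# temperature from the readings. tau and the step are illustrative values.

def simulate_and_correct(tau_s=3.0, t_fluid=100.0, t0=20.0, dt=0.01, n=2000):
    # Exact discrete update of the first-order sensor response.
    indicated = [t0]
    for _ in range(n):
        prev = indicated[-1]
        indicated.append(t_fluid + (prev - t_fluid) * math.exp(-dt / tau_s))
    # First-order correction: T_fluid ≈ T_indicated + tau * dT_indicated/dt,
    # with the derivative estimated by a backward difference.
    corrected = []
    for i in range(1, len(indicated)):
        dT_dt = (indicated[i] - indicated[i - 1]) / dt
        corrected.append(indicated[i] + tau_s * dT_dt)
    return indicated, corrected
```

    The corrected series recovers the fluid temperature almost immediately, while the raw reading takes several time constants to approach it, the same contrast the paper reports between corrected and uncorrected industrial thermometers.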

  20. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature), written between 1349 and 1350.

  1. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests but opposed by the California Right to Life Education Fund, which believes it discredits abstinence-only material.

  2. Nonlinear waves described by the generalized Swift-Hohenberg equation

    NASA Astrophysics Data System (ADS)

    Ryabov, P. N.; Kudryashov, N. A.

    2017-01-01

    We study the wave processes described by the generalized Swift-Hohenberg equation. We show that the traveling wave reduction of this equation does not pass the Kovalevskaya test. Some solitary wave solutions and kink solutions of the generalized Swift-Hohenberg equation are found. We use a pseudo-spectral algorithm to perform numerical simulation of the wave processes described by the mixed boundary value problem for the generalized Swift-Hohenberg equation. This algorithm was tested on the obtained solutions. Some features of the evolution of nonlinear waves described by the generalized Swift-Hohenberg equation are studied.

  3. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  4. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features.

  5. A new and accurate continuum description of moving fronts

    NASA Astrophysics Data System (ADS)

    Johnston, S. T.; Baker, R. E.; Simpson, M. J.

    2017-03-01

    Processes that involve moving fronts of populations are prevalent in ecology and cell biology. A common approach to describe these processes is a lattice-based random walk model, which can include mechanisms such as crowding, birth, death, movement and agent–agent adhesion. However, these models are generally analytically intractable and it is computationally expensive to perform sufficiently many realisations of the model to obtain an estimate of average behaviour that is not dominated by random fluctuations. To avoid these issues, both mean-field (MF) and corrected mean-field (CMF) continuum descriptions of random walk models have been proposed. However, both continuum descriptions are inaccurate outside of limited parameter regimes, and CMF descriptions cannot be employed to describe moving fronts. Here we present an alternative description in terms of the dynamics of groups of contiguous occupied lattice sites and contiguous vacant lattice sites. Our description provides an accurate prediction of the average random walk behaviour in all parameter regimes. Critically, our description accurately predicts the persistence or extinction of the population in situations where previous continuum descriptions predict the opposite outcome. Furthermore, unlike traditional MF models, our approach provides information about the spatial clustering within the population and, subsequently, the moving front.
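
    The contrast between a lattice-based random walk with crowding and its mean-field description can be illustrated with a toy model. This is not the authors' model: the lattice here is one-dimensional, only movement and birth are included, and all parameters are arbitrary.

```python
import random

# Toy 1D lattice random walk with crowding (exclusion) and birth, alongside
# the corresponding mean-field logistic equation dC/dt = p_b * C * (1 - C).
# An illustration only; a site vacated and refilled within one sweep may
# act twice, which is acceptable for a sketch.

def lattice_density(width=200, steps=400, birth_p=0.01, seed=1):
    random.seed(seed)
    occ = [False] * width
    for i in range(0, width, 10):           # 10% initial occupancy
        occ[i] = True
    for _ in range(steps):
        agents = [i for i, o in enumerate(occ) if o]
        random.shuffle(agents)
        for i in agents:
            if not occ[i]:                  # site was vacated this sweep
                continue
            j = (i + random.choice((-1, 1))) % width
            if not occ[j]:                  # move blocked if target occupied
                occ[i], occ[j] = False, True
                i = j
            if random.random() < birth_p:   # birth onto a neighbouring site
                j = (i + random.choice((-1, 1))) % width
                if not occ[j]:
                    occ[j] = True
    return sum(occ) / width

def mean_field_density(c0=0.1, steps=400, birth_p=0.01, dt=1.0):
    c = c0
    for _ in range(steps):
        c += dt * birth_p * c * (1.0 - c)   # logistic growth, no spatial detail
    return c
```

    Because births place offspring next to parents, the lattice density typically lags the mean-field prediction; this is the kind of clustering-induced discrepancy that the corrected continuum descriptions discussed above aim to capture.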

  6. Storyboard GALILEO CRUISE SCIENCE OPPORTUNITIES describes asteroid encounters

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Storyboard with mosaicked image of an asteroid and entitled GALILEO CRUISE SCIENCE OPPORTUNITIES describes asteroid objectives. These objectives include: first asteroid encounter; surface geology; composition, size, shape, and mass; and the relation of primitive bodies to meteorites.

  7. Describing Simple Data Access Services Version 1.0

    NASA Astrophysics Data System (ADS)

    Plante, Raymond; Delago, Jesus; Harrison, Paul; Tody, Doug; IVOA Registry Working Group

    2013-11-01

    An application that queries or consumes descriptions of VO resources must be able to recognize a resource's support for standard IVOA protocols. This specification describes how to describe a service that supports any of the four fundamental data access protocols: Simple Cone Search (SCS), Simple Image Access (SIA), Simple Spectral Access (SSA), and Simple Line Access (SLA), using the VOResource XML encoding standard. A key part of this specification is the set of VOResource XML extension schemas that define new metadata specific to those protocols. This document also covers rules for describing such services within the context of IVOA Registries and data discovery, as well as the VO Standard Interface (VOSI) and service self-description. In particular, this document spells out the essential markup needed to identify support for a standard protocol and the base URL required to access the interface that supports that protocol.

  8. Zulma Ageitos de Castellanos: Publications and status of described taxa.

    PubMed

    Signorelli, Javier H; Urteaga, Diego; Teso, Valeria

    2015-10-28

    Zulma Ageitos de Castellanos was an Argentinian malacologist working in the "Facultad de Ciencias Naturales y Museo" at La Plata University where she taught invertebrate zoology between 1947 and 1990. Her scientific publications are listed in chronological order. Described genus-group and species-group taxa are listed. Information about the type locality and type material, and taxonomic remarks are also provided. Finally, type material of all described taxa was requested and, when located, illustrated.

  9. Accurate taxonomic assignment of short pyrosequencing reads.

    PubMed

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
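
    The node-selection idea above can be illustrated on a toy taxonomy. This sketch ignores the suffix-array machinery and simply scores every node by the F1 combination of precision and recall over its leaf set; the taxonomy and sequence names are hypothetical.

```python
# Toy version of the assignment rule: score every taxonomy node by the F1
# combination of precision and recall over its leaf set, and return the best.
# The real method uses a suffix array over the reference sequences to do
# this efficiently; here we enumerate nodes directly.

TAXONOMY = {                      # internal node -> children; leaves are sequences
    "root": ["cladeA", "cladeB"],
    "cladeA": ["s1", "s2", "s3"],
    "cladeB": ["s4", "s5"],
}

def leaves_under(node):
    children = TAXONOMY.get(node)
    if children is None:          # a leaf (reference sequence)
        return {node}
    leaves = set()
    for child in children:
        leaves |= leaves_under(child)
    return leaves

def best_node(matching_leaves):
    """Return the node with the best F1 of precision and recall."""
    candidates = list(TAXONOMY) + sorted(leaves_under("root"))
    best, best_f1 = None, -1.0
    for node in candidates:
        under = leaves_under(node)
        hits = len(under & matching_leaves)
        if hits == 0:
            continue
        precision = hits / len(under)
        recall = hits / len(matching_leaves)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best, best_f1 = node, f1
    return best
```

    A read matching s1 and s2 maps to cladeA, as under the LCA rule; but a read matching only s1 and s4 maps to a single leaf rather than being pushed up to the root, the over-broad clade the lowest-common-ancestor rule would choose.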

  10. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  11. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  12. Sparse and accurate high resolution SAR imaging

    NASA Astrophysics Data System (ADS)

    Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian

    2012-05-01

    We investigate the usage of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user parameter-free. IAA has been shown to reconstruct SAR images with excellent side lobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then we enforce sparsity by using MAP with a sparsity-inducing prior. By coupling these two methods, we can produce sparse and accurate high resolution images that are conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications where the size of the problems is quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. Via the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high side lobes of the FFT. With our approach, however, clear edges, boundaries, and textures of the vehicles are obtained.

  13. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  14. Describing the unusual behavior of children with autism.

    PubMed

    Duchan, J F

    1998-01-01

    The behaviors of children with autism have been described by professionals, by family members, and also by those with autism. This article analyzes four different types of reports that contain descriptions of those with autism: (1) case studies, (2) diagnostic reports and single-subject research studies, (3) family accounts, and (4) autobiographical descriptions. Authors describe the behaviors of those with autism differently depending upon their relationship with the person they are describing, their intended audience, their goals, and the genre they use for conveying their descriptions. Authors were found to use the following types of descriptions, to varying degrees in order to achieve their goals: (1) descriptions of what a child did on a particular occasion; (2) descriptions of what a child typically does or did; (3) descriptions of what a child should have done; (4) descriptions of how behavior was experienced by a child or family member; (5) descriptions of how a third party reported a behavior; (6) metaphoric descriptions of behaviors; and (7) descriptions of how behaviors mesh with traits often associated with autism. A detailed examination of how behaviors of children with autism are described indicates that the way someone with autism is regarded and described is strongly related to what the describer wants to accomplish.

  15. Use of a negative binomial distribution to describe the presence of Sphyrion laevigatum in Genypterus blacodes.

    PubMed

    Peña-Rehbein, Patricio; De los Ríos-Escalante, Patricio; Castro, Raúl; Navarrete, Carolina

    2013-01-01

    This paper describes the frequency and number of Sphyrion laevigatum in the skin of Genypterus blacodes, an important economic resource in Chile. The analysis of a spatial distribution model indicated that the parasites tended to cluster. Variations in the number of parasites per host could be described by a negative binomial distribution. The maximum number of parasites observed per host was two.
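
    The clustering diagnosis underlying a negative binomial description is the variance-to-mean ratio: overdispersed counts (variance greater than the mean) suggest aggregation, and a method-of-moments estimate of the dispersion parameter k is mean^2 / (variance - mean). A sketch with invented host counts (not the study's data):

```python
# Dispersion check behind a negative binomial description of parasite counts.
# The host counts below are invented for illustration; in the study above,
# the maximum observed was two parasites per host.

def dispersion_stats(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)   # sample variance
    stats = {"mean": mean, "variance": var, "ratio": var / mean}
    if var > mean:                        # overdispersed -> aggregated hosts
        stats["nb_k"] = mean ** 2 / (var - mean)
    return stats

counts = [0] * 40 + [1] * 8 + [2] * 6     # hypothetical parasites per host
stats = dispersion_stats(counts)
```

    A ratio above 1 is consistent with the clustered spatial distribution reported above; a ratio near 1 would instead point to a Poisson (random) distribution of parasites among hosts.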

  16. Accurate Transposable Element Annotation Is Vital When Analyzing New Genome Assemblies

    PubMed Central

    Platt, Roy N.; Blanco-Berdugo, Laura; Ray, David A.

    2016-01-01

    Transposable elements (TEs) are mobile genetic elements with the ability to replicate themselves throughout the host genome. In some taxa TEs reach copy numbers in hundreds of thousands and can occupy more than half of the genome. The increasing number of reference genomes from nonmodel species has begun to outpace efforts to identify and annotate TE content, and the methods used vary significantly between projects. Here, we demonstrate variation that arises in TE annotations when less than optimal methods are used. We found that across a variety of taxa, the ability to accurately identify TEs based solely on homology decreased as the phylogenetic distance between the queried genome and a reference increased. Next, we annotated repeats using homology alone, as is often the case in new genome analyses, and a combination of homology and de novo methods as well as an additional manual curation step. Reannotation using these methods identified a substantial number of new TE subfamilies in previously characterized genomes, recognized a higher proportion of the genome as repetitive, and decreased the average genetic distance within TE families, implying recent TE accumulation. Finally, these findings, an increased recognition of younger TEs, were confirmed via an analysis of the postman butterfly (Heliconius melpomene). These observations imply that complete TE annotation relies on a combination of homology and de novo-based repeat identification, manual curation, and classification and that relying on simple, homology-based methods is insufficient to accurately describe the TE landscape of a newly sequenced genome. PMID:26802115

  17. The Calculation of Accurate Harmonic Frequencies of Large Molecules: The Polycyclic Aromatic Hydrocarbons, a Case Study

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    The vibrational frequencies and infrared intensities of the naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs, where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.
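
    The dual scale-factor scheme described above can be sketched as follows; the factors and harmonic frequencies here are placeholders, not the fitted values from the paper.

```python
# Sketch of the dual scale-factor scheme: one empirical factor for C-H
# stretches, another for all remaining modes. Factors and frequencies
# below are illustrative placeholders.

def scale_frequencies(modes, ch_factor=0.96, other_factor=0.98):
    """modes: list of (label, harmonic_cm1) pairs; returns scaled pairs."""
    return [(label, freq * (ch_factor if label == "CH_stretch" else other_factor))
            for label, freq in modes]

modes = [("CH_stretch", 3200.0), ("CC_stretch", 1600.0), ("ring_def", 780.0)]
scaled = scale_frequencies(modes)
```

    Separate factors are used because C-H stretches typically carry larger anharmonic corrections than the skeletal modes, so a single global factor cannot fit both groups well.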

  18. Accurate Transposable Element Annotation Is Vital When Analyzing New Genome Assemblies.

    PubMed

    Platt, Roy N; Blanco-Berdugo, Laura; Ray, David A

    2016-01-21

    Transposable elements (TEs) are mobile genetic elements with the ability to replicate themselves throughout the host genome. In some taxa TEs reach copy numbers in hundreds of thousands and can occupy more than half of the genome. The increasing number of reference genomes from nonmodel species has begun to outpace efforts to identify and annotate TE content, and the methods used vary significantly between projects. Here, we demonstrate variation that arises in TE annotations when less than optimal methods are used. We found that across a variety of taxa, the ability to accurately identify TEs based solely on homology decreased as the phylogenetic distance between the queried genome and a reference increased. Next, we annotated repeats using homology alone, as is often the case in new genome analyses, and a combination of homology and de novo methods as well as an additional manual curation step. Reannotation using these methods identified a substantial number of new TE subfamilies in previously characterized genomes, recognized a higher proportion of the genome as repetitive, and decreased the average genetic distance within TE families, implying recent TE accumulation. Finally, these findings, an increased recognition of younger TEs, were confirmed via an analysis of the postman butterfly (Heliconius melpomene). These observations imply that complete TE annotation relies on a combination of homology and de novo-based repeat identification, manual curation, and classification and that relying on simple, homology-based methods is insufficient to accurately describe the TE landscape of a newly sequenced genome.

  19. Enhanced ocean observational capability

    SciTech Connect

    Volpe, A M; Esser, B K

    2000-01-10

    Coastal oceans are vital to world health and sustenance. Technology that enables new observations has always been the driver of discovery in the ocean sciences. In this context, we describe the first at-sea deployment and operation of an inductively coupled plasma mass spectrometer (ICPMS) for continuous measurement of trace elements in seawater. The purpose of these experiments was to demonstrate that an ICPMS could be operated in a corrosive, high-vibration environment with no degradation in performance. Significant advances occurred this past year due to ship time provided by Scripps Institution of Oceanography (UCSD), as well as ship time funded through this project. Evaluation at sea involved performance testing and characterization of several real-time seawater analysis modes. We show that mass spectrometers can rapidly, precisely and accurately determine ultratrace metal concentrations in seawater, thus allowing high-resolution mapping of large areas of surface seawater. This analytical capability represents a significant advance toward real-time observation and understanding of water mass chemistry in dynamic coastal environments. In addition, a joint LLNL-SIO workshop was convened to define and design new technologies for ocean observation. Finally, collaborative efforts were initiated with atmospheric scientists at LLNL to identify realistic coastal ocean and river simulation models to support real-time analysis and modeling of hazardous material releases in coastal waterways.

  20. Describing small-scale structure in random media using pulse-echo ultrasound

    PubMed Central

    Insana, Michael F.; Wagner, Robert F.; Brown, David G.; Hall, Timothy J.

    2009-01-01

    A method for estimating structural properties of random media is described. The size, number density, and scattering strength of particles are estimated from an analysis of the radio frequency (rf) echo signal power spectrum. Simple correlation functions and the accurate scattering theory of Faran [J. J. Faran, J. Acoust. Soc. Am. 23, 405–418 (1951)], which includes the effects of shear waves, were used separately to model backscatter from spherical particles and thereby describe the structures of the medium. These methods were tested using both glass sphere-in-agar and polystyrene sphere-in-agar scattering media. With the appropriate correlation function, it was possible to measure glass sphere diameters with an accuracy of 20%. It was not possible to accurately estimate the size of polystyrene spheres with the simple spherical and Gaussian correlation models examined because of a significant shear wave contribution. Using the Faran scattering theory for spheres, however, the accuracy for estimating diameters was improved to 10% for both glass and polystyrene scattering media. It was possible to estimate the product of the average scattering particle number density and the average scattering strength per particle, but with lower accuracy than the size estimates. The dependence of the measurement accuracy on the inclusion of shear waves, the wavelength of sound, and medium attenuation are considered, and the implications for describing the structure of biological soft tissues are discussed. PMID:2299033

  1. Accurate ab initio Quartic Force Fields of Cyclic and Bent HC2N Isomers

    NASA Technical Reports Server (NTRS)

    Inostroza, Natalia; Huang, Xinchuan; Lee, Timothy J.

    2012-01-01

    Highly correlated ab initio quartic force fields (QFFs) are used to calculate the equilibrium structures and predict the spectroscopic parameters of three HC2N isomers. Specifically, the ground state quasilinear triplet and the lowest cyclic and bent singlet isomers are included in the present study. Extensive treatment of correlation effects was included using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, denoted CCSD(T). Dunning's correlation-consistent basis sets cc-pVXZ, X=3,4,5, were used, and a three-point formula for extrapolation to the one-particle basis set limit was used. Core-correlation and scalar relativistic corrections were also included to yield highly accurate QFFs. The QFFs were used together with second-order perturbation theory (with proper treatment of Fermi resonances) and variational methods to solve the nuclear Schrödinger equation. The quasilinear nature of the triplet isomer is problematic, and it is concluded that a QFF is not adequate to describe properly all of the fundamental vibrational frequencies and spectroscopic constants (though some constants not dependent on the bending motion are well reproduced by perturbation theory). On the other hand, this procedure (a QFF together with either perturbation theory or variational methods) leads to highly accurate fundamental vibrational frequencies and spectroscopic constants for the cyclic and bent singlet isomers of HC2N. All three isomers possess significant dipole moments, 3.05 D, 3.06 D, and 1.71 D, for the quasilinear triplet, the cyclic singlet, and the bent singlet isomers, respectively. It is concluded that the spectroscopic constants determined for the cyclic and bent singlet isomers are the most accurate available, and it is hoped that these will be useful in the interpretation of high-resolution astronomical observations or laboratory experiments.

  2. Accurate ab initio quartic force fields of cyclic and bent HC2N isomers.

    PubMed

    Inostroza, Natalia; Huang, Xinchuan; Lee, Timothy J

    2011-12-28

    Highly correlated ab initio quartic force fields (QFFs) are used to calculate the equilibrium structures and predict the spectroscopic parameters of three HC(2)N isomers. Specifically, the ground state quasilinear triplet and the lowest cyclic and bent singlet isomers are included in the present study. Extensive treatment of correlation effects was included using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, denoted as CCSD(T). Dunning's correlation-consistent basis sets cc-pVXZ, X = 3,4,5, were used, and a three-point formula for extrapolation to the one-particle basis set limit was used. Core-correlation and scalar relativistic corrections were also included to yield highly accurate QFFs. The QFFs were used together with second-order perturbation theory (PT) (with proper treatment of Fermi resonances) and variational methods to solve the nuclear Schrödinger equation. The quasilinear nature of the triplet isomer is problematic, and it is concluded that a QFF is not adequate to describe properly all of the fundamental vibrational frequencies and spectroscopic constants (though some constants not dependent on the bending motion are well reproduced by PT). On the other hand, this procedure (a QFF together with either PT or variational methods) leads to highly accurate fundamental vibrational frequencies and spectroscopic constants for the cyclic and bent singlet isomers of HC(2)N. All three isomers possess significant dipole moments, 3.05 D, 3.06 D, and 1.71 D, for the quasilinear triplet, the cyclic singlet, and the bent singlet isomers, respectively. It is concluded that the spectroscopic constants determined for the cyclic and bent singlet isomers are the most accurate available, and it is hoped that these will be useful in the interpretation of high-resolution astronomical observations or laboratory experiments.

  3. Correlation Factors Describing Primary and Spatial Sensations of Sound Fields

    NASA Astrophysics Data System (ADS)

    ANDO, Y.

    2002-11-01

    The theory of subjective preference of the sound field in a concert hall is established based on a model of the human auditory-brain system. The model consists of the autocorrelation function (ACF) mechanism and the interaural cross-correlation function (IACF) mechanism for signals arriving at the two ear entrances, together with the specialization of the human cerebral hemispheres. This theory can be developed to describe primary sensations such as pitch or the missing fundamental, loudness, timbre and, in addition, duration sensation, which is introduced here as a fourth. These four primary sensations may be formulated in terms of the temporal factors extracted from the ACF, associated with the left hemisphere, while spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by the spatial factors extracted from the IACF, associated with the right hemisphere. Any important subjective response to a sound field may thus be described by both temporal and spatial factors.

  4. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  5. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  6. LSM: perceptually accurate line segment merging

    NASA Astrophysics Data System (ADS)

    Hamid, Naila; Khan, Nazar

    2016-11-01

    Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.
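    The abstract does not give the paper's adaptive mergeability criteria, so the sketch below uses fixed angular and spatial tolerances as illustrative placeholders; the function name and merge rule (span the two farthest endpoints) are assumptions, not the published algorithm.

    ```python
    import math

    def merge_if_mergeable(s1, s2, angle_tol=math.radians(5.0), gap_tol=10.0):
        """Try to merge two 2-D segments ((x1, y1), (x2, y2)).
        They merge when nearly parallel (angular proximity) and their
        closest endpoints are near (spatial proximity); the merged
        segment spans the two farthest endpoints. Returns None if the
        pair is not mergeable. Tolerances are fixed placeholders, not
        the paper's adaptive criteria."""
        def angle(s):
            (x1, y1), (x2, y2) = s
            return math.atan2(y2 - y1, x2 - x1) % math.pi  # undirected

        da = abs(angle(s1) - angle(s2))
        if min(da, math.pi - da) > angle_tol:
            return None
        if min(math.dist(p, q) for p in s1 for q in s2) > gap_tol:
            return None
        pts = [*s1, *s2]
        return max(((p, q) for p in pts for q in pts),
                   key=lambda pq: math.dist(*pq))

    # Two collinear fragments with a small gap merge into one segment:
    print(merge_if_mergeable(((0, 0), (10, 0)), ((12, 0), (20, 0))))
    # -> ((0, 0), (20, 0))
    # A perpendicular segment fails the angular-proximity test:
    print(merge_if_mergeable(((0, 0), (10, 0)), ((5, 1), (5, 9))))  # -> None
    ```

    Repeating this pairwise test within each angular/spatial group, until no pair merges, mirrors the iterative grouping the abstract describes.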

  7. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  8. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  9. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  10. Magnetic ranging tool accurately guides replacement well

    SciTech Connect

    Lane, J.B.; Wesson, J.P.

    1992-12-21

    This paper reports on magnetic ranging surveys and directional drilling technology which accurately guided a replacement well bore to intersect a leaking gas storage well with casing damage. The second well bore was then used to pump cement into the original leaking casing shoe. The repair well bore kicked off from the surface hole, bypassed casing damage in the middle of the well, and intersected the damaged well near the casing shoe. The repair well was subsequently completed in the gas storage zone near the original well bore, salvaging the valuable bottom hole location in the reservoir. This method would prevent the loss of storage gas, and it would prevent a potential underground blowout that could permanently damage the integrity of the storage field.

  11. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  12. Describing baseball pitch movement with right-hand rules.

    PubMed

    Bahill, A Terry; Baldwin, David G

    2007-07-01

    The right-hand rules show the direction of the spin-induced deflection of baseball pitches; thus, they explain the movement of the fastball, curveball, slider and screwball. The direction of deflection is described by a pair of right-hand rules commonly used in science and engineering. Our new model for the magnitude of the lateral spin-induced deflection of the ball considers the orientation of the axis of rotation of the ball relative to the direction in which the ball is moving. This paper also describes how models based on somatic metaphors might provide variability in a pitcher's repertoire.
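    The right-hand rule for spin-induced deflection is just the cross product of the spin axis with the velocity: the Magnus force points along omega x v. A minimal sketch (the function name and coordinate convention are assumptions for illustration):

    ```python
    import numpy as np

    def deflection_direction(spin_axis, velocity):
        """Unit vector of the spin-induced (Magnus) deflection.

        The Magnus force on a spinning ball points along omega x v,
        which is what the right-hand rule reads off: curl the fingers
        from the spin axis toward the velocity, the thumb gives the
        deflection direction."""
        d = np.cross(spin_axis, velocity)
        return d / np.linalg.norm(d)

    # A ball thrown toward home plate (+y) with backspin about +x:
    # omega x v points up (+z), the "rising" fastball.
    print(deflection_direction([1.0, 0.0, 0.0], [0.0, 40.0, 0.0]))
    # -> [0. 0. 1.]
    ```

    Flipping the spin axis to -x (topspin) flips the deflection to -z, the downward break of a curveball.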

  13. Slices method to describe ray propagation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Aguilar-Gutiérrez, J. F.; Arroyo Carrasco, M. L.; Iturbe-Castillo, M. D.

    2017-01-01

    We describe an alternative method that numerically calculates the trajectory followed by a light ray in rotationally symmetric inhomogeneous media in the paraxial approximation. The medium is divided into thin parallel slices and a radial quadratic refractive index is considered for each slice. The ABCD matrix is calculated in each slice and the trajectory of the ray is obtained. The method is demonstrated considering media with a refractive index distribution used to describe the human eye lens. The results are compared with the exact numerical solution for each particular distribution. In all cases, a good agreement is obtained between the proposed method and the exact numerical solution.
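    The slices method amounts to chaining per-slice ABCD matrices. A minimal sketch, assuming the standard paraxial GRIN-medium matrix for each slice with index profile n(r) = n0 - 0.5*n2*r**2 (the function names and the homogeneous-limit check are illustrative, not the paper's eye-lens profiles):

    ```python
    import numpy as np

    def grin_slice_abcd(n0, n2, dz):
        """ABCD matrix of a thin slice with radial quadratic index
        n(r) = n0 - 0.5*n2*r**2, paraxial approximation.
        The ray vector is (height r, reduced angle u = n*theta)."""
        g = np.sqrt(n2 / n0)  # focusing strength of the slice (n2 > 0)
        return np.array([[np.cos(g * dz), np.sin(g * dz) / (n0 * g)],
                         [-n0 * g * np.sin(g * dz), np.cos(g * dz)]])

    def trace(slices, r0, theta0):
        """Chain per-slice matrices to follow a ray through the stack.
        slices is a list of (n0, n2, dz) tuples, entrance slice first."""
        ray = np.array([r0, slices[0][0] * theta0])
        for n0, n2, dz in slices:
            ray = grin_slice_abcd(n0, n2, dz) @ ray
        return ray

    # Sanity check in the nearly homogeneous limit (n2 ~ 0): the height
    # should grow linearly, r = r0 + theta0 * total thickness.
    r, u = trace([(1.5, 1e-12, 1.0)] * 10, 0.0, 0.01)
    print(round(float(r), 6))  # -> 0.1
    ```

    For a rotationally symmetric lens model, one would vary (n0, n2) from slice to slice to follow the axial index gradient.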

  14. An autocatalytic kinetic model for describing microbial growth during fermentation.

    PubMed

    Ibarz, Albert; Augusto, Pedro E D

    2015-01-01

    The mathematical modelling of the behaviour of microbial growth is widely desired in order to control, predict and design food and bioproduct processing, stability and safety. This work develops and proposes a new semi-empirical mathematical model, based on autocatalytic kinetics, to describe microbial growth through its biomass concentration. The proposed model was successfully validated using 15 microbial growth patterns, covering the three most important types of microorganisms in food and biotechnological processing (bacteria, yeasts and moulds). Its main advantages and limitations are discussed, as well as the interpretation of its parameters. It is shown that the new model can be used to describe the behaviour of microbial growth.
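    The abstract does not state the model's equation, but the simplest autocatalytic rate law, dX/dt = k*X*(Xmax - X), has a closed-form sigmoidal (logistic) solution; the sketch below illustrates that generic form, not the paper's specific semi-empirical model:

    ```python
    import numpy as np

    def biomass(t, X0, Xmax, k):
        """Closed-form solution of the autocatalytic rate law
        dX/dt = k * X * (Xmax - X): a sigmoidal curve for biomass
        concentration X, starting at X0 with carrying capacity Xmax."""
        t = np.asarray(t, dtype=float)
        return Xmax / (1.0 + (Xmax / X0 - 1.0) * np.exp(-k * Xmax * t))

    print(float(biomass(0.0, 0.1, 5.0, 0.2)))   # -> 0.1 (initial value)
    print(round(float(biomass(50.0, 0.1, 5.0, 0.2)), 3))  # -> 5.0 (plateau)
    ```

    Fitting (X0, Xmax, k) to measured biomass curves is then an ordinary nonlinear least-squares problem.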

  15. A quark transport theory to describe nucleon-nucleon collisions

    NASA Astrophysics Data System (ADS)

    Kalmbach, U.; Vetter, T.; Biró, T. S.; Mosel, U.

    1993-11-01

    On the basis of the Friedberg-Lee model we formulate a semiclassical transport theory to describe the phase-space evolution of nucleon-nucleon collisions on the quark level. The time evolution is given by a Vlasov equation for the quark phase-space distribution and a Klein-Gordon equation for the mean-field describing the nucleon as a soliton bag. The Vlasov equation is solved numerically using an extended test-particle method. We test the confinement mechanism and mean-field effects in (1 + 1)-dimensional simulations.

  16. Motivating operations and terms to describe them: some further refinements.

    PubMed

    Laraway, Sean; Snycerski, Susan; Michael, Jack; Poling, Alan

    2003-01-01

    Over the past decade, behavior analysts have increasingly used the term establishing operation (EO) to refer to environmental events that influence the behavioral effects of operant consequences. Nonetheless, some elements of current terminology regarding EOs may interfere with applied behavior analysts' efforts to predict, control, describe, and understand behavior. The present paper (a) describes how the current conceptualization of the EO is in need of revision, (b) suggests alternative terms, including the generic term motivating operation (MO), and (c) provides examples of MOs and their behavioral effects using articles from the applied behavior analysis literature.

  17. Recursive analytical solution describing artificial satellite motion perturbed by an arbitrary number of zonal terms

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An analytical first order solution has been developed which describes the motion of an artificial satellite perturbed by an arbitrary number of zonal harmonics of the geopotential. A set of recursive relations for the solution, which was deduced from recursive relations of the geopotential, was derived. The method of solution is based on Von-Zeipel's technique applied to a canonical set of two-body elements in the extended phase space which incorporates the true anomaly as a canonical element. The elements are of Poincare type, that is, they are regular for vanishing eccentricities and inclinations. Numerical results show that this solution is accurate to within a few meters after 500 revolutions.
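    The recursive relations of the zonal geopotential rest on the Legendre polynomial recursion, since the zonal terms involve J_n (Re/r)^n P_n(sin(phi)). A minimal sketch of that building block (Bonnet's recursion; not the paper's full recursive solution):

    ```python
    def legendre(nmax, x):
        """Legendre polynomials P_0..P_nmax at x via Bonnet's recursion,
        (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x),
        the kind of recursion the zonal geopotential terms are built from."""
        p = [1.0, x]
        for n in range(1, nmax):
            p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
        return p[:nmax + 1]

    print(legendre(3, 0.5))  # -> [1.0, 0.5, -0.125, -0.4375]
    ```

    Carrying such a recursion through the perturbation solution is what lets an arbitrary number of zonal harmonics be handled without deriving each term by hand.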

  18. 25. VIEW LOOKING EAST THROUGH 'TUNNEL' DESCRIBED ABOVE. RAILCAR LOADING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. VIEW LOOKING EAST THROUGH 'TUNNEL' DESCRIBED ABOVE. RAILCAR LOADING TUBES AT TOP FOREGROUND, SPERRY CORN ELEVATOR COMPLEX AT RIGHT AND ADJOINING WAREHOUSE AT LEFT - Sperry Corn Elevator Complex, Weber Avenue (North side), West of Edison Street, Stockton, San Joaquin County, CA

  19. 23. FISH CONVEYOR Conveyor described in Photo No. 21. A ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    23. FISH CONVEYOR Conveyor described in Photo No. 21. A portion of a second conveyor is seen on the left. Vertical post knocked askew and cracked cement base of the conveyor, attest to the condition of the building. - Hovden Cannery, 886 Cannery Row, Monterey, Monterey County, CA

  20. Describing an "Effective" Principal: Perceptions of the Central Office Leaders

    ERIC Educational Resources Information Center

    Parylo, Oksana; Zepeda, Sally J.

    2014-01-01

    The purpose of this qualitative study was to examine how district leaders of two school systems in the USA describe an effective principal. Membership categorisation analysis revealed that district leaders believed an effective principal had four major categories of characteristics: (1) documented characteristics (having a track record and being a…

  1. A System for Describing and Evaluating Criterion-Referenced Tests.

    ERIC Educational Resources Information Center

    Kosecoff, Jacqueline; And Others

    There are, at present, a number of tests that are labeled criterion referenced. These tests vary considerably in format, design, analysis, and function. In order to provide an efficient and objective procedure for describing, assessing, and comparing these measures, the Criterion Referenced Test Description and Evaluation (CRTDE) rating system was…

  2. Learning Communities and Community Development: Describing the Process.

    ERIC Educational Resources Information Center

    Moore, Allen B.; Brooks, Rusty

    2000-01-01

    Describes features of learning communities: they transform themselves, share wisdom and recognition, bring others in, and share results. Provides the case example of the Upper Savannah River Economic Coalition. Discusses actions of learning communities, barriers to their development, and future potential. (SK)

  3. Describing Acupuncture: A New Challenge for Technical Communicators.

    ERIC Educational Resources Information Center

    Karanikas, Marianthe

    1997-01-01

    Considers acupuncture as an increasingly popular alternative medical therapy, but difficult to describe in technical communication. Notes that traditional Chinese medical explanations of acupuncture are unscientific, and that scientific explanations of acupuncture are inconclusive. Finds that technical communicators must translate acupuncture for…

  4. An Evolving Framework for Describing Student Engagement in Classroom Activities

    ERIC Educational Resources Information Center

    Azevedo, Flavio S.; diSessa, Andrea A.; Sherin, Bruce L.

    2012-01-01

    Student engagement in classroom activities is usually described as a function of factors such as human needs, affect, intention, motivation, interests, identity, and others. We take a different approach and develop a framework that models classroom engagement as a function of students' "conceptual competence" in the "specific content" (e.g., the…

  5. New North American Chrysauginae (Pyralidae) described by Cashatt (1968)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The dissertation entitled “Revision of the Chrysauginae of North America” included new taxa that were never published and do not meet the requirements for availability by the International Code of Nomenclature. Therefore, the following taxa from this dissertation are described and illustrated: Arta ...

  6. Method for describing fractures in subterranean earth formations

    DOEpatents

    Shuck, Lowell Z.

    1977-01-01

    The configuration and directional orientation of natural or induced fractures in subterranean earth formations are described by introducing a liquid explosive into the fracture, detonating the explosive, and then monitoring the resulting acoustic emissions with strategically placed acoustic sensors as the explosion propagates through the fracture at a known rate.

  7. Interpersonal Problems of People Who Describe Themselves as Lonely.

    ERIC Educational Resources Information Center

    French, Rita de Sales; Horowitz, Leonard M.

    1979-01-01

    The complaint "I am lonely" summarizes specific interpersonal difficulties in socializing. The UCLA Loneliness Scale identifies lonely and not-lonely students who described their major interpersonal problems by performing a Q-sort with a standardized set of problems. Results show that lonely people consistently report problems of…

  8. The Prototype as a Conceptual Device for Describing Loneliness.

    ERIC Educational Resources Information Center

    Horowitz, Leonard M.

    A prototype is a theoretical standard against which real people can be evaluated. To derive a prototype of a lonely person, 40 students were asked to describe a lonely person whom they knew. All descriptions were studied by judges who formed a final listing and frequency of all identified features. The 18 features which formed the prototype fell…

  9. Describing Soils: Calibration Tool for Teaching Soil Rupture Resistance

    ERIC Educational Resources Information Center

    Seybold, C. A.; Harms, D. S.; Grossman, R. B.

    2009-01-01

    Rupture resistance is a measure of the strength of a soil to withstand an applied stress or resist deformation. In soil survey, during routine soil descriptions, rupture resistance is described for each horizon or layer in the soil profile. The lower portion of the rupture resistance classes are assigned based on rupture between thumb and…

  10. College Students' Judgment of Others Based on Described Eating Pattern

    ERIC Educational Resources Information Center

    Pearson, Rebecca; Young, Michael

    2008-01-01

    Background: The literature available on attitudes toward eating patterns and people choosing various foods suggests the possible importance of "moral" judgments and desirable personality characteristics associated with the described eating patterns. Purpose: This study was designed to replicate and extend a 1993 study of college students'…

  11. Superintendents Describe Their Leadership Styles: Implications for Practice

    ERIC Educational Resources Information Center

    Bird, James J.; Wang, Chuang

    2013-01-01

    Superintendents from eight southeastern United States school districts self-described their leadership styles across the choices of autocratic, laissez-faire, democratic, situational, servant, or transformational. When faced with this array of choices, the superintendents chose with arguable equitableness, indicating that successful leaders can…

  12. 27 CFR 19.355 - Labels describing the spirits.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Labels describing the spirits. 19.355 Section 19.355 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Processing of Distilled Spirits...

  13. 27 CFR 19.355 - Labels describing the spirits.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Labels describing the spirits. 19.355 Section 19.355 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL DISTILLED SPIRITS PLANTS Processing of Distilled Spirits...

  14. 27 CFR 19.355 - Labels describing the spirits.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Labels describing the spirits. 19.355 Section 19.355 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Processing of Distilled Spirits...

  15. 27 CFR 19.355 - Labels describing the spirits.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Labels describing the spirits. 19.355 Section 19.355 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL DISTILLED SPIRITS PLANTS Processing of Distilled Spirits...

  16. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information can have negative effects, especially when it is delayed: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases the capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is thus helpful for improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
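    The boundedly rational choice rule can be sketched directly; the function name and the tie-breaking convention below are illustrative assumptions, not the paper's simulation code:

    ```python
    import random

    def choose_route(reported, br, rng=random):
        """Boundedly rational two-route choice: if the reported
        travel-time difference is within the threshold br, the traveler
        is indifferent and picks either route with probability 1/2;
        otherwise the route reported as faster is taken (index 0 or 1)."""
        t0, t1 = reported
        if abs(t0 - t1) <= br:
            return rng.randrange(2)
        return 0 if t0 < t1 else 1

    print(choose_route((12.0, 20.0), 5.0))  # -> 0 (difference exceeds BR)
    print(choose_route((12.0, 14.0), 5.0) in (0, 1))  # -> True (indifferent)
    ```

    With BR = 0 every traveler chases the reported best route, reproducing the oscillations the abstract describes; a positive BR damps them.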

  17. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  18. How Accurate Can Enrollment Forecasting Be?

    ERIC Educational Resources Information Center

    Shaw, Robert C.

    1980-01-01

    After briefly describing several methods of projecting enrollments, cites research indicating that the cohort survival method is best used as a relatively short-range forecast where in-migration and out-migration ratios are expected to remain fairly stable or to change at the same rate as they have in the recent past. (Author/IRT)
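    The cohort survival method mentioned above projects each grade forward by the observed grade-to-grade survival ratio. A minimal sketch with hypothetical numbers (function names and data are illustrative):

    ```python
    def survival_ratios(prev, curr):
        """Grade-to-grade survival ratios from two observed years:
        ratio[g] = (this year's grade g+1) / (last year's grade g)."""
        return [curr[g + 1] / prev[g] for g in range(len(prev) - 1)]

    def project(curr, ratios, entering):
        """Next year's enrollment: the entering grade comes from outside
        the school data (e.g. birth counts or in-migration estimates);
        each later grade is this year's previous grade scaled by its
        survival ratio."""
        return [entering] + [curr[g] * ratios[g] for g in range(len(ratios))]

    prev = [100.0, 100.0, 100.0]   # grades 1-3, last year
    curr = [110.0, 95.0, 90.0]     # grades 1-3, this year
    nxt = project(curr, survival_ratios(prev, curr), 120.0)
    print([round(v, 4) for v in nxt])  # -> [120.0, 104.5, 85.5]
    ```

    The method's short-range reliability, noted in the abstract, follows from its core assumption that these ratios stay stable from year to year.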

  19. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  20. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
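    For the simplest case the ACME paper builds on, a single aggregated birth-death process, the truncated steady state can be computed directly and the boundary probability used as a truncation-error indicator. A minimal sketch (detailed balance on a finite buffer; this is a one-dimensional illustration, not the multi-finite-buffer algorithm itself):

    ```python
    import numpy as np

    def birth_death_steady_state(birth, death, buffer_size):
        """Steady state of a birth-death chain truncated to the finite
        buffer of states 0..buffer_size, via detailed balance:
        pi[n+1] / pi[n] = birth(n) / death(n+1)."""
        w = [1.0]
        for n in range(buffer_size):
            w.append(w[-1] * birth(n) / death(n + 1))
        w = np.array(w)
        return w / w.sum()

    # Constant birth rate 1, death rate 2: a geometric probability
    # landscape with ratio 1/2.
    pi = birth_death_steady_state(lambda n: 1.0, lambda n: 2.0, 50)
    print(round(float(pi[0]), 6), round(float(pi[1]), 6))  # -> 0.5 0.25
    # Mass at the buffer boundary ~ truncation error; enlarge the
    # buffer until it is below tolerance.
    print(bool(pi[-1] < 1e-12))  # -> True
    ```

    Pre-computing how fast this boundary mass decays is what lets the buffer sizes be chosen a priori for a given error tolerance.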

  2. A model describing vestibular detection of body sway motion.

    NASA Technical Reports Server (NTRS)

    Nashner, L. M.

    1971-01-01

    An experimental technique was developed which facilitated the formulation of a quantitative model describing vestibular detection of body sway motion in a postural response mode. All cues, except vestibular ones, which gave a subject an indication that he was beginning to sway, were eliminated using a specially designed two-degree-of-freedom platform; body sway was then induced and resulting compensatory responses at the ankle joints measured. Hybrid simulation compared the experimental results with models of the semicircular canals and utricular otolith receptors. Dynamic characteristics of the resulting canal model compared closely with characteristics of models which describe eye movement and subjective responses to body rotational motions. The average threshold level, in the postural response mode, however, was considerably lower. Analysis indicated that the otoliths probably play no role in the initial detection of body sway motion.

  3. A gene feature enumeration approach for describing HLA allele polymorphism.

    PubMed

    Mack, Steven J

    2015-12-01

    HLA genotyping via next generation sequencing (NGS) poses challenges for the use of HLA allele names to analyze and discuss sequence polymorphism. NGS will identify many new synonymous and non-coding HLA sequence variants. Allele names identify the types of nucleotide polymorphism that define an allele (non-synonymous, synonymous and non-coding changes), but do not describe how polymorphism is distributed among the individual features (the flanking untranslated regions, exons and introns) of a gene. Further, HLA alleles cannot be named in the absence of antigen-recognition domain (ARD) encoding exons. Here, a system for describing HLA polymorphism in terms of HLA gene features (GFs) is proposed. This system enumerates the unique nucleotide sequences for each GF in an HLA gene, and records these in a GF enumeration notation that allows both more granular dissection of allele-level HLA polymorphism and the discussion and analysis of GFs in the absence of ARD-encoding exon sequences.

  4. Oculoectodermal syndrome: twentieth described case with new manifestations*

    PubMed Central

    Figueiras, Daniela de Almeida; Leal, Deborah Maria de Castro Barbosa; Kozmhinsky, Valter; Querino, Marina Coutinho Domingues; Regueira, Marina Genesia da Silva; Studart, Maria Gabriela de Morais

    2016-01-01

    Oculoectodermal syndrome is a rare disease characterized by the association of aplasia cutis congenita, epibulbar dermoids, and other abnormalities. This report describes the twentieth case of the disease. We report a 4-year-old female child who presented with the classical features of the syndrome: aplasia cutis congenita and epibulbar dermoids. Our case expands the clinical spectrum of the disease to include: diffuse hyperpigmentation (some following Blaschko's lines); hypopigmented skin areas on the trunk; arachnoid cyst on the right fronto-parietal border; rounded left side of the hippocampus; and dermoid cyst underlying the bulb-medullary transition. Our patient also presented with an infantile hemangioma on the right wrist and a verrucous hemangioma on the left leg, the latter not previously described in the literature.

  5. New model describing the dynamical behaviour of penetration rates

    NASA Astrophysics Data System (ADS)

    Tashiro, Tohru; Minagawa, Hiroe; Chiba, Michiko

    2013-02-01

    We propose a hierarchical logistic equation as a model for the dynamical behaviour of the penetration rate of a prevalent product. Unlike the logistic model, our model incorporates a memory effect: how many people already possessing the product a person who does not yet possess it has met. As an application, we apply the model to iPod sales data and find that it approximates the data much better than the logistic equation.
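
    The paper's hierarchical equation is not reproduced in this abstract; for reference, the baseline logistic model it extends has the closed form sketched below (parameter values are illustrative):

```python
import math

def logistic_penetration(r, t0, times):
    """Penetration rate p(t) under the plain logistic model
    dp/dt = r * p * (1 - p), whose closed-form solution is
    p(t) = 1 / (1 + exp(-r * (t - t0))), with p(t0) = 1/2."""
    return [1.0 / (1.0 + math.exp(-r * (t - t0))) for t in times]

# Adoption rising from near zero toward saturation over 20 time steps
p = logistic_penetration(r=0.5, t0=10, times=range(0, 21))
```

    The memory term proposed in the paper would modify the growth rate based on past encounters, which the memoryless equation above cannot capture.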

  6. An alternative to soil taxonomy for describing key soil characteristics

    USGS Publications Warehouse

    Duniway, Michael C.; Miller, Mark E.; Brown, Joel R.; Toevs, Gordon

    2013-01-01

    is not a simple task. Furthermore, because the US system of soil taxonomy is not applied universally, its utility as a means for effectively describing soil characteristics to readers in other countries is limited. Finally, and most importantly, even at the finest level of soil classification there are often large within-taxa variations in critical properties that can determine ecosystem responses to drivers such as climate and land-use change.

  7. "Car seat dermatitis": a newly described form of contact dermatitis.

    PubMed

    Ghali, Fred E

    2011-01-01

    Over the last several years, our clinic has documented an increasing trend of contact dermatitis presenting in areas that are in direct contact with certain types of car seats composed of a shiny, nylon-like material. Our practice has encountered these cases in both atopic and nonatopic infants, with a seasonal predilection for the warmer months. This brief report highlights some of the key features of this condition and alerts the clinician to this newly described form of contact dermatitis.

  8. Polychaete species (Annelida) described from the Philippine and China Seas.

    PubMed

    Salazar-Vallejo, Sergio I; Carrera-Parra, Luis F; Muir, Alexander I; De León-González, Jesús Angel; Piotrowski, Christina; Sato, Masanori

    2014-07-30

    The South China and Philippine Seas are among the most diverse regions in the Western Pacific. Although there are several local polychaete checklists available, there is none comprising the whole of this region. Presented herein is a comprehensive list of the original names of all polychaete species described from the region. The list contains 1037 species, 345 genera and 60 families; the type locality, type depository, and information regarding synonymy are presented for each species. 

  9. A new study of describing the reliability of GNSS Network RTK positioning with the use of quality indicators

    NASA Astrophysics Data System (ADS)

    Prochniewicz, D.; Szpunar, R.; Walo, J.

    2017-01-01

    The method of precise GNSS positioning using corrections from a network of reference stations, the so-called Network RTK, is currently the most accurate real time kinematic positioning method. The reliability of this method is largely dependent on the accuracy of determination of network ionospheric and geometric corrections (taking into account the tropospheric refraction and orbit errors). There are many indexes describing the reliability of Network RTK positioning with respect to the accuracy of modelling these errors. The so-called solution quality indicators are used for this purpose. They are parameters determined in the central network pre-processing which provide quantitative information regarding the predicted reliability of the positioning in an area encompassed by a network of reference stations. Unfortunately, their interpretation is hindered due to the lack of connection with basic parameters describing precise positioning quality, i.e. the correctness of the carrier phase ambiguity resolution and the accuracy of the rover position. This study presents a new approach to the design of quality indicators for the Network RTK method, based on a quantitative description of two parameters—solution accuracy and availability. The presented method is based on the existing parameters describing the probability of the correct fixing of ambiguities and the estimation of the fixed baseline solution accuracy. However, a stochastic model of observation, taking into account the accuracy of network corrections, is used for these calculations. The proposed method enables a full account of all parameters affecting the reliability of positioning, which is not possible with the currently applied methods. Numerical tests of the new indicators carried out for a part of the regional reference stations network confirmed the effectiveness of this approach. The proposed indicators provide a much clearer identification of the time periods for which the reliability of the

  10. Methods for accurate analysis of galaxy clustering on non-linear scales

    NASA Astrophysics Data System (ADS)

    Vakili, Mohammadjavad

    2017-01-01

    Measurements of galaxy clustering with low-redshift galaxy surveys provide a sensitive probe of cosmology and the growth of structure. Parameter inference with galaxy clustering relies on computation of likelihood functions, which requires estimation of the covariance matrix of the observables used in our analyses. Therefore, accurate estimation of the covariance matrices serves as one of the key ingredients in precise cosmological parameter inference. This requires generation of a large number of independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast method based on low-resolution N-body simulations and an approximate galaxy biasing technique for generating mock catalogs. Using a reference catalog that was created using the high resolution Big-MultiDark N-body simulation, we show that our method is able to produce catalogs that describe galaxy clustering at percent-level accuracy down to highly non-linear scales in both real-space and redshift-space. In most large-scale structure analyses, modeling of galaxy bias on non-linear scales is performed assuming a halo model. Clustering of dark matter halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies assume that halo mass alone is sufficient in characterizing the connection between galaxies and halos. However, modeling of galaxy bias can face systematic effects if the number of galaxies is correlated with other halo properties. Using the Small MultiDark-Planck high resolution N-body simulation and the clustering measurements of the Sloan Digital Sky Survey DR7 main galaxy sample, we investigate the extent to which the dependence of galaxy bias on halo concentration can improve our modeling of galaxy clustering.
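
    The covariance estimation step referred to above reduces, in its simplest form, to the sample covariance over independent mocks; a minimal sketch with synthetic data standing in for measured clustering vectors (the array shapes and values are illustrative):

```python
import numpy as np

def mock_covariance(mocks):
    """Unbiased sample covariance of an observable vector estimated from
    independent mock catalogs. `mocks` has shape (n_mocks, n_bins):
    one measured clustering vector per mock catalog."""
    mean = mocks.mean(axis=0)
    diff = mocks - mean
    return diff.T @ diff / (mocks.shape[0] - 1)

rng = np.random.default_rng(1)
# 500 synthetic mocks, each yielding an 8-bin clustering measurement
mocks = rng.normal(loc=1.0, scale=0.1, size=(500, 8))
C = mock_covariance(mocks)
```

    In a likelihood analysis the inverse of such an estimated covariance is typically debiased, e.g. with the Hartlap factor (n_mocks - n_bins - 2) / (n_mocks - 1), which is one reason large numbers of independent mocks are needed.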

  11. A fractional derivative model to describe arterial viscoelasticity.

    PubMed

    Craiem, Damian; Armentano, Ricardo L

    2007-01-01

    Arterial viscoelasticity can be described with a complex modulus (E*) in the frequency domain. In arteries, E* presents a power-law response with a plateau for higher frequencies. Constitutive models based on a combination of purely elastic and viscous elements can be represented with integer order differential equations but show several limitations. Recently, fractional derivative models with fewer parameters have proven to be efficient in describing rheological tissues. A new element, called "spring-pot", that interpolates between springs and dashpots is incorporated. Starting with a Voigt model, we proposed two fractional alternative models with one and two spring-pots. The three models were tested in an anesthetized sheep in a control state and during smooth muscle activation. A least squares method was used to fit E*. Local activation induced a vascular constriction with no pressure changes. The E* results confirmed the steep increase from static to dynamic values and a plateau in the range 2-30 Hz, coherent with fractional model predictions. Activation increased E*, affecting its real and imaginary parts separately. Only the model with two spring-pots correctly followed this behavior with the best performance in terms of least squares errors. In a context where activation separately modifies E*, this alternative model should be considered in describing arterial viscoelasticity in vivo.
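
    The "spring-pot" element has a complex modulus proportional to (iω)^α, so it interpolates between a spring (α = 0) and a dashpot (α = 1). Below is a sketch of the simplest fractional Voigt-type model with one spring-pot; the parameter values are illustrative, not fitted to the sheep data:

```python
import numpy as np

def springpot_modulus(omega, E0, eta, alpha):
    """Complex modulus E*(w) = E0 + eta * (i*w)**alpha of a spring in
    parallel with a spring-pot. alpha=0 recovers a pure spring offset,
    alpha=1 the classical Voigt (spring + dashpot) model."""
    return E0 + eta * (1j * omega) ** alpha

omega = 2 * np.pi * np.logspace(0, 2, 50)   # angular frequencies, 1-100 Hz
E = springpot_modulus(omega, E0=1.0e5, eta=2.0e4, alpha=0.3)
storage, loss = E.real, E.imag              # elastic and viscous parts
```

    With a small fractional order alpha the modulus grows only as omega**alpha, producing the slowly rising, plateau-like frequency response the abstract describes.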

  12. How to describe genes: enlightenment from the quaternary number system.

    PubMed

    Ma, Bin-Guang

    2007-01-01

    As an open problem, computational gene identification has been widely studied, and many gene finders (software) are available today. However, little attention has been given to the problem of describing the common features of known genes in databanks to transform raw data into human-understandable knowledge. In this paper, we draw attention to the task of describing genes and propose a trial implementation by treating DNA sequences as quaternary numbers. Under such a treatment, the common features of genes can be represented by a "position weight function", the core concept for a number system. In principle, the "position weight function" can be any real-valued function. In this paper, by approximating the function using trigonometric functions, some characteristic parameters indicating single nucleotide periodicities were obtained for the genome of the bacterium Escherichia coli K12 and that of the eukaryote yeast. As a byproduct of this approach, a single-nucleotide-level measure is derived that complements codon-based indexes in describing the coding quality and expression level of an open reading frame (ORF). The ideas presented here have the potential to become a general methodology for biological sequence analysis.
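
    Reading a DNA sequence as a quaternary number means assigning each base a digit 0-3 and weighting it by position; a minimal sketch of the idea (the digit assignment and weight function here are illustrative, not the paper's):

```python
# Map bases to quaternary digits (this particular assignment is arbitrary)
DIGITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def weighted_value(seq, weight):
    """Evaluate a DNA sequence under an arbitrary position weight function
    w(i), generalizing the ordinary quaternary place value w(i) = 4**i."""
    return sum(weight(i) * DIGITS[b] for i, b in enumerate(seq))

# With w(i) = 4**i this is plain quaternary place value:
value = weighted_value("ACGT", lambda i: 4 ** i)   # 0*1 + 1*4 + 2*16 + 3*64
```

    Replacing the exponential weight with trigonometric functions, as the paper does, turns the same machinery into a probe of single-nucleotide periodicities.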

  13. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even at a low facial image resolution (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.

  14. Accurate measurement of streamwise vortices using dual-plane PIV

    NASA Astrophysics Data System (ADS)

    Waldman, Rye M.; Breuer, Kenneth S.

    2012-11-01

    Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions based on said measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight are major challenges for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominately out-of-plane flow that requires thick laser sheets and short inter-frame times, which increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we do a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurements of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers.

  15. CT Scan Method Accurately Assesses Humeral Head Retroversion

    PubMed Central

    Boileau, P.; Mazzoleni, N.; Walch, G.; Urien, J. P.

    2008-01-01

    Humeral head retroversion is not well described, and the literature is controversial regarding the accuracy of measurement methods and the range of normal values. We therefore determined normal humeral head retroversion and assessed the measurement methods. We measured retroversion in 65 cadaveric humeri, including 52 paired specimens, using four methods: radiographic, computed tomography (CT) scan, computer-assisted, and direct methods. We also assessed the distance between the humeral head central axis and the bicipital groove. CT scan methods accurately measure humeral head retroversion, while radiographic methods do not. The retroversion was 17.9° with respect to the transepicondylar axis and 21.5° with respect to the trochlear tangent axis. The difference between the right and left humeri was 8.9°. The distance between the central axis of the humeral head and the bicipital groove was 7.0 mm and was consistent between right and left humeri. Humeral head retroversion may be most accurately obtained using the patient's own anatomic landmarks or, if these are not identifiable, using retroversion as measured by those landmarks on the contralateral side or the bicipital groove. PMID:18264854
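
    Retroversion is an angle between two axes measured in a plane perpendicular to the humeral shaft; a minimal sketch of that computation (the plane convention and the example vectors are illustrative, not the paper's data):

```python
import math

def retroversion_angle(head_axis, epicondylar_axis):
    """Signed angle in degrees between two axes projected into the plane
    perpendicular to the humeral shaft (here taken as the xy-plane).
    Axes are (x, y, z) direction vectors."""
    ax, ay = head_axis[0], head_axis[1]
    bx, by = epicondylar_axis[0], epicondylar_axis[1]
    ang = math.degrees(math.atan2(ay, ax) - math.atan2(by, bx))
    return (ang + 180.0) % 360.0 - 180.0   # wrap into (-180, 180]

# Head axis tilted ~18 degrees posterior to the transepicondylar axis
angle = retroversion_angle((1.0, -0.33, 0.0), (1.0, 0.0, 0.0))
```

    Projecting both axes into a common plane before taking the angle is what makes the measurement reproducible across imaging modalities.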

  16. A more accurate nonequilibrium air radiation code - NEQAIR second generation

    NASA Technical Reports Server (NTRS)

    Moreau, Stephane; Laux, Christophe O.; Chapman, Dean R.; Maccormack, Robert W.

    1992-01-01

    Two experiments, one an equilibrium flow in a plasma torch at Stanford, the other a nonequilibrium flow in a SDIO/IST Bow-Shock-Ultra-Violet missile flight, have provided the basis for modifying, enhancing, and testing the well-known radiation code, NEQAIR. The original code, herein termed NEQAIR1, lacked computational efficiency, accurate data for some species and the flexibility to handle a variety of species. The modified code, herein termed NEQAIR2, incorporates recent findings in the spectroscopic and radiation models. It can handle any number of species and radiative bands in a gas whose thermodynamic state can be described by up to four temperatures. It provides a new capability of computing very fine spectra in a reasonable CPU time, while including transport phenomena along the line of sight and the characteristics of instruments that were used in the measurements. Such a new tool should allow more accurate testing and diagnosis of the different physical models used in numerical simulations of radiating, low density, high energy flows.

  17. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.

  18. Does a pneumotach accurately characterize voice function?

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were measured, with and without the pneumotach in place, and differences noted. The authors acknowledge support of NIH Grant 2R01DC005642-10A1.

  19. Accurate thermoplasmonic simulation of metallic nanoparticles

    NASA Astrophysics Data System (ADS)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using the VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs at comparable accuracy, especially when many incident fields are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, the beam direction, and the light wavelength.

  20. Accurate method for computing correlated color temperature.

    PubMed

    Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier

    2016-06-27

    For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives highly accurate predictions below 0.0012 K for light sources with CCTs ranging from 500 K to 106 K.
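
    A Newton step on a scalar objective uses exactly the first and second derivatives the authors derive. The sketch below shows only the generic iteration, with a stand-in quadratic objective in place of the actual CIE distance-to-Planckian-locus function:

```python
def newton_minimize(f1, f2, t0, iters=20):
    """Minimize a smooth scalar objective by Newton's method, given its
    first derivative f1 and second derivative f2. Each step solves the
    local quadratic model: t <- t - f'(t) / f''(t)."""
    t = t0
    for _ in range(iters):
        t -= f1(t) / f2(t)
    return t

# Stand-in objective f(T) = (T - 6504)**2, minimized at T = 6504 K;
# Newton converges in one step on a quadratic.
t_min = newton_minimize(lambda t: 2.0 * (t - 6504.0), lambda t: 2.0, t0=5000.0)
```

    In the paper the starting value t0 comes from Robertson's method, and f1, f2 are the explicit derivatives of the chromaticity-distance objective.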

  1. Accurate Theoretical Thermochemistry for Fluoroethyl Radicals.

    PubMed

    Ganyecz, Ádám; Kállay, Mihály; Csontos, József

    2017-02-09

    An accurate coupled-cluster (CC) based model chemistry was applied to calculate reliable thermochemical quantities for hydrofluorocarbon derivatives including the radicals 1-fluoroethyl (CH3-CHF), 1,1-difluoroethyl (CH3-CF2), 2-fluoroethyl (CH2F-CH2), 1,2-difluoroethyl (CH2F-CHF), 2,2-difluoroethyl (CHF2-CH2), 2,2,2-trifluoroethyl (CF3-CH2), 1,2,2,2-tetrafluoroethyl (CF3-CHF), and pentafluoroethyl (CF3-CF2). The model chemistry used includes iterative triple and perturbative quadruple excitations in CC theory, as well as scalar relativistic and diagonal Born-Oppenheimer corrections. To obtain heat of formation values with better than chemical accuracy, perturbative quadruple excitations and scalar relativistic corrections were indispensable. Their contributions to the heats of formation steadily increase with the number of fluorine atoms in the radical, reaching 10 kJ/mol for CF3-CF2. When discrepancies were found between the experimental values and ours, it was always possible to resolve the issue by recalculating the experimental result with currently recommended auxiliary data. For each radical studied, this work delivers the best available heat of formation and entropy data.

  2. Accurate methods for large molecular systems.

    PubMed

    Gordon, Mark S; Mullin, Jonathan M; Pruitt, Spencer R; Roskop, Luke B; Slipchenko, Lyudmila V; Boatz, Jerry A

    2009-07-23

    Three exciting new methods that address the accurate prediction of processes and properties of large molecular systems are discussed. The systematic fragmentation method (SFM) and the fragment molecular orbital (FMO) method both decompose a large molecular system (e.g., protein, liquid, zeolite) into small subunits (fragments) in very different ways that are designed to both retain the high accuracy of the chosen quantum mechanical level of theory while greatly reducing the demands on computational time and resources. Each of these methods is inherently scalable and is therefore eminently capable of taking advantage of massively parallel computer hardware while retaining the accuracy of the corresponding electronic structure method from which it is derived. The effective fragment potential (EFP) method is a sophisticated approach for the prediction of nonbonded and intermolecular interactions. Therefore, the EFP method provides a way to further reduce the computational effort while retaining accuracy by treating the far-field interactions in place of the full electronic structure method. The performance of the methods is demonstrated using applications to several systems, including benzene dimer, small organic species, pieces of the alpha helix, water, and ionic liquids.

  3. Accurate equilibrium structures for piperidine and cyclohexane.

    PubMed

    Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter

    2015-03-05

    Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.

  4. Accurate, reproducible measurement of blood pressure.

    PubMed Central

    Campbell, N R; Chockalingam, A; Fodor, J G; McKay, D W

    1990-01-01

    The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine consumption, smoking and physical exertion within half an hour before measurement. The use of standardized techniques to measure blood pressure will help to avoid large systematic errors. Poor technique can account for differences in readings of more than 15 mm Hg and ultimately misdiagnosis. Most of the recommended procedures are simple and, when routinely incorporated into clinical practice, require little additional time. The equipment must be appropriate and in good condition. Physicians should have a suitable selection of cuff sizes readily available; the use of the correct cuff size is essential to minimize systematic errors in blood pressure measurement. Semiannual calibration of aneroid sphygmomanometers and annual inspection of mercury sphygmomanometers and blood pressure cuffs are recommended. We review the methods recommended for measuring blood pressure and discuss the factors known to produce large differences in blood pressure readings. PMID:2192791

  5. Fast and accurate exhaled breath ammonia measurement.

    PubMed

    Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H

    2014-06-11

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz enhanced photoacoustic spectroscopy (QEPAS) with a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations.

  6. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron induced fission of various actinides.

  7. Flexible receiver accurately tracks multiple threats

    NASA Astrophysics Data System (ADS)

    Browne, Jack

    1988-09-01

    The design and performance of a broadband (0.03-40-GHz) receiver system for electronic-surveillance applications are described. The complete superheterodyne receiver system comprises a control and display unit, a scan display, an equipment frame, and a choice of readily interchangeable RF tuner and demodulator modules with narrow or broad instantaneous bandwidths and BITE capability. Photographs, block diagrams, and tables listing the performance parameters of the modules are provided.

  8. A fast and accurate FPGA based QRS detection system.

    PubMed

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of the eight previously detected peaks. The hardware design detects beats with an accuracy in excess of 96% when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach.
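
    The abstract's thresholding rule, in which the next detection threshold is derived from the median of the eight most recently detected peaks, can be sketched in software. The 0.5 scaling factor and the assumption of an already preprocessed (filtered, rectified) signal are illustrative choices, not details taken from the paper:

```python
from collections import deque
from statistics import median

def detect_qrs(samples, init_threshold=0.5):
    """Peak detection with a median-of-eight adaptive threshold.

    `samples` is assumed to be a preprocessed ECG signal; the 0.5
    scaling factor below is illustrative, not the paper's value.
    """
    recent_peaks = deque(maxlen=8)   # heights of the last eight detected peaks
    threshold = init_threshold
    beats = []
    for i in range(1, len(samples) - 1):
        s = samples[i]
        # a local maximum above the adaptive threshold counts as a beat
        if s > threshold and s >= samples[i - 1] and s > samples[i + 1]:
            beats.append(i)
            recent_peaks.append(s)
            # next threshold derived from the median of recent peak heights
            threshold = 0.5 * median(recent_peaks)
    return beats
```

    Using the median rather than the mean makes the threshold robust to a single spuriously large or small peak, which suits a fixed-point hardware implementation.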

  9. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    NASA Astrophysics Data System (ADS)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
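
    As a rough illustration of such a time-sequence model, the sketch below derates instantaneous DC power for soiling and passes it through a load-dependent inverter efficiency curve before summing AC energy. The efficiency curve and all numbers are hypothetical stand-ins, not the model described in the paper:

```python
def inverter_efficiency(p_dc, p_rated):
    """Illustrative load-dependent efficiency curve (hypothetical numbers):
    poor at low load, approaching ~94% near rated power."""
    load = min(p_dc / p_rated, 1.0)
    if load <= 0.0:
        return 0.0
    return 0.96 * load / (load + 0.02)  # simple saturating curve

def annual_energy(dni_series, module_area, module_eff, soiling, p_rated):
    """Sum AC energy (Wh) over an hourly series of direct normal
    irradiance (W/m^2); `soiling` is a fractional derate."""
    total_wh = 0.0
    for dni in dni_series:
        p_dc = dni * module_area * module_eff * (1.0 - soiling)
        p_dc = min(p_dc, p_rated)               # inverter clipping
        total_wh += p_dc * inverter_efficiency(p_dc, p_rated)
    return total_wh
```

    Even this toy version shows the system-level interaction the abstract describes: an undersized inverter clips peak power, while an oversized one spends more hours in the low-efficiency part of its curve.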

  10. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine have benefited human beings, but they have also brought about an explosive growth in the pharmaceuticals currently on the market. In daily life, pharmaceuticals can confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly on the basis of the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method reaches an accuracy rate of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  11. Method for Accurate Surface Temperature Measurements During Fast Induction Heating

    NASA Astrophysics Data System (ADS)

    Larregain, Benjamin; Vanderesse, Nicolas; Bridier, Florent; Bocher, Philippe; Arkinson, Patrick

    2013-07-01

    A robust method is proposed for the measurement of surface temperature fields during induction heating. It is based on the original coupling of temperature-indicating lacquers and a high-speed camera system. Image analysis tools have been implemented to automatically extract the temporal evolution of isotherms. This method was applied to the fast induction treatment of a 4340 steel spur gear, allowing the full history of surface isotherms to be accurately documented for a sequential heating, i.e., a medium frequency preheating followed by a high frequency final heating. Three isotherms, i.e., 704, 816, and 927°C, were acquired every 0.3 ms with a spatial resolution of 0.04 mm per pixel. The information provided by the method is described and discussed. Finally, the transformation temperature Ac1 is linked to the temperature on specific locations of the gear tooth.

  12. Accurate and efficient maximal ball algorithm for pore network extraction

    NASA Astrophysics Data System (ADS)

    Arand, Frederick; Hesser, Jürgen

    2017-04-01

    The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.
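
    A minimal illustration of the maximal-ball idea might help: compute a Euclidean distance field over the pore space and take its local maxima as inscribed-ball (pore) centres, with the distance value as the ball radius. This toy 2D version uses a brute-force distance transform and includes none of the paper's efficiency or subvoxel-accuracy improvements:

```python
import math

def distance_transform(grid):
    """Naive Euclidean distance from each pore cell (1) to the nearest
    solid cell (0). Suitable only for tiny illustrative grids."""
    rows, cols = len(grid), len(grid[0])
    solids = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 0]
    dist = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                dist[r][c] = min(math.hypot(r - sr, c - sc) for sr, sc in solids)
    return dist

def maximal_balls(dist):
    """Local maxima of the distance field are candidate pore centres;
    the distance value there is the maximal inscribed ball radius."""
    rows, cols = len(dist), len(dist[0])
    centres = []
    for r in range(rows):
        for c in range(cols):
            if dist[r][c] == 0.0:
                continue
            neighbours = [dist[rr][cc]
                          for rr in range(max(r - 1, 0), min(r + 2, rows))
                          for cc in range(max(c - 1, 0), min(c + 2, cols))
                          if (rr, cc) != (r, c)]
            if all(dist[r][c] >= n for n in neighbours):
                centres.append((r, c, dist[r][c]))
    return centres
```

    A production MB implementation then clusters overlapping balls into pores and identifies throats where neighbouring clusters meet; the paper's contribution is doing this on a subvoxel-accurate distance field with lean data structures.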

  13. Whipple Observations

    NASA Astrophysics Data System (ADS)

    Trangsrud, A.

    2015-12-01

    The solar system that we know today was shaped dramatically by events in its dynamic formative years. These events left their signatures at the distant frontier of the solar system, in the small planetesimal relics that populate the vast Oort Cloud, the Scattered Disk, and the Kuiper Belt. To peer into the history and evolution of our solar system, the Whipple mission will survey small bodies in the large volume that begins beyond the orbit of Neptune and extends out to thousands of AU. Whipple detects these objects when they occult distant stars. The distance and size of the occulting object is reconstructed from well-understood diffraction effects in the object's shadow. Whipple will observe tens of thousands of stars simultaneously with high observing efficiency, accumulating roughly a billion "star-hours" of observations over its mission life. Here we describe the Whipple observing strategy, including target selection and scheduling.
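
    The size of the diffraction effects such occultation surveys exploit is set by the Fresnel scale, F = sqrt(λd/2). A quick estimate for a trans-Neptunian occulter (with an illustrative wavelength and distance, not mission parameters) shows the shadow structure is of kilometre order:

```python
import math

AU_M = 1.496e11  # one astronomical unit in metres

def fresnel_scale(wavelength_m, distance_m):
    """Fresnel scale F = sqrt(lambda * d / 2): the characteristic size of
    diffraction fringes in an occultation shadow. Occulters comparable to
    or smaller than F cast strongly diffractive, not geometric, shadows."""
    return math.sqrt(wavelength_m * distance_m / 2.0)

# Kuiper Belt object at 43 AU observed at 600 nm (illustrative values)
f = fresnel_scale(600e-9, 43 * AU_M)
print(f"Fresnel scale: {f:.0f} m")  # on the order of a kilometre
```

    Because F is comparable to the sizes of the bodies being surveyed, the diffraction pattern itself encodes both the occulter's size and its distance, which is what makes the reconstruction described in the abstract possible.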

  14. Experimental verification of a model describing the intensity distribution from a single mode optical fiber

    SciTech Connect

    Moro, Erik A; Puckett, Anthony D; Todd, Michael D

    2011-01-24

    The intensity distribution of a transmission from a single mode optical fiber is often approximated using a Gaussian-shaped curve. While this approximation is useful for some applications such as fiber alignment, it does not accurately describe transmission behavior off the axis of propagation. In this paper, another model is presented, which describes the intensity distribution of the transmission from a single mode optical fiber. A simple experimental setup is used to verify the model's accuracy, and agreement between model and experiment is established both on and off the axis of propagation. Displacement sensor designs based on the extrinsic optical lever architecture are presented. The behavior of the transmission off the axis of propagation dictates the performance of sensor architectures where large lateral offsets (25-1500 µm) exist between transmitting and receiving fibers. The practical implications of modeling accuracy over this lateral offset region are discussed as they relate to the development of high-performance intensity modulated optical displacement sensors. In particular, the sensitivity, linearity, resolution, and displacement range of a sensor are functions of the relative positioning of the sensor's transmitting and receiving fibers. Sensor architectures with high combinations of sensitivity and displacement range are discussed. It is concluded that the utility of the accurate model is in its predictive capability and that this research could lead to an improved methodology for high-performance sensor design.
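
    For contrast with the improved model, the commonly used Gaussian-beam approximation mentioned above can be written down directly. The mode-field radius and wavelength below are typical telecom values chosen for illustration, not parameters from the paper:

```python
import math

def gaussian_intensity(r_um, z_um, w0_um=2.5, wavelength_um=1.55):
    """Gaussian-beam approximation of the normalized intensity at axial
    distance z from the fiber end face and lateral offset r (microns).
    Mode-field radius w0 and wavelength are illustrative telecom values."""
    z_r = math.pi * w0_um ** 2 / wavelength_um      # Rayleigh range
    w = w0_um * math.sqrt(1.0 + (z_um / z_r) ** 2)  # beam radius at z
    return (w0_um / w) ** 2 * math.exp(-2.0 * r_um ** 2 / w ** 2)
```

    The paper's point is that a profile of this form becomes inaccurate at the large lateral offsets (25-1500 µm) at which lever-type displacement sensors operate, which is why a better off-axis model matters for sensor design.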

  15. HERMES: A Model to Describe Deformation, Burning, Explosion, and Detonation

    SciTech Connect

    Reaugh, J E

    2011-11-22

    HERMES (High Explosive Response to MEchanical Stimulus) was developed to fill the need for a model to describe an explosive response of the type described as BVR (Burn to Violent Response) or HEVR (High Explosive Violent Response). Characteristically this response leaves a substantial amount of explosive unconsumed, the time to reaction is long, and the peak pressure developed is low. In contrast, detonations characteristically consume all explosive present, the time to reaction is short, and peak pressures are high. However, most of the previous models to describe explosive response were models for detonation. The earliest models to describe the response of explosives to mechanical stimulus in computer simulations were applied to intentional detonation (performance) of nearly ideal explosives. In this case, an ideal explosive is one with a vanishingly small reaction zone. A detonation is supersonic with respect to the undetonated explosive (reactant). The reactant cannot respond to the pressure of the detonation before the detonation front arrives, so the precise compressibility of the reactant does not matter. Further, the mesh sizes that were practical for the computer resources then available were large with respect to the reaction zone. As a result, methods then used to model detonations, known as β-burn or program burn, were not intended to resolve the structure of the reaction zone. Instead, these methods spread the detonation front over a few finite-difference zones, in the same spirit that artificial viscosity is used to spread the shock front in inert materials over a few finite-difference zones. These methods are still widely used when the structure of the reaction zone and the build-up to detonation are unimportant. Later detonation models resolved the reaction zone. These models were applied both to performance, particularly as it is affected by the size of the charge, and to situations in which the stimulus was less than that needed for reliable

  16. Describing, Analysing and Judging Language Codes in Cinematic Discourse

    ERIC Educational Resources Information Center

    Richardson, Kay; Queen, Robin

    2012-01-01

    In this short commentary piece, the authors stand back from many of the specific details in the seven papers which constitute the special issue, and offer some observations which attempt to identify and assess points of similarity and difference amongst them, under a number of different general headings. To the extent that the "sociolinguistics of…

  17. Describe Your Feelings: Body Illusion Related to Alexithymia in Adolescence

    PubMed Central

    Georgiou, Eleana; Mai, Sandra; Pollatos, Olga

    2016-01-01

    Objective: Having access to bodily signals is known to be crucial for differentiating the self from others and coping with negative feelings. The interplay between bodily and emotional processes develops in adolescence, when vulnerability is high and negative affect states often occur that can hamper the integration of bodily input into the self. The aim of the present study in healthy adolescents was to examine whether disturbed emotional awareness, described by the alexithymia construct, could trigger a higher malleability of the sense of body-ownership. Methods: Fifty-four healthy adolescents aged 12 to 17 years participated in this study. The Strengths and Difficulties Questionnaire (SDQ) and the Screening psychischer Störungen im Jugendalter were used to assess emotional distress and conduct problems. Alexithymia was assessed by the TAS-20. The rubber hand illusion was implemented to examine the malleability of body-ownership. Results: A higher body illusion was found to be connected with “difficulties in describing feelings”. Moreover, a higher degree of self-reported conduct and emotional problems as assessed by the SDQ was associated with a more pronounced body illusion. Further findings revealed an association between emotional distress and the alexithymia subscales “difficulties in identifying feelings” and “difficulties in describing feelings”. Conclusion: Our findings emphasize a close link between the sense of body-ownership and emotional awareness as assessed by emotional facets of the alexithymic trait. We suggest that in adolescents with higher malleability of body-ownership, a vicious circle might occur in which affect and the integration of different proprioceptive signals regarding the body become increasingly entangled. PMID:27840618

  18. A Physiology-Based Model Describing Heterogeneity in Glucose Metabolism

    PubMed Central

    Maas, Anne H.; Rozendaal, Yvonne J. W.; van Pul, Carola; Hilbers, Peter A. J.; Cottaar, Ward J.; Haak, Harm R.; van Riel, Natal A. W.

    2014-01-01

    Background: Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES shall provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. Method: The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. Results: All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters makes it possible to describe heterogeneity in the data and shows the capabilities of this model for individualization. Conclusion: We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. PMID:25526760
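
    The E-DES equations themselves are not reproduced in the abstract. As a hedged sketch of the general approach, a compartment model integrated as coupled ODEs, with insulin-sensitivity and secretion parameters available for individualization, a toy OGTT simulation might look like this; all rate constants are invented for illustration and are not the fitted E-DES values:

```python
def simulate_ogtt(minutes=120, dt=0.1):
    """Toy gut/glucose/insulin compartment model (glucose in mmol/L,
    insulin in mU/L), integrated with forward Euler. All constants are
    illustrative, NOT the fitted E-DES parameters."""
    g_basal, i_basal = 5.0, 10.0          # fasting glucose and insulin
    k_abs, k_clear = 0.05, 0.01           # gut absorption / basal clearance
    s_i, k_sec, k_deg = 0.005, 0.05, 0.1  # insulin sensitivity / secretion / decay
    g, i, gut = g_basal, i_basal, 75.0    # gut compartment holds the oral load
    trace = []
    for n in range(int(minutes / dt) + 1):
        if n % int(10 / dt) == 0:         # record every 10 minutes
            trace.append((n * dt, round(g, 2)))
        ra = k_abs * gut                  # rate of glucose appearance from gut
        gut -= ra * dt
        dg = 0.01 * ra - k_clear * (g - g_basal) - s_i * (i - i_basal) * g
        di = k_sec * max(g - g_basal, 0.0) - k_deg * (i - i_basal)
        g += dg * dt
        i += di * dt
    return trace
```

    Individualization in such a model corresponds to changing a small number of parameters (here `s_i` and `k_sec`, standing in for insulin resistance and β-cell function) and re-running the simulation.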

  19. A proposal to describe a phenomenon of expanding language

    NASA Astrophysics Data System (ADS)

    Swietorzecka, Kordula

    Changes of knowledge, convictions or beliefs are studied within the framework of so-called epistemic logic. Various descriptions have been proposed of the process (or its results) by which a so-called agent may introduce changes into a set of sentences that he has already adopted as the content of his knowledge, convictions or beliefs (the case of many agents is also considered). In the present paper we are interested in the changeability of the agent's language itself, which is in its own right independent of the changes already mentioned. Modern epistemic formalizations assume that the agent uses a fixed (and so, we could say, static) language in which he expresses his various opinions, which may change. Our interest is in simulating a situation in which a language is extended by adding new expressions that were previously unknown to the agent, so that he could not even consider them as subjects of his opinions. Such a phenomenon actually occurs in both natural and scientific languages: consider the expansion of a language in the process of learning, or as new data about some described domain are acquired. We propose a simple idealization of the extension of a sentential language used by one agent. The language is treated as a family of so-called n-languages, which are given an epistemic interpretation. The proposed semantics enables us to distinguish two types of change: those that occur because the agent's convictions about the logical values of some n-sentences change, described by the one-place operator C, read "it changes that", and those that consist in raising the level of the n-language by adding new expressions to it. The second type of change, symbolized by the variable G, may also be considered independently of the first. The logical framework of our considerations was originally used to describe the Aristotelian theory of substantial changes; here we apply it in epistemology.

  20. A new taxonomy for describing and defining adherence to medications

    PubMed Central

    Vrijens, Bernard; De Geest, Sabina; Hughes, Dyfrig A; Przemyslaw, Kardas; Demonceau, Jenny; Ruppar, Todd; Dobbels, Fabienne; Fargher, Emily; Morrison, Valerie; Lewek, Pawel; Matyjaszczyk, Michal; Mshelia, Comfort; Clyne, Wendy; Aronson, Jeffrey K; Urquhart, J

    2012-01-01

    Interest in patient adherence has increased in recent years, with a growing literature that shows the pervasiveness of poor adherence to appropriately prescribed medications. However, four decades of adherence research has not resulted in uniformity in the terminology used to describe deviations from prescribed therapies. The aim of this review was to propose a new taxonomy, in which adherence to medications is conceptualized, based on behavioural and pharmacological science, and which will support quantifiable parameters. A systematic literature review was performed using MEDLINE, EMBASE, CINAHL, the Cochrane Library and PsycINFO from database inception to 1 April 2009. The objective was to identify the different conceptual approaches to adherence research. Definitions were analyzed according to time and methodological perspectives. A taxonomic approach was subsequently derived, evaluated and discussed with international experts. More than 10 different terms describing medication-taking behaviour were identified through the literature review, often with differing meanings. The conceptual foundation for a new, transparent taxonomy relies on three elements, which make a clear distinction between processes that describe actions through established routines (‘Adherence to medications’, ‘Management of adherence’) and the discipline that studies those processes (‘Adherence-related sciences’). ‘Adherence to medications’ is the process by which patients take their medication as prescribed, further divided into three quantifiable phases: ‘Initiation’, ‘Implementation’ and ‘Discontinuation’. In response to the proliferation of ambiguous or unquantifiable terms in the literature on medication adherence, this research has resulted in a new conceptual foundation for a transparent taxonomy. The terms and definitions are focused on promoting consistency and quantification in terminology and methods to aid in the conduct, analysis and interpretation of
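
    The three quantifiable phases named in the taxonomy could, for instance, be computed from a time-stamped dosing history. The 30-day gap rule below is a hypothetical operationalization chosen for illustration; the taxonomy itself does not prescribe one, and the function names are invented:

```python
from datetime import date

def adherence_phases(prescribed_on, doses_taken, end_of_prescribing,
                     max_gap_days=30):
    """Split a dosing history into the taxonomy's phases.

    Returns (initiated, initiation_delay_days, discontinued_on).
    A gap longer than `max_gap_days` is treated as discontinuation,
    a deliberately simplified rule for illustration only.
    """
    if not doses_taken:
        return (False, None, None)               # therapy never initiated
    doses = sorted(doses_taken)
    delay = (doses[0] - prescribed_on).days      # initiation phase
    discontinued_on = None
    for prev, nxt in zip(doses, doses[1:]):      # implementation phase:
        if (nxt - prev).days > max_gap_days:     # scan for a terminal gap
            discontinued_on = prev
            break
    if discontinued_on is None and (end_of_prescribing - doses[-1]).days > max_gap_days:
        discontinued_on = doses[-1]
    return (True, delay, discontinued_on)
```

    Framing the phases this way keeps them quantifiable, which is the taxonomy's stated goal: initiation is a delay, implementation is a dosing pattern, and discontinuation is a date.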

  1. Effect of Display Color on Pilot Performance and Describing Functions

    NASA Technical Reports Server (NTRS)

    Chase, Wendell D.

    1997-01-01

    A study has been conducted with the full-spectrum, calligraphic, computer-generated display system to determine the effect of chromatic content of the visual display upon pilot performance during the landing approach maneuver. This study utilizes a new digital chromatic display system, which has previously been shown to improve the perceived fidelity of out-the-window display scenes, and presents the results of an experiment designed to determine the effects of display color content by the measurement of both vertical approach performance and pilot-describing functions. This method was selected to more fully explore the effects of visual color cues used by the pilot. Two types of landing approaches were made: dynamic and frozen range, with either a landing approach scene or a perspective array display. The landing approach scene was presented with either red runway lights and blue taxiway lights or with the colors reversed, and the perspective array with red lights, blue lights, or red and blue lights combined. The vertical performance measures obtained in this experiment indicated that the pilots performed best with the blue and red/blue displays and worst with the red displays. The describing-function system analysis showed more variation with the red displays. The crossover frequencies were lowest with the red displays and highest with the combined red/blue displays, which provided the best overall tracking performance. Describing-function performance measures, vertical performance measures, and pilot opinion support the hypothesis that specific colors in displays can influence the pilots' control characteristics during the final approach.

  2. Using UMLS metathesaurus concepts to describe medical images: dermatology vocabulary.

    PubMed

    Woods, James W; Sneiderman, Charles A; Hameed, Kamran; Ackerman, Michael J; Hatton, Charlie

    2006-01-01

    Web servers at the National Library of Medicine (NLM) displayed images of ten skin lesions to practicing dermatologists and provided an online form for capturing text they used to describe the pictures. The terms were submitted to the UMLS Metathesaurus (Meta). Concepts retrieved, their semantic types, definitions and synonyms, were returned to each subject in a second web-based form. Subjects rated the concepts against their own descriptive terms. They submitted 825 terms, 346 of which were unique and 300 mapped to UMLS concepts. The dermatologists rated 295 concepts as 'Exact Match' and they accomplished both tasks in about 30 min.

  3. Failure of random matrix theory to correctly describe quantum dynamics.

    PubMed

    Kottos, T; Cohen, D

    2001-12-01

    Consider a classically chaotic system that is described by a Hamiltonian H(0). At t=0 the Hamiltonian undergoes a sudden change H(0)-->H. We consider the quantum-mechanical spreading of the evolving energy distribution, and argue that it cannot be analyzed using a conventional random-matrix theory (RMT) approach. Conventional RMT can be trusted only to the extent that it gives trivial results that are implied by first-order perturbation theory. Nonperturbative effects are sensitive to the underlying classical dynamics, and therefore the ℏ-->0 behavior for effective RMT models is strikingly different from the correct semiclassical limit.

  4. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  5. Accurate glucose detection in a small etalon

    NASA Astrophysics Data System (ADS)

    Martini, Joerg; Kuebler, Sebastian; Recht, Michael; Torres, Francisco; Roe, Jeffrey; Kiesel, Peter; Bruce, Richard

    2010-02-01

    We are developing a continuous glucose monitor for subcutaneous long-term implantation. This detector contains a double-chamber Fabry-Perot etalon that measures the differential refractive index (RI) between a reference and a measurement chamber at 850 nm. The etalon chambers have wavelength-dependent transmission maxima which depend linearly on the RI of their contents. An RI difference of Δn = 1.5×10^-6 changes the spectral position of a transmission maximum by 1 pm in our measurement. By sweeping the wavelength of a single-mode Vertical-Cavity Surface-Emitting Laser (VCSEL) linearly in time and detecting the maximum transmission peaks of the etalon we are able to measure the RI of a liquid. We have demonstrated an accuracy of Δn = ±3.5×10^-6 over a Δn range of 0 to 1.75×10^-4 and an accuracy of 2% over a Δn range of 1.75×10^-4 to 9.8×10^-4. The accuracy is primarily limited by the reference measurement. The RI difference between the etalon chambers is made specific to glucose by the competitive, reversible release of Concanavalin A (ConA) from an immobilized dextran matrix. The matrix, and the ConA bound to it, is positioned outside the optical detection path. ConA is released from the matrix by reacting with glucose and diffuses into the optical path to change the RI in the etalon. Factors such as temperature affect the RI in the measurement and detection chambers equally and thus do not affect the differential measurement. A typical standard deviation in RI is ±1.4×10^-6 over the range 32°C to 42°C. The detector enables an accurate glucose-specific concentration measurement.
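
    Given the linear sensitivity quoted in the abstract (an RI difference of 1.5×10^-6 shifts a transmission maximum by 1 pm), converting a measured peak shift back to a differential refractive index is a one-line calculation; the function name is ours:

```python
# Linear sensitivity from the abstract: 1 pm peak shift per 1.5e-6 RI change
DELTA_N_PER_PM = 1.5e-6

def delta_n_from_shift(peak_shift_pm):
    """Convert a measured etalon transmission-peak shift (picometres)
    into a differential refractive index, assuming the linear
    sensitivity quoted in the abstract holds over the shift range."""
    return peak_shift_pm * DELTA_N_PER_PM
```

    Because both chambers are swept by the same VCSEL, common-mode effects such as temperature drop out of the differential shift, which is what this conversion relies on.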

  6. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Astrophysics Data System (ADS)

    Wheeler, K.; Knuth, K.; Castle, P.

    2005-12-01

    and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.

  7. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) with Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range between 0.25 and 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner amounts to 3% for pixels at the extreme lateral position. Light polarization due to the film and the scanner’s optical mirror system is the main contributor, with a magnitude that differs between the red, green and blue channels. We conclude that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, and thus determination of the LSE per color channel and per dose delivered to the film.

  8. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  9. An accurate metric for the spacetime around rotating neutron stars.

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-01-01

The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parameterised metric, i.e., a metric that is given in terms of parameters directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion in Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement than in previous approaches. For the parameterisation of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a 3-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  10. Extended nonlinear feedback model for describing episodes of high inflation

    NASA Astrophysics Data System (ADS)

    Szybisz, Martín A.; Szybisz, Leszek

    2017-01-01

An extension of the nonlinear feedback (NLF) formalism to describe regimes of hyper- and high inflation in economies is proposed in the present work. In the NLF model the consumer price index (CPI) exhibits a finite-time singularity of the type 1/(t_c - t)^((1 - β)/β), with β > 0, predicting a blow-up of the economy at a critical time t_c. However, this model fails to determine t_c in the case of weak hyperinflation regimes such as the one that occurred in Israel. To overcome this limitation, the NLF model is extended by introducing a parameter γ, which multiplies all terms containing the past growth rate index (GRI). In this novel approach the solution for the CPI is also analytic, being proportional to the Gaussian hypergeometric function 2F1(1/β, 1/β; 1 + 1/β; z), where z is a function of β, γ, and t_c. For z → 1 this hypergeometric function diverges, leading to a finite-time singularity from which a value of t_c can be determined. This singularity is also present in the GRI. It is shown that the interplay between the parameters β and γ may produce phenomena of multiple equilibria. An analysis of the severe hyperinflation that occurred in Hungary proves that the novel model is robust. When this model is used for examining data from Israel, a reasonable t_c is obtained. High-inflation regimes in Mexico and Iceland, which exhibit weaker inflations than that of Israel, are also successfully described.
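The divergence mechanism described above can be illustrated numerically. The following is a minimal pure-Python sketch, not the paper's implementation: it evaluates the Gauss hypergeometric series by direct summation and shows the solution growing without bound as z → 1 (the value β = 0.5 below is an illustrative assumption, not a fitted value from the paper).

```python
def hyp2f1(a, b, c, z, terms=2000):
    """Truncated Gauss hypergeometric series:
    sum over n of (a)_n (b)_n / (c)_n * z^n / n!, valid for |z| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        # Pochhammer recursion for the next series term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

# Illustrative parameter choice (assumed, not from the paper's fits):
beta = 0.5
a = b = 1.0 / beta        # 2.0
c = 1.0 + 1.0 / beta      # 3.0

# Since c - a - b < 0 here, the function diverges as z -> 1,
# which is the finite-time singularity used to locate t_c.
values = [hyp2f1(a, b, c, z) for z in (0.5, 0.9, 0.99)]
```

In practice one would map z back to t through the model's z(β, γ, t_c) and read off t_c from the divergence.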

  11. In their own words: describing Canadian physician leadership.

    PubMed

    Snell, Anita J; Dickson, Graham; Wirtzfeld, Debrah; Van Aerde, John

    2016-07-04

    Purpose This is the first study to compile statistical data to describe the functions and responsibilities of physicians in formal and informal leadership roles in the Canadian health system. This mixed-methods research study offers baseline data relative to this purpose, and also describes physician leaders' views on fundamental aspects of their leadership responsibility. Design/methodology/approach A survey with both quantitative and qualitative fields yielded 689 valid responses from physician leaders. Data from the survey were utilized in the development of a semi-structured interview guide; 15 physician leaders were interviewed. Findings A profile of Canadian physician leadership has been compiled, including demographics; an outline of roles, responsibilities, time commitments and related compensation; and personal factors that support, engage and deter physicians when considering taking on leadership roles. The role of health-care organizations in encouraging and supporting physician leadership is explicated. Practical implications The baseline data on Canadian physician leaders create the opportunity to determine potential steps for improving the state of physician leadership in Canada; and health-care organizations are provided with a wealth of information on how to encourage and support physician leaders. Using the data as a benchmark, comparisons can also be made with physician leadership as practiced in other nations. Originality/value There are no other research studies available that provide the depth and breadth of detail on Canadian physician leadership, and the embedded recommendations to health-care organizations are informed by this in-depth knowledge.

  12. Onset of spatio temporal disorder described by directed percolation

    NASA Astrophysics Data System (ADS)

    Wester, Tom; Traphan, Dominik; Gülker, Gerd; Peinke, Joachim; AG TWiSt Team

    2016-11-01

The energy transport and mixing behavior of a fluid strongly depends on the state of the flow. These properties change drastically if the flow changes from the laminar to the turbulent state. This transition is a very complex and highly unsteady phenomenon, which is still not fully understood. The central difficulty is the characterization of the onset of spatio-temporal disorder, i.e. that turbulent spots in the flow field irregularly spread or decay on their way downstream. In this presentation we will show that this critical behavior of turbulent spreading in the flow can be described by the directed percolation model. This approach has already been used for a transitional channel flow, pipe flows and different Couette flows. The charm of this model is the complete characterization of the whole transition with only a few universal exponents. In contrast to the majority of previous studies, the underlying database of this study is acquired experimentally by high-speed Particle Image Velocimetry. Thus the evolving flow can be captured in a highly resolved spatio-temporal manner. In this way it is easily possible to determine the critical exponents which describe the transient area between laminar and turbulent flow. The results will be presented and compared to theoretical expectations. DAAD, DFG.
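The spreading-versus-decay picture above can be made concrete with a toy (1+1)-dimensional bond directed percolation model. This sketch is purely illustrative (it is not the authors' analysis; lattice size, step count and the probabilities used are assumptions): activity either dies out below the critical bond probability or survives indefinitely above it.

```python
import random

def dp_survival(p, width=200, steps=200, seed=1):
    """Toy (1+1)-D bond directed percolation on a ring of `width` sites.
    A site is active at time t+1 if an active neighbour at time t
    transmits along an open bond (each bond open with probability p).
    Returns the fraction of time steps for which any activity survived."""
    rng = random.Random(seed)
    active = [True] * width            # fully active initial row
    survived = 0
    for _ in range(steps):
        if not any(active):
            break
        nxt = [False] * width
        for i, a in enumerate(active):
            if not a:
                continue
            # two forward (diagonal) bonds per active site
            if rng.random() < p:
                nxt[i] = True
            if rng.random() < p:
                nxt[(i + 1) % width] = True
        active = nxt
        survived += 1
    return survived / steps
```

Sweeping p across the transition and measuring survival and spreading statistics is how the universal directed-percolation exponents are estimated from such data.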

  13. Colour in flux: describing and printing colour in art

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna

    2008-01-01

This presentation will describe artists, practitioners and scientists working with wavelength, paint and other materials who were interested in developing a deeper psychological, emotional and practical understanding of the human visual system. Drawing on a selection of prints from the Prints and Drawings Department at Tate London, the presentation will refer to artists who were motivated by issues relating to how colour pigment is mixed and printed, to interrogate and explain colour perception and colour science; it will also consider how artists have used colour to challenge the viewer, and how a viewer might describe their experience of colour. The title Colour in Flux refers not only to the perceptual effect of the juxtaposition of one colour pigment with another, but also to the changes and challenges facing the print industry. In the light of screenprinted examples from the 60s and 70s, the presentation will discuss 21st-century ideas on colour and how these notions have informed the Centre for Fine Print Research's (CFPR) practical research in colour printing. The latter part of the presentation will discuss the implications of the need to change methods of mixing inks: moving away from existing colour spaces and non-intuitive colour mixing towards bespoke ink sets and colour mixing approaches and methods that are not reliant on RGB or CMYK.

  14. A Dynamical System that Describes Vein Graft Adaptation and Failure

    PubMed Central

    Garbey, Marc; Berceli, Scott A.

    2013-01-01

Adaptation of vein bypass grafts to the mechanical stresses imposed by the arterial circulation is thought to be the primary determinant for lesion development, yet an understanding of how the various forces dictate local wall remodeling is lacking. We develop a dynamical system that summarizes the complex interplay between the mechanical environment and cell/matrix kinetics, ultimately dictating changes in the vein graft architecture. Based on a systematic mapping of the parameter space, three general remodeling response patterns are observed: 1) shear-stabilized intimal thickening, 2) tension-induced wall thinning and lumen expansion, and 3) tension-stabilized wall thickening. Notable is our observation that the integration of multiple feedback mechanisms leads to a variety of non-linear responses that would be unanticipated by an analysis of each system component independently. This dynamic analysis supports the clinical observation that the majority of vein grafts proceed along an adaptive trajectory, where grafts dilate and mildly thicken in response to the increased tension and shear, but a small portion of the grafts demonstrate a maladaptive phenotype, where progressive inward remodeling and accentuated wall thickening lead to graft failure. PMID:23871714

  15. Describing the impact of health research: a Research Impact Framework

    PubMed Central

    Kuruvilla, Shyama; Mays, Nicholas; Pleasant, Andrew; Walt, Gill

    2006-01-01

    Background Researchers are increasingly required to describe the impact of their work, e.g. in grant proposals, project reports, press releases and research assessment exercises. Specialised impact assessment studies can be difficult to replicate and may require resources and skills not available to individual researchers. Researchers are often hard-pressed to identify and describe research impacts and ad hoc accounts do not facilitate comparison across time or projects. Methods The Research Impact Framework was developed by identifying potential areas of health research impact from the research impact assessment literature and based on research assessment criteria, for example, as set out by the UK Research Assessment Exercise panels. A prototype of the framework was used to guide an analysis of the impact of selected research projects at the London School of Hygiene and Tropical Medicine. Additional areas of impact were identified in the process and researchers also provided feedback on which descriptive categories they thought were useful and valid vis-à-vis the nature and impact of their work. Results We identified four broad areas of impact: I. Research-related impacts; II. Policy impacts; III. Service impacts: health and intersectoral and IV. Societal impacts. Within each of these areas, further descriptive categories were identified. For example, the nature of research impact on policy can be described using the following categorisation, put forward by Weiss: Instrumental use where research findings drive policy-making; Mobilisation of support where research provides support for policy proposals; Conceptual use where research influences the concepts and language of policy deliberations and Redefining/wider influence where research leads to rethinking and changing established practices and beliefs. Conclusion Researchers, while initially sceptical, found that the Research Impact Framework provided prompts and descriptive categories that helped them

  16. Strength in Numbers: Describing the Flooded Area of Isolated Wetlands

    USGS Publications Warehouse

    Lee, Terrie M.; Haag, Kim H.

    2006-01-01

Thousands of isolated, freshwater wetlands are scattered across the karst landscape of central Florida. Most are small (less than 15 acres), shallow, marsh and cypress wetlands that flood and dry seasonally. Wetland health is threatened when wetland flooding patterns are altered either by human activities, such as land-use change and ground-water pumping, or by changes in climate. Yet the small sizes and vast numbers of isolated wetlands in Florida challenge our efforts to characterize them collectively as a statewide water resource. In the northern Tampa Bay area of west-central Florida alone, water levels are measured monthly in more than 400 wetlands by the Southwest Florida Water Management District (SWFWMD). Many wetlands have over a decade of measurements. The usefulness of long-term monitoring of wetland water levels would greatly increase if it described not just the depth of water at a point in the wetland, but also the amount of the total wetland area that was flooded. Water levels can be used to estimate the flooded area of a wetland if the elevation contours of the wetland bottom are determined by bathymetric mapping. Despite the recognized importance of the flooded area to wetland vegetation, bathymetric maps are not available to describe the flooded areas of even a representative number of Florida's isolated wetlands. Information on the bathymetry of isolated wetlands is rare because it is labor intensive to collect the land-surface elevation data needed to create the maps. Five marshes and five cypress wetlands were studied by the U.S. Geological Survey (USGS) during 2000 to 2004 as part of a large interdisciplinary study of isolated wetlands in central Florida. The wetlands are located either in municipal well fields or on publicly owned lands (fig. 1). The 10 wetlands share similar geology and climate, but differ in their ground-water settings. All have historical water-level data and multiple vegetation surveys.
A comprehensive report by Haag and

  17. Accurate perception of negative emotions predicts functional capacity in schizophrenia.

    PubMed

    Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J

    2014-04-30

Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality was uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions was not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration.

  18. Accurate stone analysis: the impact on disease diagnosis and treatment.

    PubMed

    Mandel, Neil S; Mandel, Ian C; Kolbach-Mandel, Ann M

    2017-02-01

This manuscript reviews the requirements for acceptable compositional analysis of kidney stones using various biophysical methods. High-resolution X-ray powder diffraction crystallography and Fourier transform infrared spectroscopy (FTIR) are the only acceptable methods in our labs for kidney stone analysis. The use of well-constructed spectral reference libraries is the basis for accurate and complete stone analysis. The literature reviewed in this manuscript identifies errors in most commercial laboratories and in some academic centers. We provide personal comments on why such errors are occurring at such high rates; although the workload is rather large, it is very worthwhile in providing accurate stone compositions. We also provide the results of our almost 90,000 stone analyses and a breakdown of the number of components we have observed in the various stones. Finally, we offer advice on determining the method used by the various FTIR equipment manufacturers who also provide a stone analysis library, so that FTIR users can feel comfortable with the accuracy of their reported results. Such an analysis of the accuracy of the individual reference libraries could positively influence the reduction of their respective error rates.

  19. Accurate estimation of cardinal growth temperatures of Escherichia coli from optimal dynamic experiments.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2008-11-30

Prediction of the microbial growth rate as a response to changing temperatures is an important aspect in the control of food safety and food spoilage. Accurate model predictions of the microbial evolution call for correct model structures and reliable parameter values with good statistical quality. Given the widely accepted validity of the Cardinal Temperature Model with Inflection (CTMI) [Rosso, L., Lobry, J. R., Bajard, S. and Flandrois, J. P., 1995. Convenient model to describe the combined effects of temperature and pH on microbial growth, Applied and Environmental Microbiology, 61: 610-616], this paper focuses on the accurate estimation of its four parameters (T(min), T(opt), T(max) and μ(opt)) by applying the technique of optimal experiment design for parameter estimation (OED/PE). This secondary model describes the influence of temperature on the microbial specific growth rate from the minimum to the maximum temperature for growth. Dynamic temperature profiles are optimized within two temperature regions ([15 °C, 43 °C] and [15 °C, 45 °C]), focusing on the minimization of the parameter estimation (co)variance (D-optimal design). The optimal temperature profiles are implemented in a computer-controlled bioreactor, and the CTMI parameters are identified from the resulting experimental data. Approximately equal CTMI parameter values were derived irrespective of the temperature region, except for T(max). The latter could only be estimated accurately from the optimal experiments within [15 °C, 45 °C]. This observation underlines the importance of selecting the upper temperature constraint for OED/PE as close as possible to the true T(max). Cardinal temperature estimates resulting from designs within [15 °C, 45 °C] correspond with values found in the literature, are characterized by a small uncertainty, and yield a good result during validation. As compared to estimates from non-optimized dynamic
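The CTMI secondary model of Rosso et al. (1995) has a closed form, sketched below in plain Python. The cardinal values in the example call are illustrative assumptions, not the estimates obtained in the paper.

```python
def ctmi_growth_rate(T, T_min, T_opt, T_max, mu_opt):
    """Cardinal Temperature Model with Inflection (Rosso et al., 1995):
    specific growth rate mu(T); zero outside [T_min, T_max], mu_opt at T_opt."""
    if T <= T_min or T >= T_max:
        return 0.0
    num = (T - T_max) * (T - T_min) ** 2
    den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                             - (T_opt - T_max) * (T_opt + T_min - 2.0 * T))
    return mu_opt * num / den

# Illustrative cardinal values (assumed, not the paper's estimates):
rate_at_opt = ctmi_growth_rate(41.0, 5.0, 41.0, 47.0, 2.0)  # equals mu_opt
```

OED/PE then chooses dynamic temperature profiles that minimize the (co)variance of the estimates of these four parameters.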

  20. Describing linguistic information in a behavioural framework: Possible or not?

    SciTech Connect

    De Cooman, G.

    1996-12-31

The paper discusses important aspects of the representation of linguistic information, using imprecise probabilities with a behavioural interpretation. We define linguistic information as the information conveyed by statements in natural language, but restrict ourselves to simple affirmative statements of the type 'subject-is-predicate'. Taking the behavioural stance, as it is described in detail, we investigate whether it is possible to give a mathematical model for this kind of information. In particular, we evaluate Zadeh's suggestion that we should use possibility measures to this end. We come to the conclusion that, generally speaking, possibility measures are possible models for linguistic information, but that more work should be done in order to evaluate the suggestion that they may be the only ones.

  1. Bacteremia by Streptobacillus moniliformis: first case described in Spain.

    PubMed

    Torres, L; López, A I; Escobar, S; Marne, C; Marco, M L; Pérez, M; Verhaegen, J

    2003-04-01

    Described here is the case of an 87-year-old man who developed fever, chills and discomfort caused by Streptobacillus moniliformis. This pathogen is one of the causes of rat-bite fever, an uncommon bacterial illness transmitted through a bite or scratch from a rodent or the ingestion of food or water contaminated with rat faeces. Cases of rat-bite fever are rarely reported in Spain. The patient reported no history of rat bite or rodent contact, and the only known risk factor was contact with a dog and a cat that were kept as pets. Streptobacillus moniliformis was isolated in two sets of blood cultures. This case represents what is believed to be the first report of bacteremia due to Streptobacillus moniliformis in Spain.

  2. A broadly applicable function for describing luminescence dose response

    SciTech Connect

    Burbidge, C. I.

    2015-07-28

The basic form of luminescence dose response is investigated, with the aim of developing a single function to account for the appearance of linear, superlinear, sublinear, and supralinear behaviors and for variations in saturation signal level and rate. A function is assembled based on the assumption of first-order behavior in the different major factors contributing to measured luminescence-dosimetric signals. Different versions of the function are developed for standardized and non-dose-normalized responses. Data generated using a two-trap, two-recombination-center model and experimental data for natural quartz are analyzed to compare results obtained using different signals, measurement protocols, pretreatment conditions, and radiation qualities. The function describes well a range of dose-dependent behavior, including sublinear, superlinear, supralinear, and non-monotonic responses and relative response to α and β radiation, based on changes in relative recombination and trapping probability affecting signals sourced from a single electron trap.
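The first-order assumption mentioned above makes each contributing factor a saturating exponential in dose. The sketch below is not the paper's function; it is a hedged illustration (all parameter values assumed) of how summing two such first-order components already yields a sublinear, saturating dose response.

```python
import math

def saturating_signal(dose, s_max, d0):
    """First-order (single-trap-filling) component:
    L(D) = s_max * (1 - exp(-D / d0)); sublinear, saturating at s_max."""
    return s_max * (1.0 - math.exp(-dose / d0))

def combined_response(dose):
    """Illustrative sum of two first-order components with different
    saturation levels and characteristic doses (values are assumptions)."""
    return (saturating_signal(dose, s_max=100.0, d0=50.0)
            + saturating_signal(dose, s_max=400.0, d0=800.0))
```

Superlinear and non-monotonic shapes then arise when components enter with dose-dependent competition, which is what the full function accounts for.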

  3. Method to describe stochastic dynamics using an optimal coordinate.

    PubMed

    Krivov, Sergei V

    2013-12-01

A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors, which are transformed by a stochastic matrix in a simple way: every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function.

  4. A framework for describing health care delivery organizations and systems.

    PubMed

    Piña, Ileana L; Cohen, Perry D; Larson, David B; Marion, Lucy N; Sills, Marion R; Solberg, Leif I; Zerzan, Judy

    2015-04-01

    Describing, evaluating, and conducting research on the questions raised by comparative effectiveness research and characterizing care delivery organizations of all kinds, from independent individual provider units to large integrated health systems, has become imperative. Recognizing this challenge, the Delivery Systems Committee, a subgroup of the Agency for Healthcare Research and Quality's Effective Health Care Stakeholders Group, which represents a wide diversity of perspectives on health care, created a draft framework with domains and elements that may be useful in characterizing various sizes and types of care delivery organizations and may contribute to key outcomes of interest. The framework may serve as the door to further studies in areas in which clear definitions and descriptions are lacking.

  5. Concepts and methods for describing critical phenomena in fluids

    NASA Technical Reports Server (NTRS)

    Sengers, J. V.; Sengers, J. M. H. L.

    1977-01-01

The predictions of theoretical models for a critical-point phase transition in fluids, namely the classical equation with third-degree critical isotherm, that with fifth-degree critical isotherm, and the lattice gas, are reviewed. The renormalization group theory of critical phenomena and the hypothesis of universality of critical behavior supported by this theory are discussed, as well as the nature of gravity effects and how they affect critical-region experimentation in fluids. The behavior of the thermodynamic properties and the correlation function is formulated in terms of scaling laws. The predictions of these scaling laws and of the hypothesis of universality of critical behavior are compared with experimental data for one-component fluids, and it is indicated how the methods can be extended to describe critical phenomena in fluid mixtures.

  6. Principal spectra describing magnetooptic permittivity tensor in cubic crystals

    NASA Astrophysics Data System (ADS)

    Hamrlová, Jana; Legut, Dominik; Veis, Martin; Pištora, Jaromír; Hamrle, Jaroslav

    2016-12-01

We provide a unified phenomenological description of magnetooptic effects that are linear and quadratic in magnetization. The description is based on a few principal spectra describing the elements of the permittivity tensor up to second order in magnetization. Each permittivity tensor element, for any magnetization direction and any sample surface orientation, is simply determined by a weighted summation of the principal spectra, where the weights are given by the crystallographic and magnetization orientations. The number of principal spectra depends on the symmetry of the crystal; for cubic crystals possessing point symmetry, only four principal spectra are needed. Here, the principal spectra are obtained from ab initio calculations for bcc Fe, fcc Co and fcc Ni in the optical range as well as in the hard and soft x-ray energy ranges, i.e. at the 2p- and 3p-edges. We also express the principal spectra analytically using a modified Kubo formula.

  7. Angular momentum and torque described with the complex octonion

    SciTech Connect

    Weng, Zi-Hua

    2014-08-15

The paper aims to adopt the complex octonion to formulate the angular momentum, torque, force, etc. in the electromagnetic and gravitational fields. Applying the octonionic representation enables one single definition of angular momentum (or torque, or force) to combine several physical quantities that were previously considered to be independent of each other. J. C. Maxwell used two methods simultaneously, vector terminology and quaternion analysis, to depict the electromagnetic theory. This motivates the paper to introduce the quaternion space into the field theory to describe the physical features of electromagnetic and gravitational fields. The spaces of the electromagnetic field and of the gravitational field can be chosen as quaternion spaces, while the coordinate components of a quaternion space may be complex numbers. The quaternion space of the electromagnetic field is independent of that of the gravitational field. These two quaternion spaces may compose one octonion space. Conversely, one octonion space can be separated into two subspaces, the quaternion space and the S-quaternion space. In the quaternion space, one can infer the field potential, field strength, field source, angular momentum, torque, and force in the gravitational field. In the S-quaternion space, one can deduce the field potential, field strength, field source, current continuity equation, and electric (or magnetic) dipolar moment in the electromagnetic field. The results reveal that the quaternion space is appropriate for describing gravitational features, including the torque, force, and mass continuity equation, while the S-quaternion space is proper for depicting electromagnetic features, including the dipolar moment and current continuity equation. In case the field strength is weak enough, the force and the continuity equations reduce to their counterparts in the classical field theory.

  8. Using Scaling for accurate stochastic macroweather forecasts (including the "pause")

    NASA Astrophysics Data System (ADS)

    Lovejoy, Shaun; del Rio Amador, Lenin

    2015-04-01

At scales corresponding to the lifetimes of structures of planetary extent (about 5-10 days), atmospheric processes undergo a drastic "dimensional transition" from high-frequency weather to lower-frequency macroweather processes. While conventional GCMs generally reproduce well both the transition and the corresponding (scaling) statistics, due to their sensitive dependence on initial conditions the role of the weather-scale processes is to provide random perturbations to the macroweather processes. The main problem with GCMs is thus that their long-term (control run, unforced) statistics converge to the GCM climate, which is somewhat different from the real climate. This motivates building a stochastic model that exploits the empirical scaling properties and past data. It turns out that macroweather intermittency is typically low (the multifractal corrections are small), so that macroweather processes can be approximated by fractional Gaussian noise (fGn) processes, whose memory can be enormous. For example, for annual forecasts using the observed global temperature exponent, even 50 years of global temperature data would only allow us to exploit 90% of the available memory (for ocean regions, the figure increases to 600 years). The only complication is that anthropogenic effects dominate the global statistics at time scales beyond about 20 years. However, these are easy to remove using the CO2 forcing as a linear surrogate for all the anthropogenic effects. Using this theoretical framework, we show how to make accurate stochastic macroweather forecasts. We illustrate this on monthly and annual scale series of global and northern hemisphere surface temperatures (including nearly perfect hindcasts of the "pause" in the warming since 1998). We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow.
These scaling hindcasts - using a single effective climate sensitivity and single scaling exponent are
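The enormous fGn memory the abstract exploits can be illustrated numerically. The sketch below is illustrative only: the Hurst-like exponent, record length, and all numerical values are assumptions, not those of the study. It builds the standard fGn autocovariance and solves the normal equations for the optimal linear one-step hindcast weights, showing that for an exponent near 1 even distant past values retain predictive weight, whereas for white noise (H = 0.5) the past carries none.

```python
import numpy as np

def fgn_autocov(k, H, sigma2=1.0):
    """Autocovariance of fractional Gaussian noise at lag k (Hurst exponent H)."""
    k = np.abs(k)
    return 0.5 * sigma2 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
                           + np.abs(k - 1.0) ** (2 * H))

def predictor_weights(n, H):
    """Optimal linear one-step-ahead predictor weights from the n most recent
    values, obtained by solving the normal (Yule-Walker type) equations."""
    lags = np.arange(n)
    # Toeplitz covariance of the past, and covariance with the next value
    C = fgn_autocov(lags[:, None] - lags[None, :], H)
    c = fgn_autocov(lags + 1, H)
    return np.linalg.solve(C, c)

# With H near 1 (strong macroweather memory), old values still carry weight;
# with H = 0.5 (no memory) all predictor weights vanish.
w_memory = predictor_weights(50, H=0.9)
w_white = predictor_weights(50, H=0.5)
```

The slow decay of `w_memory` is the quantitative content of the claim that decades of data are needed to exhaust the available memory.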

  9. Accurate rubidium atomic fountain frequency standard

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Yuri; Marra, Giuseppe

    2011-06-01

The design, operating parameters and accuracy evaluation of the NPL Rb atomic fountain are described. The atomic fountain employs a double magneto-optical arrangement that allows a large number of 87Rb atoms to be trapped, a water-cooled temperature-stabilized interrogation region and a high-quality-factor interrogation cavity. From the uncertainties of measured and calculated systematic frequency shifts, the fractional frequency accuracy is estimated to be 3.7 × 10^-16. The fractional frequency stability, limited predominantly by noise in the local oscillator, is measured to be 7 × 10^-16 after one day of averaging. Based on the proposed quasi-continuous regime of operation of the fountain, an accuracy of 5 × 10^-17, reachable in two days of averaging, is predicted for the Rb standard.

  10. Identifying parameters to describe local land-atmosphere coupling

    NASA Astrophysics Data System (ADS)

    Ek, M. B.; Jacobs, C. M.; Santanello, J. A.; Tuinenburg, O.

    2009-12-01

The Global Energy and Water Cycle Experiment (GEWEX) Land-Atmosphere System Study / Local Coupling (GLASS/LoCo) project seeks to understand the role of local land-atmosphere coupling in the evolution of surface fluxes and boundary layer state variables including clouds. The theme of land-atmosphere interaction is a research area that is rapidly developing; after the well-known GLACE experiments and various diagnostic studies, new research has evolved in modeling and observing the degree of land-atmosphere coupling on local scales. Questions of interest are (1) how much is coupling related to local versus "remote" processes, (2) what is the nature and strength of coupling, and (3) how does this change (e.g. for different temporal and spatial scales, geographic regions, and changing climates). As such, this is an important issue on both weather and climate time scales. The GLASS/LoCo working group is investigating diagnostics to quantify land-atmosphere coupling. Coupling parameters include the roles of soil moisture and surface evaporative fraction as well as the evolving atmospheric boundary layer and boundary-layer entrainment. After suitable diagnostic parameters are identified, observational data and output from weather and climate models will be used to "map" land-atmosphere coupling with regard to (1)-(3) above.
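One of the coupling parameters named above, the surface evaporative fraction, has a simple definition: EF = LE / (LE + H), the share of available surface energy going into evaporation. The sketch below is a minimal illustration; the flux values are hypothetical, not from any LoCo dataset.

```python
import numpy as np

def evaporative_fraction(le, h):
    """Evaporative fraction EF = LE / (LE + H), with LE the latent and H the
    sensible heat flux (W m^-2). EF near 1 indicates evaporation-dominated
    energy partitioning (moist surface); EF near 0 a dry, sensible-heat regime."""
    le = np.asarray(le, dtype=float)
    h = np.asarray(h, dtype=float)
    return le / (le + h)

# Hypothetical midday fluxes over a moist and a dry surface
ef_moist = evaporative_fraction(300.0, 100.0)   # 300/400 = 0.75
ef_dry = evaporative_fraction(50.0, 350.0)      # 50/400 = 0.125
```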

  11. How well is our Universe described by an FLRW model?

    NASA Astrophysics Data System (ADS)

    Green, Stephen R.; Wald, Robert M.

    2014-12-01

Extremely well! In the ΛCDM model, the spacetime metric, g_ab, of our Universe is approximated by an FLRW metric, g_ab^(0), to about one part in 10^4 or better on both large and small scales, except in the immediate vicinity of very strong field objects, such as black holes. However, derivatives of g_ab are not close to derivatives of g_ab^(0), so there can be significant differences in the behavior of geodesics and huge differences in curvature. Consequently, observable quantities in the actual Universe may differ significantly from the corresponding observables in the FLRW model. Nevertheless, as we shall review here, we have proven general results showing that (within the framework of our approach to treating backreaction) the large matter inhomogeneities that occur on small scales cannot produce significant effects on large scales, so g_ab^(0) satisfies Einstein's equation with the averaged stress-energy tensor of matter as its source. We discuss the flaws in some other approaches that have suggested that large backreaction effects may occur. As we also will review here, with a suitable ‘dictionary,’ Newtonian cosmologies provide excellent approximations to cosmological solutions to Einstein's equation (with dust and a cosmological constant) on all scales. Our results thereby provide strong justification for the mathematical consistency and validity of the ΛCDM model within the context of general relativistic cosmology.

  12. Beyond Rainfall Multipliers: Describing Input Uncertainty as an Autocorrelated Stochastic Process Improves Inference in Hydrology

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Albert, C.; Reichert, P.; Rieckermann, J.

    2015-12-01

Rainfall is the main driver of hydrological systems. Unfortunately, it is highly variable in space and time and therefore difficult to observe accurately. This poses a serious challenge to correctly estimating the catchment-averaged precipitation, a key factor for hydrological models. As biased precipitation leads to biased parameter estimation and thus to biased runoff predictions, it is very important to have a realistic description of precipitation uncertainty. Rainfall multipliers (RM), which correct each observed storm with a random factor, provide a first step in this direction. Nevertheless, they often fail when the estimated input has a different temporal pattern from the true one or when a storm is not detected by the raingauge. In this study we propose a more realistic input error model, which is able to overcome these challenges and increase our certainty by better estimating model input and parameters. We formulate the average precipitation over the watershed as a stochastic input process (SIP). We suggest a transformed Gauss-Markov process, which is estimated in a Bayesian framework by using input (rainfall) and output (runoff) data. We tested the methodology in a 28.6 ha urban catchment represented by an accurate conceptual model. Specifically, we perform calibration and predictions with SIP and RM using accurate data from nearby raingauges (R1) and inaccurate data from a distant gauge (R2). Results show that using SIP, the estimated model parameters are "protected" from the corrupting impact of inaccurate rainfall. Additionally, SIP can correct input biases during calibration (Figure) and reliably quantify rainfall and runoff uncertainties during both calibration (Figure) and validation. In our real-world application with non-trivial rainfall errors, this was not the case with RM. We therefore recommend SIP in all cases where the input is the predominant source of uncertainty. Furthermore, the high-resolution rainfall intensities obtained with this
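A minimal sketch of the SIP idea, assuming an Ornstein-Uhlenbeck (Gauss-Markov) latent process and an exponential transformation that keeps rainfall intensities non-negative; the correlation time, noise scale, and transformation used here are illustrative stand-ins, not the calibrated quantities of the study.

```python
import numpy as np

def simulate_sip(n_steps, dt, tau, sigma, seed=0):
    """Simulate a toy stochastic input process: an Ornstein-Uhlenbeck
    (Gauss-Markov) latent state xi_t with correlation time tau, mapped through
    a nonlinear transformation so that rainfall stays non-negative and can be
    exactly zero (dry spells). All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    xi = np.zeros(n_steps)
    a = np.exp(-dt / tau)                 # exact AR(1) coefficient of the OU process
    s = sigma * np.sqrt(1.0 - a * a)      # innovation scale keeping variance sigma^2
    for t in range(1, n_steps):
        xi[t] = a * xi[t - 1] + s * rng.standard_normal()
    # Exponential transform keeps intensities positive; subtracting an offset
    # and clipping at zero produces intermittent dry periods.
    rain = np.maximum(np.exp(xi) - 1.0, 0.0)
    return rain

rain = simulate_sip(n_steps=2000, dt=1.0, tau=20.0, sigma=0.8)
```

In the Bayesian setting of the paper, the latent state is inferred jointly with the model parameters from rainfall and runoff data rather than simulated freely as here.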

  13. Acquisition of accurate data from intramolecular quenched fluorescence protease assays.

    PubMed

    Arachea, Buenafe T; Wiener, Michael C

    2017-04-01

    The Intramolecular Quenched Fluorescence (IQF) protease assay utilizes peptide substrates containing donor-quencher pairs that flank the scissile bond. Following protease cleavage, the dequenched donor emission of the product is subsequently measured. Inspection of the IQF literature indicates that rigorous treatment of systematic errors in observed fluorescence arising from inner-filter absorbance (IF) and non-specific intermolecular quenching (NSQ) is incompletely performed. As substrate and product concentrations vary during the time-course of enzyme activity, iterative solution of the kinetic rate equations is, generally, required to obtain the proper time-dependent correction to the initial velocity fluorescence data. Here, we demonstrate that, if the IQF assay is performed under conditions where IF and NSQ are approximately constant during the measurement of initial velocity for a given initial substrate concentration, then a simple correction as a function of initial substrate concentration can be derived and utilized to obtain accurate initial velocity data for analysis.
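Under the constant-IF/NSQ conditions described, the classical inner-filter correction F_corr = F_obs × 10^((A_ex + A_em)/2) reduces to a function of the initial substrate concentration alone (absorbance being proportional to it). The sketch below illustrates that correction; the attenuation coefficients and concentrations are hypothetical, and the paper's own derived correction may differ in detail.

```python
def inner_filter_correction(f_obs, s0, eps_ex, eps_em, path_cm=1.0):
    """Correct an observed initial-velocity fluorescence signal for inner-filter
    absorbance, assuming absorbance is dominated by (and proportional to) the
    initial substrate concentration s0. eps_ex / eps_em are effective molar
    attenuation coefficients (M^-1 cm^-1) at the excitation and emission
    wavelengths; the values used below are illustrative only.

    Classical IF correction: F_corr = F_obs * 10**((A_ex + A_em) / 2).
    """
    a_ex = eps_ex * s0 * path_cm
    a_em = eps_em * s0 * path_cm
    return f_obs * 10.0 ** ((a_ex + a_em) / 2.0)

# At low substrate concentration the correction is negligible...
low = inner_filter_correction(100.0, s0=1e-6, eps_ex=10000.0, eps_em=2000.0)
# ...while at high concentration the observed signal must be scaled up appreciably.
high = inner_filter_correction(100.0, s0=1e-4, eps_ex=10000.0, eps_em=2000.0)
```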

  14. Accurate determination of serum ASAT isoenzymes.

    PubMed

    Konttinen, A; Ojala, K

    1978-01-01

An improved electrophoretic modification for measuring aspartate aminotransferase (ASAT) isoenzymes is presented. This method fulfils the clinical requirements for sensitivity and allows the detection of 1 U/l mitochondrial ASAT activity at 25 °C. The procedure is relatively simple, requiring about one hour for a series of 8 determinations. Mitochondrial ASAT activity was found in all patients suffering from acute myocardial infarction; pathological activity was observed for several days longer than that of the total serum ASAT enzyme. None of the 25 healthy people studied had mitochondrial ASAT in their serum.

  15. Conceptual hierarchical modeling to describe wetland plant community organization

    USGS Publications Warehouse

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed this procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble the hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper-level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.

  16. Conceptual framework describing a child's total (built, natural ...

    EPA Pesticide Factsheets

The complexity of the components and their interactions that characterize children’s health and well-being are not adequately captured by current public health paradigms. Children are exposed to combinations of chemical and non-chemical stressors from their built, natural, and social environments at each lifestage and throughout their lifecourse. Children’s inherent characteristics (e.g., sex, genetics, pre-existing disease) and their activities and behaviors also influence their exposures to chemical and non-chemical stressors from these environments. We describe a conceptual framework that considers the interrelationships between inherent characteristics, activities and behaviors, and stressors (both chemical and non-chemical) from the built, natural, and social environments in influencing children’s health and well-being throughout their lifecourse. This framework comprises several intersecting circles that represent how stressors from the total environment interact with children’s inherent characteristics and their activities and behaviors to influence their health and well-being at each lifestage and throughout their lifecourse. We used this framework to examine the complex interrelationships between chemical and non-chemical stressors for two public health challenges specific to children: childhood obesity and general cognitive ability. One systematic scoping review showed that children’s general cognitive ability was influenced not only by

  17. The complexity of organizational change: describing communication during organizational turbulence.

    PubMed

    Salem, Philip

    2013-01-01

Organizational researchers and practitioners have been interested in organizational change for some time. Historically, they have directed most of their efforts at improving the efficiency of planned top-down change. These efforts were strategic attempts at altering parameters leading to transformational change. Most efforts failed to meet their intended purposes. Transformational organizational change has not been likely. The legitimate systems have been robust. There has been little systematic investigation of the communication occurring during these efforts. The purpose of this essay is to describe results of a mixed methods research project answering two research questions. (a) How do organizational members communicate during a time of turbulence? (b) What features of this communication suggest the potential for or resistance to transformative change? Comparing the results at the beginning of the period to other periods gives insight into how social actors communicate and enact the organization during a threshold period where transformational change was possible. Results reveal identifiable patterns of communication as communication strategies, parameters, or basins of attraction. The overall pattern explains how micro communication patterns intersect and how the accumulation of these patterns may resist or accomplish change at a macro level.

  18. Jan Evangelista Purkynje (1787-1869): first to describe fingerprints.

    PubMed

    Grzybowski, Andrzej; Pietrzak, Krzysztof

    2015-01-01

Fingerprints have been used for years as the accepted tool in criminology and for identification. The first system of classification of fingerprints was introduced by Jan Evangelista Purkynje (1787-1869), a Czech physiologist, in 1823. He divided the papillary lines into nine types, based on their geometric arrangement. This work, however, was not recognized internationally for many years. In 1858, Sir William Herschel (1833-1917) registered fingerprints for those signing documents at the Indian magistrate's office in Jungipoor. Henry Faulds (1843-1930) in 1880 proposed using ink for fingerprint determination and people identification, and Francis Galton (1822-1911) collected 8000 fingerprints and developed their classification based on the spirals, loops, and arches. In 1892, Juan Vucetich (1858-1925) created his own fingerprint identification system and proved that a woman was responsible for killing two of her sons. In 1896, a London police officer, Edward Henry (1850-1931), expanded on earlier systems of classification and used papillary lines to identify criminals; it was his system that was adopted by the forensic world. The work of Jan Evangelista Purkynje (1787-1869) (Figure 1), who in 1823 was the first to describe fingerprints in detail, is almost forgotten. He also established their classification. The year 2013 marked the 190th anniversary of the publication of his work on this topic. Our contribution is an attempt to introduce the reader to this scientist and his discoveries in the field of fingerprint identification.

  19. Search for an Average Potential describing Transfer Reactions

    NASA Astrophysics Data System (ADS)

    Suehiro, Teruo; Nakagawa, Takemi

    2001-10-01

A variety of attempts, such as coupled channels, non-locality corrections of optical potentials, and projectile breakup, were made to resolve discrepancies between distorted-wave Born approximation (DWBA) calculations and experimental differential cross-section data for transfer reactions initiated by light ions. The present work assumes that these discrepancies basically reflect the detailed structure of the average interaction exerted on the nucleons involved in the transfer. Computations were carried out searching for a potential that successfully describes both the transfer reactions and the ordering and energies of neutron shells in the relevant nuclei. The (p,d) reactions on ^54,56Fe and ^58Ni at 40 and 50 MeV were taken as examples, for which experimental data with good statistics exist over a wide angular range. The potential was simulated by a sum of the volume and the derivative Woods-Saxon potentials with seven free parameters. Finite-range DWBA calculations were done with the code DWUCK5 (We are much indebted to Prof. P. D. Kunz for providing us with a PC version of the code DWUCK5, without which this work would have been impossible.). One set of such interaction potentials was obtained which is markedly different from the volume Woods-Saxon potential customarily used in previous calculations. Implications of this potential will be discussed with regard to the matter distributions of nuclei.
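The form of the searched potential, a sum of volume and derivative (surface-peaked) Woods-Saxon terms, can be sketched as follows. Six of the free parameters are the usual depths, radii, and diffusenesses shown here; the parameter values are illustrative, not the fitted set of the study.

```python
import numpy as np

def woods_saxon_sum(r, V0, R0, a0, Vs, Rs, as_):
    """Volume Woods-Saxon term plus a derivative (surface) term.
    f is the standard Woods-Saxon form factor; g = -4*a*df/dr is the
    derivative form factor, normalized to peak at 1 at r = Rs."""
    f = 1.0 / (1.0 + np.exp((r - R0) / a0))
    e = np.exp((r - Rs) / as_)
    g = 4.0 * e / (1.0 + e) ** 2
    return -V0 * f - Vs * g

# Illustrative parameters (depths in MeV, radii and diffusenesses in fm)
r = np.linspace(0.0, 12.0, 241)
V = woods_saxon_sum(r, V0=50.0, R0=4.5, a0=0.65, Vs=10.0, Rs=4.5, as_=0.65)
```

At the nuclear surface (r = Rs) the derivative term contributes fully while the volume term has fallen to half depth, which is exactly the kind of extra surface structure a pure volume Woods-Saxon form cannot represent.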

  20. General Method for Describing Three-Dimensional Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Titov, Viacheslav; Forbes, Terry; Priest, Eric; Mikic, Zoran; Linker, Jon

    2009-11-01

    A general method for describing magnetic reconnection in arbitrary three-dimensional magnetic configurations is proposed. The method is based on the field-line mapping technique previously used only for the analysis of magnetic structure at a given time. This technique is extended here so as to analyze the evolution of magnetic structure. Such a generalization is made with the help of new dimensionless quantities called ``slip-squashing factors''. Their large values define the surfaces that border the reconnected or to-be-reconnected magnetic flux tubes for a given period of time during the magnetic evolution. The proposed method is universal, since it assumes only that the time sequence of evolving magnetic field and the tangential boundary flows are known. We illustrate our method for several examples and compare it with the general magnetic reconnection theory, proposed previously by Hesse and coworkers. The new method admits a straightforward numerical implementation and provides a powerful tool for the diagnostics of numerical data obtained in theoretical or experimental studies of magnetic reconnection in space and laboratory plasmas.
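For reference, the time-independent degree of squashing Q, which the slip-squashing factors of the abstract generalize to evolving fields, can be computed directly from the Jacobian of the field-line mapping. A minimal finite-difference sketch with a hypothetical mapping:

```python
import numpy as np

def squashing_factor(mapping, x, y, h=1e-5):
    """Degree of squashing Q of a field-line mapping (x, y) -> (X, Y):
    Q = (a^2 + b^2 + c^2 + d^2) / |a*d - b*c|, where [[a, b], [c, d]] is the
    Jacobian of the mapping, estimated here by central differences. Large Q
    marks quasi-separatrix layers; the minimum possible value is Q = 2."""
    a = (mapping(x + h, y)[0] - mapping(x - h, y)[0]) / (2.0 * h)
    b = (mapping(x, y + h)[0] - mapping(x, y - h)[0]) / (2.0 * h)
    c = (mapping(x + h, y)[1] - mapping(x - h, y)[1]) / (2.0 * h)
    d = (mapping(x, y + h)[1] - mapping(x, y - h)[1]) / (2.0 * h)
    return (a * a + b * b + c * c + d * d) / abs(a * d - b * c)

# A uniform shear mapping (hypothetical): Jacobian [[1, 2], [0, 1]], so Q = 6
shear = lambda x, y: (x + 2.0 * y, y)
q_shear = squashing_factor(shear, 0.3, 0.7)
```

The slip-squashing factors apply the same construction to the mapping that composes field-line tracing at two times with the boundary flows in between.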

  1. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between the measured orbits of meteoroids and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of the orbits of meteoroids therefore largely depends on the computation of the pre-atmospheric velocities. It is thus imperative to determine how to increase the precision of the velocity measurements. In this work, we perform an analysis of different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting-planes method developed by Ceplecha (1987), the least-squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performances of these techniques, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one to solve the MPF, and the influence of the geometry of the trajectory on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows an accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations seems to show that while the MPF is by far the best method to solve the trajectory and the velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimate errors for noisy
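A drastically simplified stand-in for the trajectory-fitting step can show how a pre-atmospheric velocity is recovered from noisy positional data. The sketch below assumes a constant-deceleration propagation model fit by least squares; the real MPF of Gural (2012) fits richer deceleration models and the full multi-camera geometry simultaneously, and all numbers here are illustrative.

```python
import numpy as np

def fit_trajectory(t, length):
    """Fit the observed along-track length of a meteor as
    l(t) = l0 + v0*t + 0.5*a*t**2 and return the initial speed v0 and the
    (constant) deceleration a, recovered by linear least squares."""
    coeffs = np.polyfit(t, length, 2)   # numpy returns highest power first
    a = 2.0 * coeffs[0]
    v0 = coeffs[1]
    return v0, a

# Synthetic observations with measurement noise (all values illustrative)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.5, 30)                      # seconds
true_v0, true_a = 35.0, -8.0                       # km/s and km/s^2
l = true_v0 * t + 0.5 * true_a * t ** 2 + rng.normal(0.0, 0.01, t.size)
v0_est, a_est = fit_trajectory(t, l)
```

Even in this toy setting one can see the conditioning issue the abstract mentions: velocity and deceleration are estimated from the same short arc, so noise in the curvature of l(t) leaks into the inferred pre-atmospheric speed.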

  2. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data.

    PubMed

    Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A

    2016-05-01

The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness-of-fit), which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available.
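The core of the beta-binomial extension can be written down compactly. In the sketch below, p is the success probability predicted by the psychometric function at a given stimulus level, and s is an overdispersion parameter (alpha = p·s, beta = (1 − p)·s), so the binomial model is recovered as s → ∞. This is a common parameterization, not necessarily the exact one used in psignifit 4.

```python
from math import lgamma, exp

def log_betabinom(k, n, p, s):
    """Log-probability of k successes in n trials under a beta-binomial with
    mean p and overdispersion parameter s (alpha = p*s, beta = (1-p)*s).
    pmf(k) = C(n, k) * B(k + alpha, n - k + beta) / B(alpha, beta)."""
    a, b = p * s, (1.0 - p) * s
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))

def betabinom_var(n, p, s):
    """Variance n*p*(1-p)*(s + n)/(s + 1): always at least the binomial
    variance n*p*(1-p), approaching it as s grows (no overdispersion)."""
    return n * p * (1.0 - p) * (s + n) / (s + 1.0)
```

The widened variance is what lets credible intervals stay honest for non-stationary observers: a binomial likelihood would be overconfident on the same data.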

  3. IDSite: An accurate approach to predict P450-mediated drug metabolism

    PubMed Central

    Li, Jianing; Schneebeli, Severin T.; Bylund, Joseph; Farid, Ramy; Friesner, Richard A.

    2011-01-01

Accurate prediction of drug metabolism is crucial for drug design. Since the large majority of drug metabolism involves P450 enzymes, we herein describe a computational approach, IDSite, to predict P450-mediated drug metabolism. To model induced-fit effects, IDSite samples the conformational space with flexible docking in Glide followed by two refinement stages using the Protein Local Optimization Program (PLOP). Sites of metabolism (SOMs) are predicted according to a physics-based score that evaluates the potential of atoms to react with the catalytic iron center. As a preliminary test, we present in this paper the prediction of hydroxylation and O-dealkylation sites mediated by CYP2D6 using two different models: a physics-based simulation model, and a modification of this model in which a small number of parameters are fit to a training set. Without fitting any parameters to experimental data, the Physical IDSite scoring recovers 83% of the experimental observations for 56 compounds with a very low false positive rate. With only 4 fitted parameters, the Fitted IDSite was trained with the subset of 36 compounds and successfully applied to the other 20 compounds, recovering 94% of the experimental observations with high sensitivity and specificity for both sets. PMID:22247702

  4. A simple polymeric model describes cell nuclear mechanical response

    NASA Astrophysics Data System (ADS)

    Banigan, Edward; Stephens, Andrew; Marko, John

    The cell nucleus must continually resist inter- and intracellular mechanical forces, and proper mechanical response is essential to basic cell biological functions as diverse as migration, differentiation, and gene regulation. Experiments probing nuclear mechanics reveal that the nucleus stiffens under strain, leading to two characteristic regimes of force response. This behavior depends sensitively on the intermediate filament protein lamin A, which comprises the outer layer of the nucleus, and the properties of the chromatin interior. To understand these mechanics, we study a simulation model of a polymeric shell encapsulating a semiflexible polymer. This minimalistic model qualitatively captures the typical experimental nuclear force-extension relation and observed nuclear morphologies. Using a Flory-like theory, we explain the simulation results and mathematically estimate the force-extension relation. The model and experiments suggest that chromatin organization is a dominant contributor to nuclear mechanics, while the lamina protects cell nuclei from large deformations.

  5. Suitability of parametric models to describe the hydraulic properties of an unsaturated coarse sand and gravel

    USGS Publications Warehouse

    Mace, Andy; Rudolph, David L.; Kachanoski, R. Gary

    1998-01-01

The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem (m = 1 - 1/n) criterion is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ≈ 0 rather than model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
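The restricted van Genuchten-Mualem forms evaluated in the study can be sketched directly. The parameter values in the usage lines are merely illustrative of a coarse sand, not the fitted values of the paper.

```python
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) retention curve with the Mualem restriction
    m = 1 - 1/n:  theta(h) = theta_r + (theta_s - theta_r)*[1 + (alpha*|h|)^n]^(-m)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(h)) ** n) ** (-m)

def vg_k_relative(h, alpha, n):
    """Mualem-based relative conductivity K_r = Se^0.5 * [1 - (1 - Se^(1/m))^m]^2,
    with Se the effective saturation. Multiply by the saturated K to get K(h);
    as the abstract notes, scaling instead with at least one measured
    unsaturated K markedly improves accuracy."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
    return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Illustrative coarse-sand parameters: theta_r=0.05, theta_s=0.35,
# alpha=0.1 cm^-1, n=3 (h in cm of pressure head)
theta_wet = vg_theta(0.0, 0.05, 0.35, 0.1, 3.0)     # saturation: theta = theta_s
kr_dry = vg_k_relative(-100.0, 0.1, 3.0)            # conductivity collapses when dry
```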

  6. Describing rainfall in northern Australia using multiple climate indices

    NASA Astrophysics Data System (ADS)

    Wilks Rogers, Cassandra Denise; Beringer, Jason

    2017-02-01

Savanna landscapes are globally extensive and highly sensitive to climate change, yet the physical processes and climate phenomena which affect them remain poorly understood and therefore poorly represented in climate models. Both human populations and natural ecosystems are highly susceptible to precipitation variation in these regions due to the effects on water and food availability and atmosphere-biosphere energy fluxes. Here we quantify the relationships between climate phenomena and historical rainfall variability in Australian savannas and, in particular, how these relationships changed across a strong rainfall gradient, namely the North Australian Tropical Transect (NATT). Climate phenomena were described by 16 relevant climate indices and correlated against precipitation from 1900 to 2010 to determine the relative importance of each climate index on seasonal, annual and decadal timescales. Precipitation trends, climate index trends and wet season characteristics have also been investigated using linear statistical methods. In general, climate index-rainfall correlations were stronger in the north of the NATT, where annual rainfall variability was lower and a high proportion of rainfall fell during the wet season. This is consistent with a decreased influence of the Indian-Australian monsoon from the north to the south. Seasonal variation was most strongly correlated with the Australian Monsoon Index, whereas yearly variability was related to a greater number of climate indices, predominantly the Tasman Sea and Indonesian sea surface temperature indices (both of which experienced a linear increase over the duration of the study) and the El Niño-Southern Oscillation indices. These findings highlight the importance of understanding the climatic processes driving variability and, subsequently, the importance of understanding the relationships between rainfall and climatic phenomena in the Northern Territory in order to project future rainfall patterns in the

  7. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...

  8. A unique, accurate LWIR optics measurement system

    NASA Astrophysics Data System (ADS)

    Fantone, Stephen D.; Orband, Daniel G.

    2011-05-01

A compact low-cost LWIR test station has been developed that provides real time MTF testing of IR optical systems and EO imaging systems. The test station is intended to be operated by a technician and can be used to measure the focal length, blur spot size, distortion, and other metrics of system performance. The challenges and tradeoffs incorporated into this instrumentation will be presented. The test station performs the measurement of an IR lens or optical system's first-order quantities (focal length, back focal length) including on- and off-axis imaging performance (e.g., MTF, resolution, spot size) under actual test conditions to enable the simulation of their actual use. Also described is the method of attaining the needed accuracies so that derived calculations like focal length (EFL = image shift/tan(theta)) can be performed to the requisite accuracy. The station incorporates a patented video capture technology and measures MTF and blur characteristics using newly available low-cost LWIR cameras. This allows real time determination of the optical system performance, enabling faster measurements, higher throughput and lower cost results than scanning systems. Multiple spectral filters are also accommodated within the test station, which facilitates performance evaluation under various spectral conditions.
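The quoted focal-length relation is straightforward to apply: tilting the collimated input by a known angle shifts the image laterally, and EFL = image shift / tan(theta). A minimal sketch with hypothetical measured values:

```python
import math

def effective_focal_length(image_shift_mm, theta_deg):
    """EFL = image shift / tan(theta): recover the effective focal length from
    the lateral image displacement produced by a known tilt of the collimated
    input beam. Units: shift in mm, angle in degrees, result in mm."""
    return image_shift_mm / math.tan(math.radians(theta_deg))

# A hypothetical 1.745 mm image shift for a 1 degree tilt implies an EFL
# close to 100 mm; the accuracy of EFL inherits directly from the accuracy
# of the shift and angle measurements, which is the point made above.
efl = effective_focal_length(1.745, 1.0)
```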

  9. Accurate spectral numerical schemes for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon; Cerfon, Antoine J.; Landreman, Matt

    2015-08-01

    We examine the merits of using a family of polynomials that are orthogonal with respect to a non-classical weight function to discretize the speed variable in continuum kinetic calculations. We consider a model one-dimensional partial differential equation describing energy diffusion in velocity space due to Fokker-Planck collisions. This relatively simple case allows us to compare the results of the projected dynamics with an expensive but highly accurate spectral transform approach. It also allows us to integrate in time exactly, and to focus entirely on the effectiveness of the discretization of the speed variable. We show that for a fixed number of modes or grid points, the non-classical polynomials can be many orders of magnitude more accurate than classical Hermite polynomials or finite-difference solvers for kinetic equations in plasma physics. We provide a detailed analysis of the difference in behavior and accuracy of the two families of polynomials. For the non-classical polynomials, if the initial condition is not smooth at the origin when interpreted as a three-dimensional radial function, the exact solution leaves the polynomial subspace for a time, but returns (up to roundoff accuracy) to the same point evolved to by the projected dynamics in that time. By contrast, using classical polynomials, the exact solution differs significantly from the projected dynamics solution when it returns to the subspace. We also explore the connection between eigenfunctions of the projected evolution operator and (non-normalizable) eigenfunctions of the full evolution operator, as well as the effect of truncating the computational domain.
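A minimal sketch of constructing polynomials orthonormal under a non-classical weight by Gram-Schmidt on a quadrature grid. The weight x^2 exp(-x^2) used below is one Maxwellian-type choice for a speed variable; the truncated domain, grid resolution, and trapezoid rule are assumptions of this illustration, not the discretization of the paper.

```python
import numpy as np

def orthonormal_polys(n_max, weight_fn, x, w):
    """Gram-Schmidt construction of polynomials orthonormal with respect to
    weight_fn on the quadrature grid (nodes x, weights w). Each returned row
    holds monomial coefficients, lowest order first."""
    wt = weight_fn(x) * w
    basis = []
    for n in range(n_max):
        c = np.zeros(n_max)
        c[n] = 1.0                                   # start from the monomial x^n
        for p in basis:
            proj = np.sum(wt * np.polyval(c[::-1], x) * np.polyval(p[::-1], x))
            c = c - proj * p
        c = c / np.sqrt(np.sum(wt * np.polyval(c[::-1], x) ** 2))
        basis.append(c)
    return np.array(basis)

# Truncated half-line grid with trapezoid weights; the weight decays fast
# enough that the truncation at x = 12 is harmless for low orders.
x = np.linspace(0.0, 12.0, 4001)
dx = x[1] - x[0]
w = np.full_like(x, dx)
w[0] = w[-1] = 0.5 * dx
P = orthonormal_polys(4, lambda s: s ** 2 * np.exp(-s ** 2), x, w)
```

Expanding a distribution function in such a basis, rather than in classical Hermite polynomials, is the discretization choice whose accuracy the abstract analyzes.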

  10. Expressive writing difficulties in children described as exhibiting ADHD symptoms.

    PubMed

    Re, Anna Maria; Pedron, Martina; Cornoldi, Cesare

    2007-01-01

    Three groups of children of different ages who were considered by their teachers to show symptoms of attention-deficit/hyperactivity disorder (ADHD), together with matched controls, were tested on a series of expressive writing tasks derived from a standardized writing test. In the first study, 24 sixth- and seventh-grade children with ADHD symptoms wrote a description of an image. The ADHD group's expressive writing was worse than that of the control group and was associated with a higher number of errors, mainly concerning accents and geminates. The second study showed the generality of the effect by testing younger groups of children with ADHD symptoms and controls on another description task, in which a verbal description was substituted for the picture stimulus. The third study extended the previous observations to another type of writing task, the request to write a narrative text. In all three studies, children with ADHD symptoms scored lower than controls on four qualitative parameters (adequacy, structure, grammar, and lexicon), produced shorter texts, and made more errors. These studies show that children with ADHD symptoms also have school difficulties in writing, both in spelling and in expression, and that these difficulties extend across different tasks and ages.

  11. A Biophysical Neural Model To Describe Spatial Visual Attention

    NASA Astrophysics Data System (ADS)

    Hugues, Etienne; José, Jorge V.

    2008-02-01

    Visual scenes contain enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in the visual area known as V4 when the animal pays attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without paying attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory input, at the high levels present under in vivo conditions. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  12. Noise reduction for modal parameters estimation using algorithm of solving partially described inverse singular value problem

    NASA Astrophysics Data System (ADS)

    Bao, Xingxian; Cao, Aixia; Zhang, Jing

    2016-07-01

    Modal parameters estimation plays an important role in structural health monitoring. Accurately estimating the modal parameters of structures is challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm for solving the partially described inverse singular value problem (PDISVP), combined with the complex exponential (CE) method, to estimate the modal parameters. The PDISVP solving method reconstructs an L2-norm optimized (filtered) data matrix from the measured (noisy) data matrix when the prescribed data constraints are one or several sets of singular triplets of the matrix. The measured data matrix is Hankel structured and is constructed from the measured impulse response function (IRF). The reconstructed matrix must maintain the Hankel structure and be lowered in rank as well. Once the filtered IRF is obtained, the CE method can be applied to extract the modal parameters. Two physical experiments, a steel cantilever beam with 10 accelerometers mounted and a steel plate with 30 accelerometers mounted, each excited by an impulsive load, are investigated to test the applicability of the proposed scheme. In addition, a consistency diagram is proposed to examine the agreement among the modal parameters estimated from the different accelerometers. Results indicate that the PDISVP-CE method can significantly remove noise from measured signals and accurately estimate the modal frequencies and damping ratios.
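The reconstruction step can be illustrated with a simplified stand-in: a Cadzow-style iteration (rank truncation of the Hankel matrix followed by anti-diagonal averaging to restore the Hankel structure), then a one-mode Prony fit in place of the full CE method. This is a sketch of the general idea, not the paper's PDISVP algorithm; the signal and all parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic impulse response: one mode at 5 Hz, light damping,
# sampled at 100 Hz, with additive measurement noise.
dt, n = 0.01, 400
t = np.arange(n) * dt
f_true = 5.0
h_clean = np.exp(-0.5 * t) * np.cos(2 * np.pi * f_true * t)
h_noisy = h_clean + 0.2 * rng.standard_normal(n)

def hankel(v, rows):
    cols = len(v) - rows + 1
    return np.array([v[i:i + cols] for i in range(rows)])

def cadzow(v, rank, rows, iters=20):
    """Alternate rank truncation and anti-diagonal averaging (Hankel repair)."""
    for _ in range(iters):
        H = hankel(v, rows)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # best rank-r approximation
        # restore the Hankel structure by averaging each anti-diagonal
        v = np.array([np.mean(np.diag(H[:, ::-1], k))
                      for k in range(H.shape[1] - 1, -H.shape[0], -1)])
    return v

h_filt = cadzow(h_noisy, rank=2, rows=100)

# One-mode Prony fit: h[k+2] = a1*h[k+1] + a2*h[k]; the poles are the roots
# of z^2 - a1*z - a2, and the modal frequency is angle(pole)/(2*pi*dt).
A = np.column_stack([h_filt[1:-1], h_filt[:-2]])
a1, a2 = np.linalg.lstsq(A, h_filt[2:], rcond=None)[0]
pole = np.roots([1.0, -a1, -a2])[0]
f_est = abs(np.angle(pole)) / (2 * np.pi * dt)
print(round(f_est, 2))
```

With the noise largely removed by the rank-2 projection, the Prony fit recovers the modal frequency to within a few hundredths of a hertz.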

  13. Nightcap measurement of sleep quality in self-described good and poor sleepers.

    PubMed

    Pace-Schott, E F; Kaji, J; Stickgold, R; Hobson, J A

    1994-12-01

    The Nightcap is a home-based sleep monitoring device that reliably differentiates rapid eye movement sleep, nonrapid eye movement sleep and wake states using eyelid and body movement measurements. This study documents its capacity to measure differences in sleep latency and sleep efficiency between self-described good and poor sleepers drawn from a normal population. Ten self-described "good" sleepers and 11 self-described "poor" sleepers were selected from a pool of college students. These groups differed significantly on selection parameters and on subjective estimates of sleep quality obtained each morning during the study. Each subject wore the Nightcap at home for 12-17 nights. Statistically significant differences in Nightcap-measured sleep latency and sleep efficiency were obtained between groups using individual subject means. In individual subjects, Nightcap measurements of sleep latency were correlated with subjective estimates of sleep latency. Poor sleepers were less accurate in estimating their sleep onset latency than were good sleepers. The demonstrated sensitivity of the Nightcap to good and poor sleep in these normal subjects augurs well for its application in a clinical setting.

  14. Scattering and diffraction described using the momentum representation.

    PubMed

    Wennerström, Håkan

    2014-03-01

    We present a unified analysis of the scattering and diffraction of neutrons and photons using the momentum representation in a full quantum description. The scattering event is consistently seen as a transfer of momentum between the target and the probing particles. For an elastic scattering process the observed scattering pattern primarily provides information on the momentum distribution of the particles in the target that cause the scattering. Structural information then follows from the Fourier transform relation between momentum and positional state functions. This description is common to the scattering of neutrons, X-ray photons and photons of light. In the quantum description of the interaction between light and the electrons of the target, the scattering of X-rays is dominated by the first-order contribution from the vector potential squared. The interaction with the electron is local, and there is a close analogy, evident from the explicit quantitative expressions, with the neutron scattering case, where the nucleus-neutron interaction is fully local from a molecular perspective. For light scattering, on the other hand, the dominant contribution to the scattering comes from a second-order term linear in the vector potential. Thus the scattering of light involves correlations between electrons at different positions, giving a conceptual explanation of the qualitative difference between the scattering of high- and low-energy photons. However, at energies close to resonance conditions the scattering of high-energy photons is also affected by the second-order term, which results in so-called anomalous X-ray scattering/diffraction. It is also shown that, using the momentum representation, the phenomenon of diffraction is a direct consequence of the fact that for a system with periodic symmetry like a crystal the momentum distribution is quantized, which follows from Bloch's theorem. The momentum transfer to a probing particle is then also quantized, resulting in a

  15. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    PubMed Central

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we
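A toy example of the approach (hypothetical rules, not one of the paper's fitted clock models): a two-gene Boolean negative-feedback loop updated synchronously under a 12 h light : 12 h dark cycle. In the light phase the activator is clamped on; in darkness the loop free-runs:

```python
# Toy synchronous Boolean model of a two-gene negative-feedback loop with a
# light input: activator A is induced by light and repressed by R, while
# repressor R tracks A with a one-step delay. Purely illustrative rules.
def step(state, light):
    A, R = state
    A_next = (not R) or light
    R_next = A
    return (A_next, R_next)

def simulate(n_steps, period=24, photoperiod=12):
    state, trace = (False, False), []
    for t in range(n_steps):
        light = (t % period) < photoperiod   # 12 h light : 12 h dark
        state = step(state, light)
        trace.append(state)
    return trace

trace = simulate(48)
# Hours per 24-h cycle with A on: the loop settles into the same pattern
# in each cycle, i.e. it entrains to the light-dark driver.
a_on = [sum(a for a, _ in trace[c * 24:(c + 1) * 24]) for c in range(2)]
print(a_on)
```

The finite state space (here only four states) is what makes exhaustive fitting and analysis of such logic models tractable.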

  17. Universal spatial correlation functions for describing and reconstructing soil microstructure.

    PubMed

    Karsanina, Marina V; Gerke, Kirill M; Skvortsova, Elena B; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between the biologically active soil root zone and the atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses metrics such as pore sizes, pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin sections (2.21×2.21 cm2) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification
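The first of these descriptors can be sketched directly: for a binary pore/solid image, the two-point probability function S2(r) is the probability that two pixels a distance r apart both fall in the pore phase. A minimal estimator along image rows only, on a synthetic random image standing in for a segmented thin section:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary "thin section": 1 = pore, 0 = solid (random pixels stand
# in for a segmented soil image; a real image would have spatial structure).
img = (rng.random((128, 128)) < 0.3).astype(float)

def s2(image, r_max):
    """Two-point probability S2(r) of phase 1, estimated along the x-axis:
    the fraction of pixel pairs a distance r apart that are both pore."""
    out = []
    for r in range(r_max + 1):
        a = image[:, : image.shape[1] - r]
        b = image[:, r:]
        out.append((a * b).mean())
    return np.array(out)

curve = s2(img, 10)
phi = img.mean()                 # porosity
print(round(curve[0], 3), round(phi, 3))
# S2(0) equals the porosity; for spatially uncorrelated pixels S2(r) ~ phi^2
```

For a real soil image, the decay of S2(r) from phi toward phi² encodes the characteristic pore length scale that the reconstruction procedure targets.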

  18. Identifying and Describing Tutor Archetypes: The Pragmatist, the Architect, and the Surveyor

    ERIC Educational Resources Information Center

    Harootunian, Jeff A.; Quinn, Robert J.

    2008-01-01

    In this article, the authors identify and anecdotally describe three tutor archetypes: the pragmatist, the architect, and the surveyor. These descriptions, based on observations of remedial mathematics tutors at a land-grant university, shed light on a variety of philosophical beliefs regarding and pedagogical approaches to tutoring. An analysis…

  19. Finite volume approach for the instationary Cosserat rod model describing the spinning of viscous jets

    NASA Astrophysics Data System (ADS)

    Arne, Walter; Marheineke, Nicole; Meister, Andreas; Schiessl, Stefan; Wegener, Raimund

    2015-08-01

    The spinning of slender viscous jets can be asymptotically described by one-dimensional models that consist of systems of partial and ordinary differential equations. Whereas well-established string models only possess solutions for certain choices of parameters and configurations, the more sophisticated rod model is free of such restrictions. It can be considered an ɛ-regularized string model, but the presence of the slenderness ratio ɛ in the equations complicates its numerical treatment. We develop numerical schemes for fixed or enlarging (time-dependent) domains, using a finite volume approach in space with mixed central, up- and down-winded differences and stiffly accurate Radau methods for the time integration. For the first time, results of instationary simulations of a fixed or growing jet in a rotational spinning process are presented for arbitrary parameter ranges.

  20. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the NYU Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
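A minimal sketch of the functional form, with placeholder coefficients rather than the published parameter tables:

```python
# Sketch of the paper's functional form: for a given filter and redshift bin
# the K-correction is modeled as a quadratic in one observed color,
#   K(z, c) = a(z) + b(z)*c + d(z)*c**2,
# and the absolute magnitude follows from the distance modulus DM(z):
#   M = m - DM(z) - K(z, c).
# The coefficients below are illustrative placeholders, NOT the published
# values of Beare, Brown & Pimbblet.

def k_correction(color, coeffs):
    a, b, d = coeffs
    return a + b * color + d * color**2

def absolute_magnitude(m_apparent, dist_modulus, color, coeffs):
    return m_apparent - dist_modulus - k_correction(color, coeffs)

coeffs_r = (0.05, -0.10, 0.02)      # hypothetical r-band bin
M = absolute_magnitude(m_apparent=18.3, dist_modulus=37.0,
                       color=0.8, coeffs=coeffs_r)
print(round(M, 4))
```

The random error in M then propagates straightforwardly from the photometric error in the single color through the quadratic, which is what makes the error budget of this method easy to state.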

  1. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    PubMed Central

    2011-01-01

    Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
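The general forecast/update idea can be illustrated with a minimal stochastic (perturbed-observation) ensemble Kalman update on synthetic numbers; the paper itself uses the more elaborate LETKF, so this is only a sketch of the principle:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal stochastic (perturbed-observation) ensemble Kalman update:
# an ensemble of forecast states is nudged toward a new observation,
# weighted by the ensemble covariance versus the observation error.
n_ens, n_state = 50, 4
truth = np.array([1.0, 2.0, 3.0, 4.0])
obs_err = 0.1
H = np.eye(n_state)                  # observe the full state, for simplicity

# Prior ensemble: forecasts scattered around a biased mean
prior = truth + 0.5 + 0.4 * rng.standard_normal((n_ens, n_state))
y = truth + obs_err * rng.standard_normal(n_state)   # one noisy observation

# Kalman gain from the ensemble covariance: K = P H^T (H P H^T + R)^-1
P = np.cov(prior.T)
R = obs_err**2 * np.eye(n_state)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member against a perturbed copy of the observation
perturbed = y + obs_err * rng.standard_normal((n_ens, n_state))
posterior = prior + (perturbed - prior @ H.T) @ K.T

err_prior = np.abs(prior.mean(axis=0) - truth).mean()
err_post = np.abs(posterior.mean(axis=0) - truth).mean()
print(err_prior > err_post)   # the update pulls the ensemble toward the truth
```

In the tumor application the "state" is a gridded cell-density field and H maps it to imaging data, but the update algebra is the same.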

  2. A General Pairwise Interaction Model Provides an Accurate Description of In Vivo Transcription Factor Binding Sites

    PubMed Central

    Santolini, Marc; Mora, Thierry; Hakim, Vincent

    2014-01-01

    The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
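The PIM's additive energy can be sketched directly; the field and coupling parameters below are random placeholders, whereas a real PIM is fitted to observed binding sites by maximum entropy:

```python
import numpy as np

rng = np.random.default_rng(3)

BASES = "ACGT"
L = 8                                   # binding-site length

# Placeholder parameters: h[i, a] is the single-nucleotide field (as in a
# PWM); J[i, j, a, b] is the pairwise coupling between positions i < j.
h = rng.normal(size=(L, 4))
J = rng.normal(scale=0.2, size=(L, L, 4, 4))

def pim_energy(seq):
    """E(s) = sum_i h_i(s_i) + sum_{i<j} J_ij(s_i, s_j)."""
    idx = [BASES.index(c) for c in seq]
    e = sum(h[i, a] for i, a in enumerate(idx))
    e += sum(J[i, j, idx[i], idx[j]]
             for i in range(L) for j in range(i + 1, L))
    return e

def pwm_energy(seq):
    """The PWM model keeps only the independent single-site terms."""
    return sum(h[i, BASES.index(c)] for i, c in enumerate(seq))

s = "ACGTACGT"
print(round(pim_energy(s) - pwm_energy(s), 6))   # the pairwise correction
```

Setting all J to zero recovers the PWM exactly, which is why the PIM is a strict generalization of the standard model.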

  3. FragBag, an accurate representation of protein structure, retrieves structural neighbors from the entire PDB quickly and accurately.

    PubMed

    Budowski-Tal, Inbal; Nov, Yuval; Kolodny, Rachel

    2010-02-23

    Fast identification of protein structures that are similar to a specified query structure in the entire Protein Data Bank (PDB) is fundamental in structure and function prediction. We present FragBag, an ultrafast and accurate method for comparing protein structures. We describe a protein structure by the collection of its overlapping short contiguous backbone segments, and discretize this set using a library of fragments. Then, we succinctly represent the protein as a "bag of fragments", a vector that counts the number of occurrences of each fragment, and measure the similarity between two structures by the similarity between their vectors. Our representation has two additional benefits: (i) it can be used to construct an inverted index, for implementing a fast structural search engine over the entire PDB, and (ii) one can specify a structure as a collection of substructures, without combining them into a single structure; this is valuable for structure prediction, when there are reliable predictions of only parts of the protein. We use receiver operating characteristic curve analysis to quantify the success of FragBag in identifying neighbor candidate sets in a dataset of over 2,900 structures. The gold standard is the set of neighbors found by six state-of-the-art structural aligners. Our best FragBag library finds more accurate candidate sets than three other filter methods: SGM, PRIDE, and a method by Zotenko et al. More interestingly, FragBag performs on a par with the computationally expensive yet highly trusted structural aligners STRUCTAL and CE.
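The bag-of-fragments comparison can be sketched as follows. The per-segment fragment labels here are hypothetical inputs: real FragBag first assigns each overlapping backbone segment to its nearest library fragment by RMSD, a step omitted in this sketch:

```python
import numpy as np
from collections import Counter

LIBRARY_SIZE = 6   # a real FragBag library has hundreds of fragments

def bag_of_fragments(labels, library_size=LIBRARY_SIZE):
    """Count vector: occurrences of each library fragment along the chain."""
    counts = Counter(labels)
    return np.array([counts[i] for i in range(library_size)], dtype=float)

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical fragment assignments for three structures
protein_a = [0, 1, 1, 2, 3, 1, 0]
protein_b = [0, 1, 2, 2, 3, 1, 0]    # similar fold: nearly the same bag
protein_c = [4, 5, 5, 4, 5, 4, 5]    # different fold: disjoint fragments

bag_a, bag_b, bag_c = (bag_of_fragments(p)
                       for p in (protein_a, protein_b, protein_c))
print(cosine_similarity(bag_a, bag_b) > cosine_similarity(bag_a, bag_c))
```

Because the representation is just a sparse count vector, an inverted index over fragment identities gives sub-linear search over the whole PDB, which is the point of the method.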

  4. On the accurate simulation of tsunami wave propagation

    NASA Astrophysics Data System (ADS)

    Castro, C. E.; Käser, M.; Toro, E. F.

    2009-04-01

    A very important part of any tsunami early warning system is the numerical simulation of wave propagation in the open sea and close to geometrically complex coastlines, respecting bathymetric variations. Here we are interested in improving the numerical tools available to accurately simulate tsunami wave propagation on a Mediterranean basin scale. To this end, we must meet several targets: high-order numerical simulation in space and time, preservation of steady-state conditions to avoid spurious oscillations, and description of complex geometries due to bathymetry and coastlines. We use the Arbitrary accuracy DERivatives Riemann problem method together with the Finite Volume method (ADER-FV) over non-structured triangular meshes. The novelty of this method is the improvement of the ADER-FV scheme, introducing the well-balanced property when geometrical sources are considered, for unstructured meshes and arbitrary high-order accuracy. In previous work, Castro and Toro [1] mention that ADER-FV schemes approach the well-balanced condition asymptotically, which was true for the test case in [1]. However, new evidence [2] shows that for real-scale problems such as the Mediterranean basin, and considering realistic bathymetry such as ETOPO-2 [3], this asymptotic behavior is not enough. Under these realistic conditions the standard ADER-FV scheme fails to accurately describe the propagation of gravity waves without being contaminated by spurious oscillations, also known as numerical waves. The main problem is that at the discrete level, i.e. from a numerical point of view, the scheme does not correctly balance the influence of the fluxes and the sources. Numerical schemes that retain this balance are said to satisfy the well-balanced property or the exact C-property. This imbalance is reduced as we refine the spatial discretization or increase the order of the numerical method; however, the computational cost increases considerably this way.

  5. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    ERIC Educational Resources Information Center

    Deliyski, Dimitar D.; Hillman, Robert E.; Mehta, Daryush D.

    2015-01-01

    Purpose: The authors discuss the rationale behind the term "laryngeal high-speed videoendoscopy" to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method: Commentary on the advantages of using accurate and consistent terminology in the field of voice research is…

  6. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    A new time-accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M ≤ 10^-10 to supersonic speeds.

  7. The Laboratory Parenting Assessment Battery: Development and Preliminary Validation of an Observational Parenting Rating System

    ERIC Educational Resources Information Center

    Wilson, Sylia; Durbin, C. Emily

    2012-01-01

    Investigations of contributors to and consequences of the parent-child relationship require accurate assessment of the nature and quality of parenting. The present study describes the development and psychometric evaluation of the Laboratory Parenting Assessment Battery (Lab-PAB), an observational rating system that assesses parenting behaviors…

  8. Automatic classification and accurate size measurement of blank mask defects

    NASA Astrophysics Data System (ADS)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. The impact analysis of blank mask defects depends directly on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge required makes the manual defect review process arduous, in addition to making it sensitive to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools. Use of CDSEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT-based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance [4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  9. AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz

    SciTech Connect

    Perley, R. A.; Butler, B. J.

    2013-02-15

    We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than {approx}5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
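The polynomial form of such a flux density scale can be sketched as follows, with illustrative coefficients rather than the published values for any source:

```python
import numpy as np

# The flux density scale is expressed as a polynomial in log-frequency:
#   log10(S / Jy) = a0 + a1*log10(nu_GHz) + a2*log10(nu_GHz)**2 + ...
# The coefficients below are placeholders, not the published Perley-Butler
# values for any calibrator.

def flux_density(nu_ghz, coeffs):
    """Evaluate S(nu) in Jy from polynomial coefficients (a0, a1, ...)."""
    log_nu = np.log10(nu_ghz)
    log_s = sum(a * log_nu**k for k, a in enumerate(coeffs))
    return 10.0 ** log_s

coeffs_demo = (1.25, -0.46, -0.17)     # hypothetical steep-spectrum source
nu = np.array([1.0, 5.0, 50.0])        # GHz, within the 1-50 GHz range
print(np.round(flux_density(nu, coeffs_demo), 3))
```

A calibration pipeline evaluates this polynomial at each observing frequency to set the amplitude scale of the flux calibrator before transferring it to target sources.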

  10. Very Fast and Accurate Azimuth Disambiguation of Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Rudenko, G. V.; Anfinogentov, S. A.

    2014-05-01

    We present a method for fast and accurate azimuth disambiguation of vector magnetogram data regardless of the location of the analyzed region on the solar disk. The direction of the transverse field is determined with the principle of minimum deviation of the field from the reference (potential) field. The new disambiguation (NDA) code is examined on the well-known models of Metcalf et al. (Solar Phys. 237, 267, 2006) and Leka et al. (Solar Phys. 260, 83, 2009), and on an artificial model based on the observed magnetic field of AR 10930 (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We compare Hinode/SOT-SP vector magnetograms of AR 10930 disambiguated with three codes: the NDA code, the nonpotential magnetic-field calculation (NPFC: Georgoulis, Astrophys. J. Lett. 629, L69, 2005), and the spherical minimum-energy method (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We then illustrate the performance of NDA on SDO/HMI full-disk magnetic-field observations. We show that our new algorithm is more than four times faster than the fastest algorithm that provides the disambiguation with a satisfactory accuracy (NPFC). At the same time, its accuracy is similar to that of the minimum-energy method (a very slow algorithm). In contrast to other codes, the NDA code maintains high accuracy when the region to be analyzed is very close to the limb.
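The core of a minimum-deviation disambiguation can be sketched as follows: each pixel's transverse field is known only up to a 180° flip, and the sign is chosen to minimize the angle to the reference (potential) field. This illustrates the principle only, not the NDA code:

```python
import numpy as np

rng = np.random.default_rng(4)

ny, nx = 64, 64
# Hypothetical reference (potential) transverse field, shape (ny, nx, 2)
b_ref = np.stack(np.meshgrid(np.linspace(-1, 1, nx),
                             np.linspace(-1, 1, ny)), axis=-1) + 0.1

# "Measured" field: the true field with a random 180-degree flip per pixel
flips = rng.choice([-1.0, 1.0], size=(ny, nx, 1))
b_meas = b_ref * flips

def disambiguate(b_trans, b_reference):
    """Flip pixels where the transverse field opposes the reference field,
    i.e. choose the azimuth branch with the smaller angular deviation."""
    dot = np.sum(b_trans * b_reference, axis=-1, keepdims=True)
    return np.where(dot < 0, -b_trans, b_trans)

b_fixed = disambiguate(b_meas, b_ref)
print(np.allclose(b_fixed, b_ref))
```

Because the decision is a single dot product per pixel, this kind of criterion is what makes the approach fast enough for full-disk magnetograms.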

  11. Detailed observations of the source of terrestrial narrowband electromagnetic radiation

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.

    1982-01-01

    Detailed observations are presented of a region near the terrestrial plasmapause where narrowband electromagnetic radiation (previously called escaping nonthermal continuum radiation) is being generated. These observations show a direct correspondence between the narrowband radio emissions and electron cyclotron harmonic waves near the upper hybrid resonance frequency. In addition, electromagnetic radiation propagating in the Z-mode is observed in the source region which provides an extremely accurate determination of the electron plasma frequency and, hence, density profile of the source region. The data strongly suggest that electrostatic waves and not Cerenkov radiation are the source of the banded radio emissions and define the coupling which must be described by any viable theory.

  12. Seeing and Being Seen: Predictors of Accurate Perceptions about Classmates’ Relationships

    PubMed Central

    Neal, Jennifer Watling; Neal, Zachary P.; Cappella, Elise

    2015-01-01

    This study examines predictors of observer accuracy (i.e. seeing) and target accuracy (i.e. being seen) in perceptions of classmates’ relationships in a predominantly African American sample of 420 second through fourth graders (ages 7 – 11). Girls, children in higher grades, and children in smaller classrooms were more accurate observers. Targets (i.e. pairs of children) were more accurately observed when they occurred in smaller classrooms of higher grades and involved same-sex, high-popularity, and similar-popularity children. Moreover, relationships between pairs of girls were more accurately observed than relationships between pairs of boys. As a set, these findings suggest the importance of both observer and target characteristics for children’s accurate perceptions of classroom relationships. Moreover, the substantial variation in observer accuracy and target accuracy has methodological implications for both peer-reported assessments of classroom relationships and the use of stochastic actor-based models to understand peer selection and socialization processes. PMID:26347582

  13. Comparative evaluation of mathematical functions to describe growth and efficiency of phosphorus utilization in growing pigs.

    PubMed

    Kebreab, E; Schulin-Zeuthen, M; Lopez, S; Soler, J; Dias, R S; de Lange, C F M; France, J

    2007-10-01

    Success of pig production depends on maximizing return over feed costs and addressing potential nutrient pollution to the environment. Mathematical modeling has been used to describe many important aspects of inputs and outputs of pork production. This study was undertaken to compare 4 mathematical functions for the best fit in terms of describing specific data sets on pig growth and, in a separate experiment, to compare these 4 functions for describing P utilization for growth. Two data sets with growth data were used to conduct growth analysis and another data set was used for P efficiency analysis. All data sets were constructed from independent trials that measured BW, age, and intake. Four growth functions representing diminishing returns (monomolecular), sigmoidal with a fixed point of inflection (Gompertz), and sigmoidal with a variable point of inflection (Richards and von Bertalanffy) were used. Meta-analysis of the data was conducted to identify the most appropriate functions for growth and P utilization. Based on Bayesian information criteria, the Richards equation described the BW vs. age data best. The additional parameter of the Richards equation was necessary because the data required a lower point of inflection (138 d) than the Gompertz, with a fixed point of inflection at 1/e times the final BW (189 d), could accommodate. Lack of flexibility in the Gompertz equation was a limitation to accurate prediction. The monomolecular equation was best at determining efficiencies of P utilization for BW gain compared with the sigmoidal functions. The parameter estimate for the rate constant in all functions decreased as available P intake increased. Average efficiencies during different stages of growth were calculated and offer insight into targeting stages where high feed (nutrient) input is required and when adjustments are needed to accommodate the loss of efficiency and the reduction of potential pollution problems. It is recommended that the Richards equation be used to describe pig growth data.
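The growth functions compared above can be written down directly. The parameterizations below are common textbook forms (one of several equivalent choices, not necessarily the paper's exact parameterization), with wf the final BW, w0 the initial BW, k a rate constant, and n the Richards shape parameter:

```python
import math

def monomolecular(t, wf, w0, k):
    """Diminishing-returns growth; no inflection point."""
    return wf - (wf - w0) * math.exp(-k * t)

def gompertz(t, wf, w0, k):
    """Sigmoidal growth; inflection fixed at wf/e."""
    return wf * math.exp(math.log(w0 / wf) * math.exp(-k * t))

def richards(t, wf, w0, k, n):
    """Sigmoidal growth with a shape parameter n that moves the
    inflection point to wf * (1 + n)**(-1/n)."""
    return wf / (1.0 + ((wf / w0) ** n - 1.0) * math.exp(-k * t)) ** (1.0 / n)

# The inflection weight is fixed at wf/e (~0.368 * wf) for Gompertz, but
# adjustable via n for Richards (e.g. wf/2 when n = 1); this extra degree
# of freedom is what allows an earlier inflection age to be fitted.
```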

  14. Shipboard Weather Observation.

    ERIC Educational Resources Information Center

    Palmaccio, Richard J.

    1983-01-01

    Details of how observers on a moving ship can furnish an accurate report of wind velocity are provided. A method employing vector addition and some trigonometry is covered. Wind velocity is initially indicated through an anemometer and a wind vane. Ships are urged to radio weather data. (MP)

  15. Accurate transition rates for intercombination lines of singly ionized nitrogen

    NASA Astrophysics Data System (ADS)

    Tayal, S. S.

    2011-01-01

    The transition energies and rates for the 2s22p2 3P1,2-2s2p3 5S2o and 2s22p3s-2s22p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p3 1,3P1o and 2s22p3s 1,3P1o levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fraction, and lifetimes have been compared with previous calculations and experiments.

  16. Accurate transition rates for intercombination lines of singly ionized nitrogen

    SciTech Connect

    Tayal, S. S.

    2011-01-15

    The transition energies and rates for the 2s22p2 3P1,2-2s2p3 5S2o and 2s22p3s-2s22p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p3 1,3P1o and 2s22p3s 1,3P1o levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fraction, and lifetimes have been compared with previous calculations and experiments.

  17. Accurate de novo design of hyperstable constrained peptides

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Gaurav; Mulligan, Vikram Khipple; Bahl, Christopher D.; Gilmore, Jason M.; Harvey, Peta J.; Cheneval, Olivier; Buchko, Garry W.; Pulavarti, Surya V. S. R. K.; Kaas, Quentin; Eletsky, Alexander; Huang, Po-Ssu; Johnsen, William A.; Greisen, Per, Jr.; Rocklin, Gabriel J.; Song, Yifan; Linsky, Thomas W.; Watkins, Andrew; Rettie, Stephen A.; Xu, Xianzhong; Carter, Lauren P.; Bonneau, Richard; Olson, James M.; Coutsias, Evangelos; Correnti, Colin E.; Szyperski, Thomas; Craik, David J.; Baker, David

    2016-10-01

    Naturally occurring, pharmacologically active peptides constrained with covalent crosslinks generally have shapes that have evolved to fit precisely into binding pockets on their targets. Such peptides can have excellent pharmaceutical properties, combining the stability and tissue penetration of small-molecule drugs with the specificity of much larger protein therapeutics. The ability to design constrained peptides with precisely specified tertiary structures would enable the design of shape-complementary inhibitors of arbitrary targets. Here we describe the development of computational methods for accurate de novo design of conformationally restricted peptides, and the use of these methods to design 18-47 residue, disulfide-crosslinked peptides, a subset of which are heterochiral and/or N-C backbone-cyclized. Both genetically encodable and non-canonical peptides are exceptionally stable to thermal and chemical denaturation, and 12 experimentally determined X-ray and NMR structures are nearly identical to the computational design models. The computational design methods and stable scaffolds presented here provide the basis for development of a new generation of peptide-based drugs.

  18. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest for understanding biological flight mechanics and for their application to Micro Air Vehicles (MAVs), which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that only weigh a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow, which requires thick laser sheets and short interframe times. To quantify and address these challenges, we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  19. Raman Spectroscopy as an Accurate Probe of Defects in Graphene

    NASA Astrophysics Data System (ADS)

    Rodriguez-Nieva, Joaquin; Barros, Eduardo; Saito, Riichiro; Dresselhaus, Mildred

    2014-03-01

    Raman spectroscopy has proved to be an invaluable non-destructive technique that allows us to obtain intrinsic information about graphene. Furthermore, defect-induced Raman features, namely the D and D' bands, have previously been used to assess the purity of graphitic samples. However, quantitative studies of the signatures of the different types of defects in the Raman spectra are still an open problem. Experimental results already suggest that the Raman intensity ratio ID/ID' may allow us to identify the nature of the defects. We study from a theoretical point of view the power and limitations of Raman spectroscopy in the study of defects in graphene. We derive an analytic model that describes the Double Resonance Raman process of disordered graphene samples, and which explicitly shows the role played by both the defect-dependent parameters and the experimentally controlled variables. We compare our model with previous Raman experiments and use it to guide new ways in which defects in graphene can be accurately probed with Raman spectroscopy. We acknowledge support from NSF grant DMR1004147.

  20. Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.

    PubMed

    Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong

    2015-09-01

    Accuracy, validity, and the lack of positional information relating dental roots to the jaw are key problems in tooth arrangement technology. This paper describes a newly developed virtual, personalized, and accurate tooth arrangement system based on complete information about the dental roots and skull. Firstly, a feature constraint database of a 3D teeth model is established. Secondly, for computer simulation of tooth movement, reference planes and lines are defined from anatomical reference points. A mathematical model for matching tooth patterns and the principles of rigid-body pose transformation are fully utilized. The positional relation between dental root and alveolar bone is considered during the design process. Finally, the relative pose relationships among the teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system arranges abnormal teeth well and is sufficiently flexible. The positional relation between root and jaw is well preserved. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.
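The rigid-body pose transformation used for simulated tooth movement amounts to applying a rotation plus a translation to the tooth's point cloud. A minimal sketch, restricted to rotation about the vertical axis and with hypothetical coordinates:

```python
import math

def transform_tooth(points, yaw_deg, translation):
    """Apply a rigid-body pose change (rotation about the vertical z
    axis plus a translation) to a tooth's 3D point cloud -- the basic
    operation behind simulated tooth movement."""
    a = math.radians(yaw_deg)
    c, s = math.cos(a), math.sin(a)
    tx, ty, tz = translation
    return [
        (c * x - s * y + tx, s * x + c * y + ty, z + tz)
        for x, y, z in points
    ]

# Hypothetical crown landmarks, rotated 90 degrees and shifted along x.
crown = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
moved = transform_tooth(crown, 90.0, (0.5, 0.0, 0.0))
```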

  1. Accurate de novo design of hyperstable constrained peptides.

    PubMed

    Bhardwaj, Gaurav; Mulligan, Vikram Khipple; Bahl, Christopher D; Gilmore, Jason M; Harvey, Peta J; Cheneval, Olivier; Buchko, Garry W; Pulavarti, Surya V S R K; Kaas, Quentin; Eletsky, Alexander; Huang, Po-Ssu; Johnsen, William A; Greisen, Per Jr; Rocklin, Gabriel J; Song, Yifan; Linsky, Thomas W; Watkins, Andrew; Rettie, Stephen A; Xu, Xianzhong; Carter, Lauren P; Bonneau, Richard; Olson, James M; Coutsias, Evangelos; Correnti, Colin E; Szyperski, Thomas; Craik, David J; Baker, David

    2016-10-20

    Naturally occurring, pharmacologically active peptides constrained with covalent crosslinks generally have shapes that have evolved to fit precisely into binding pockets on their targets. Such peptides can have excellent pharmaceutical properties, combining the stability and tissue penetration of small-molecule drugs with the specificity of much larger protein therapeutics. The ability to design constrained peptides with precisely specified tertiary structures would enable the design of shape-complementary inhibitors of arbitrary targets. Here we describe the development of computational methods for accurate de novo design of conformationally restricted peptides, and the use of these methods to design 18-47 residue, disulfide-crosslinked peptides, a subset of which are heterochiral and/or N-C backbone-cyclized. Both genetically encodable and non-canonical peptides are exceptionally stable to thermal and chemical denaturation, and 12 experimentally determined X-ray and NMR structures are nearly identical to the computational design models. The computational design methods and stable scaffolds presented here provide the basis for development of a new generation of peptide-based drugs.

  2. How accurate are the wrist-based heart rate monitors during walking and running activities? Are they accurate enough?

    PubMed Central

    An, Hyun-Sung; Dinkel, Danae M; Noble, John M; Lee, Jung-Min

    2016-01-01

    Background Heart rate (HR) monitors are valuable devices for fitness-orientated individuals. There has been a vast influx of optical sensing blood flow monitors claiming to provide accurate HR during physical activities. These monitors are worn on the arm and wrist to detect HR with photoplethysmography (PPG) techniques. Little is known about the validity of these wearable activity trackers. Aim Validate the Scosche Rhythm (SR), Mio Alpha (MA), Fitbit Charge HR (FH), Basis Peak (BP), Microsoft Band (MB), and TomTom Runner Cardio (TT) wireless HR monitors. Methods 50 volunteers (males: n=32, age 19–43 years; females: n=18, age 19–38 years) participated. All monitors were worn simultaneously in a randomised configuration. The Polar RS400 HR chest strap was the criterion measure. A treadmill protocol of one 30 min bout of continuous walking and running at 3.2, 4.8, 6.4, 8.0, and 9.6 km/h (5 min at each protocol speed) was completed, with HR manually recorded every minute. Results For group comparisons, the mean absolute percentage error values were 3.3%, 3.6%, 4.0%, 4.6%, 4.8%, and 6.2% for TT, BP, SR, MA, MB, and FH, respectively. The Pearson product-moment correlation coefficients (r) were r=0.959 (TT), r=0.956 (MB), r=0.954 (BP), r=0.933 (FH), r=0.930 (SR), and r=0.929 (MA). Results from 95% equivalency testing showed the monitors to be equivalent to the criterion HR (±10% equivalence zone: 98.15–119.96). Conclusions The results demonstrate that the wearable activity trackers provide an accurate measurement of HR during walking and running activities. PMID:27900173
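The two headline statistics, mean absolute percentage error against the chest-strap criterion and the Pearson product-moment correlation coefficient, are straightforward to compute. The heart-rate values below are made up for illustration, not the study's data:

```python
def mape(ref, test):
    """Mean absolute percentage error of test readings vs a criterion."""
    return 100.0 * sum(abs(t - r) / r for r, t in zip(ref, test)) / len(ref)

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical readings: chest-strap criterion vs a wrist monitor.
hr_ref = [90, 100, 110, 130, 150]
hr_test = [92, 98, 113, 128, 154]
```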

  3. Observation of the Earth's nutation by the VLBI: how accurate is the geophysical signal

    NASA Astrophysics Data System (ADS)

    Gattano, César; Lambert, Sébastien B.; Bizouard, Christian

    2016-09-01

    We compare nutation time series determined by several International VLBI Service for Geodesy and Astrometry (IVS) analysis centers. These series were made available through the International Earth Rotation and Reference Systems Service (IERS). We adjust the amplitudes of the main nutations, including the free motion associated with the free core nutation (FCN). Then, we discuss the results in terms of the physics of the Earth's interior. We find consistent FCN signals in all of the time series, and we provide corrections to the IAU 2000A series for a number of nutation terms, with realistic errors. It appears that the analysis configuration or the software packages used by each analysis center introduce an error comparable to the amplitude of the prominent corrections. We show that the inconsistencies between series have significant consequences for our understanding of the Earth's deep interior, especially for the free inner core resonance: they induce an uncertainty of about 0.5 day on the FCN period, and of more than 1000 days on the free inner core nutation (FICN) period, comparable to the estimated period itself. Though the FCN parameters are not strongly affected, a 100% error shows up for the FICN parameters and prevents any geophysical conclusions from being drawn.

  4. Use of negative binomial distribution to describe the presence of Anisakis in Thyrsites atun.

    PubMed

    Peña-Rehbein, Patricio; De los Ríos-Escalante, Patricio

    2012-01-01

    Nematodes of the genus Anisakis have marine fishes as intermediate hosts. One of these hosts is Thyrsites atun, an important fishery resource in Chile between 38 and 41° S. This paper describes the frequency and number of Anisakis nematodes in the internal organs of Thyrsites atun. An analysis based on spatial distribution models showed that the parasites tend to be clustered. The variation in the number of parasites per host could be described by the negative binomial distribution. The maximum observed number of parasites was nine parasites per host. The environmental and zoonotic aspects of the study are also discussed.
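The clustering diagnosis above rests on the sample variance exceeding the mean, with the negative binomial dispersion parameter k obtainable by the method of moments as k = m^2 / (v - m). The per-host counts below are hypothetical, chosen within the study's observed 0-9 range:

```python
def neg_binom_moments(counts):
    """Method-of-moments fit of a negative binomial to per-host
    parasite counts: sample mean m, sample variance v, and dispersion
    k = m**2 / (v - m).  v > m indicates an aggregated (clustered)
    distribution; v ~ m would suggest random (Poisson) dispersion."""
    n = len(counts)
    m = sum(counts) / n
    v = sum((c - m) ** 2 for c in counts) / (n - 1)
    k = m * m / (v - m) if v > m else float("inf")
    return m, v, k

# Hypothetical clustered counts: most hosts carry none, a few carry many.
counts = [0, 0, 0, 1, 0, 2, 0, 9, 0, 3, 0, 0, 1, 5, 0]
m, v, k = neg_binom_moments(counts)
# v > m here, so the parasites are aggregated rather than randomly dispersed.
```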

  5. Using Artifacts to Describe Instruction: Lessons Learned from Studying Reform-Oriented Instruction in Middle School Mathematics and Science. CSE Technical Report 705

    ERIC Educational Resources Information Center

    Borko, Hilda; Kuffner, Karin L.; Arnold, Suzanne C.; Creighton, Laura; Stecher, Brian M.; Martinez, Felipe; Barnes, Dionne; Gilbert, Mary Lou

    2007-01-01

    It is important to be able to describe instructional practices accurately in order to support research on "what works" in education and professional development as a basis for efforts to improve practice. This report describes a project to develop procedures for characterizing classroom practices in mathematics and science on the …

  6. Describing Myxococcus xanthus aggregation using Ostwald ripening equations for thin liquid films.

    PubMed

    Bahar, Fatmagül; Pratt-Szeliga, Philip C; Angus, Stuart; Guo, Jiaye; Welch, Roy D

    2014-09-18

    When starved, a swarm of millions of Myxococcus xanthus cells coordinate their movement from outward swarming to inward coalescence. The cells then execute a synchronous program of multicellular development, arranging themselves into dome shaped aggregates. Over the course of development, about half of the initial aggregates disappear, while others persist and mature into fruiting bodies. This work seeks to develop a quantitative model of aggregation that accurately simulates which aggregates will disappear and which will persist. We analyzed time-lapse movies of M. xanthus development, modeled aggregation using the equations that describe Ostwald ripening of droplets in thin liquid films, and predicted the disappearance and persistence of aggregates with an average accuracy of 85%. We then experimentally validated a prediction that is fundamental to this model by tracking individual fluorescent cells as they moved between aggregates and demonstrating that cell movement towards and away from aggregates correlates with aggregate disappearance. Describing development through this model may limit the number and type of molecular genetic signals needed to complete M. xanthus development, and it provides numerous additional testable predictions.
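The core prediction, that aggregates below a critical size dissolve while larger ones persist and grow, can be sketched with a classic LSW-style ripening law. The rate equation and the use of the mean radius as the critical size are simplifying assumptions for illustration, not the thin-film equations adapted in the paper:

```python
def ripen(radii, d_coeff=1.0, dt=0.01, steps=2000):
    """Euler-integrate a classic LSW-style ripening law,
    dR/dt = (D / R) * (1/Rc - 1/R), taking the critical radius Rc as
    the current mean radius: aggregates with R < Rc shrink and
    disappear, while those with R > Rc persist and grow."""
    radii = list(radii)
    for _ in range(steps):
        alive = [r for r in radii if r > 0]
        if not alive:
            break
        rc = sum(alive) / len(alive)  # critical radius = mean of survivors
        radii = [
            max(0.0, r + dt * (d_coeff / r) * (1.0 / rc - 1.0 / r)) if r > 0 else 0.0
            for r in radii
        ]
    return radii

final = ripen([0.5, 1.0, 1.5, 2.0])
# The smaller aggregates vanish one by one while the largest keeps growing.
```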

  7. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
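The dual-source idea can be sketched as noise injected both before and after a sigmoidal recruitment curve. The sigmoid form, noise distributions, and parameter values below are illustrative assumptions, not the fitted model from the paper:

```python
import math
import random

def mep_amplitude(strength, rng, y_max=5.0, x_mid=50.0, slope=4.0,
                  sigma_in=2.0, sigma_out=0.25):
    """One simulated MEP: Gaussian variability enters additively before
    the sigmoidal recruitment nonlinearity (excitation side) and
    log-normally after it (output side)."""
    x_eff = strength + rng.gauss(0.0, sigma_in)             # pre-sigmoid noise
    recruited = y_max / (1.0 + math.exp(-(x_eff - x_mid) / slope))
    return recruited * math.exp(rng.gauss(0.0, sigma_out))  # post-sigmoid noise

rng = random.Random(0)
samples_mid = [mep_amplitude(50.0, rng) for _ in range(2000)]  # steep midpoint
samples_hi = [mep_amplitude(80.0, rng) for _ in range(2000)]   # saturated plateau
# Near the midpoint both sources spread (and skew) the amplitudes; on the
# plateau the input-side noise is squashed by the flat sigmoid and mainly
# the output-side source remains, so the distribution shape changes along
# the IO curve, as the dual-source model predicts.
```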

  8. Accurate and reproducible detection of proteins in water using an extended-gate type organic transistor biosensor

    NASA Astrophysics Data System (ADS)

    Minamiki, Tsukuru; Minami, Tsuyoshi; Kurita, Ryoji; Niwa, Osamu; Wakida, Shin-ichi; Fukuda, Kenjiro; Kumaki, Daisuke; Tokito, Shizuo

    2014-06-01

    In this Letter, we describe an accurate antibody detection method using a fabricated extended-gate type organic field-effect-transistor (OFET), which can be operated at below 3 V. The protein-sensing portion of the designed device is the gate electrode functionalized with streptavidin. Streptavidin possesses high molecular recognition ability for biotin, which specifically allows for the detection of biotinylated proteins. Here, we attempted to detect biotinylated immunoglobulin G (IgG) and observed a shift of threshold voltage of the OFET upon the addition of the antibody in an aqueous solution with a competing bovine serum albumin interferent. The detection limit for the biotinylated IgG was 8 nM, which indicates the potential utility of the designed device in healthcare applications.

  9. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    PubMed Central

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
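The dependence of the pairwise-distance distribution on dimensionality can be illustrated with a toy version of such an estimator: model the distances as those on a D-dimensional hypersphere, where p(r) is proportional to sin^(D-1)(r/R), and solve for D from the histogram ratio between half the modal distance and the mode. This is a simplified stand-in for the published fitting procedure, validated here on a manifold of known dimension:

```python
import math
import random

def estimate_id(distances, bins=40):
    """Toy distance-distribution ID estimate: assume the pairwise
    geodesic distances follow the D-sphere form p(r) ~ sin^(D-1)(r/R),
    whose height ratio between r_max/2 and the mode r_max is
    sin(pi/4)**(D-1); invert that ratio for D."""
    hi = max(distances)
    counts = [0] * bins
    for d in distances:
        counts[min(bins - 1, int(bins * d / hi))] += 1
    i_max = counts.index(max(counts))
    r_max = (i_max + 0.5) * hi / bins
    i_half = int(bins * (r_max / 2.0) / hi)
    ratio = counts[i_half] / counts[i_max]
    return 1.0 + math.log(ratio) / math.log(math.sin(math.pi / 4.0))

# Points on the unit 2-sphere; exact geodesic distance is arccos(dot).
rng = random.Random(1)
pts = []
for _ in range(400):
    v = [rng.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    pts.append([c / n for c in v])
dists = [
    math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q)))))
    for i, p in enumerate(pts) for q in pts[i + 1:]
]
est = estimate_id(dists)  # should come out close to 2
```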

  10. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    NASA Astrophysics Data System (ADS)

    Granata, Daniele; Carnevale, Vincenzo

    2016-08-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset.

  11. Describing Surfaces.

    DTIC Science & Technology

    1985-01-01

    constant, then it is made explicit. For example, the asymptote that marks the smooth join of the bulb and the stem of the lightbulb in Figure 1, as...illustrates the representation we are aiming at. The stem of the lightbulb is determined to be cylindrical, because it is ruled and because it is a surface...and threaded end. This distinguishes the diameters of each that are collinear with the stem axis, showing that the lightbulb is a surface of

  12. ALOS-PALSAR multi-temporal observation for describing land use and forest cover changes in Malaysia

    NASA Astrophysics Data System (ADS)

    Avtar, R.; Suzuki, R.; Ishii, R.; Kobayashi, H.; Nagai, S.; Fadaei, H.; Hirata, R.; Suhaili, A. B.

    2012-12-01

    The establishment of plantations in the carbon-rich peatlands of Southeast Asia has increased over the past decade. The need to support development in countries such as Malaysia is reflected in a high rate of conversion of forested areas to agricultural land use, in particular oil palm plantations. Use of optical data to monitor changes in peatland forests is difficult because of the high cloudiness in tropical regions. Synthetic Aperture Radar (SAR) based remote sensing can potentially be used to monitor changes in such forested landscapes. In this study, we demonstrate the capability of multi-temporal Fine-Beam Dual (FBD) data from the Phased Array L-band Synthetic Aperture Radar (PALSAR) to detect conversion of peatland forest cover to other land uses such as oil palm plantation. The backscattering properties of the radar were evaluated to estimate changes in the forest cover. Temporal analysis of PALSAR FBD data shows that conversion of peatland forest to oil palm can be detected by analyzing changes in the values of σ°HH and σ°HV. Areas under peat forest are characterized by high values of σ°HH (-7.89 dB) and σ°HV (-12.13 dB). The value of σ°HV decreased by about 2-4 dB after conversion of peatland to a plantation area, and the ratio σ°HH/σ°HV increased. Changes in σ°HV are more prominent for identifying peatland conversion than changes in σ°HH. The results indicate the potential of PALSAR to detect peatland forest conversion by thresholding σ°HV or σ°HH/σ°HV, for monitoring changes in peatland forest. This would improve our understanding of the temporal change and its effect on the peatland forest ecosystem.
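The thresholding idea reduces to flagging pixels whose HV backscatter drops by more than a chosen cutoff between acquisitions. The 2 dB threshold and the pixel values below are illustrative, not calibrated values from the study:

```python
def detect_conversion(sigma_hv_before_db, sigma_hv_after_db, threshold_db=2.0):
    """Flag pixels as peat-forest-to-plantation conversion when the
    HV backscatter drops by more than a threshold between the two
    acquisition dates (the study reports a 2-4 dB decrease after
    conversion; the 2 dB cutoff here is an illustrative choice)."""
    return [
        (before - after) > threshold_db
        for before, after in zip(sigma_hv_before_db, sigma_hv_after_db)
    ]

before = [-12.1, -12.3, -11.9, -12.0]  # intact peat forest (HV near -12 dB)
after = [-12.2, -15.4, -14.8, -11.8]   # two pixels dropped by more than 2 dB
changed = detect_conversion(before, after)
```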

  13. How Clean Are Hotel Rooms? Part I: Visual Observations vs. Microbiological Contamination.

    PubMed

    Almanza, Barbara A; Kirsch, Katie; Kline, Sheryl Fried; Sirsat, Sujata; Stroia, Olivia; Choi, Jin Kyung; Neal, Jay

    2015-01-01

    Current evidence of hotel room cleanliness is based on observation rather than empirically based microbial assessment. The purpose of the study described here was to determine if observation provides an accurate indicator of cleanliness. Results demonstrated that visual assessment did not accurately predict microbial contamination. Although testing standards have not yet been established for hotel rooms and will be evaluated in Part II of the authors' study, potential microbial hazards included the sponge and mop (housekeeping cart), toilet, bathroom floor, bathroom sink, and light switch. Hotel managers should increase cleaning in key areas to reduce guest exposure to harmful bacteria.

  14. Expected IPS variations due to a disturbance described by a 3-D MHD model

    NASA Technical Reports Server (NTRS)

    Tappin, S. J.; Dryer, M.; Han, S. M.; Wu, S. T.

    1988-01-01

    The variations of interplanetary scintillation due to a disturbance described by a three-dimensional, time-dependent, MHD model of the interplanetary medium are calculated. The resulting simulated IPS maps are compared with observations of real disturbances and it is found that there is some qualitative agreement. It is concluded that the MHD model with a more realistic choice of input conditions would probably provide a useful description of many interplanetary disturbances.

  15. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    SciTech Connect

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
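    The ordinal-regression idea behind a model like Basophile can be illustrated with a toy predictor: a linear score over fragment features is compared against ordered cut points, and the predicted maximum fragment charge is one plus the number of cut points exceeded. The weights, cut points, and feature choice below are invented for illustration; they are not Basophile's fitted parameters.

```python
# Toy ordinal-regression prediction: one linear score, ordered thresholds.
# Features: fragment length and number of basic residues (Arg/Lys/His),
# echoing the abstract's "fragment size and basic residue distribution".

def predict_fragment_charge(fragment_length, n_basic_residues,
                            weights=(0.05, 0.9), cutpoints=(0.8, 2.0, 3.5)):
    score = weights[0] * fragment_length + weights[1] * n_basic_residues
    # Ordinal prediction: count ordered cut points crossed, minimum charge 1.
    return 1 + sum(score > c for c in cutpoints)

print(predict_fragment_charge(5, 0))   # short fragment, no basic residues -> 1
print(predict_fragment_charge(12, 2))  # longer fragment, two basic residues -> 3
```

    Because prediction reduces to evaluating one linear equation, a rule of this shape is cheap enough for on-the-fly use inside a database search loop, which is the speed argument the abstract makes.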

  16. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  17. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present-day O3 radiative forcing produced by models.

  18. Accurate quantification of supercoiled DNA by digital PCR.

    PubMed

    Dong, Lianhua; Yoo, Hee-Bong; Wang, Jing; Park, Sang-Ryoul

    2016-04-11

    Digital PCR (dPCR), an enumeration-based quantification method, is capable of quantifying DNA copy number without the help of standards. However, it can generate false results when the PCR conditions are not optimized. A recent international comparison (CCQM P154) showed that most laboratories significantly underestimated the concentration of supercoiled plasmid DNA by dPCR. Supercoiled DNAs are therefore usually linearized before dPCR to avoid such underestimation. The present study was conducted to overcome this problem. In the bilateral comparison, the National Institute of Metrology, China (NIM) optimized and applied dPCR for supercoiled DNA determination, whereas the Korea Research Institute of Standards and Science (KRISS) prepared the unknown samples and quantified them by flow cytometry. In this study, several factors, such as the choice of PCR master mix, fluorescent label, and primer positions, were evaluated for quantifying supercoiled DNA by dPCR. This work confirmed that a 16S PCR master mix avoided poor amplification of the supercoiled DNA, whereas HEX labels on the dPCR probe resulted in robust amplification curves. Optimizing the dPCR assay based on these two observations resulted in accurate quantification of supercoiled DNA without preanalytical linearization. The result was validated in close agreement (101-113%) with the result from flow cytometry.
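    The enumeration principle behind dPCR rests on the standard Poisson correction, λ = -ln(1 - p), where p is the fraction of positive partitions and λ the mean copies per partition. A minimal sketch of the calculation (partition counts and volume are invented for illustration):

```python
import math

# Standard dPCR quantification: score partitions positive/negative, recover
# mean copies per partition with the Poisson correction, then divide by the
# partition volume to get a concentration. Numbers below are illustrative.

def dpcr_concentration(n_positive, n_total, partition_volume_ul):
    p = n_positive / n_total
    lam = -math.log(1.0 - p)          # mean copies per partition
    return lam / partition_volume_ul  # copies per microlitre

# 12,000 of 20,000 partitions positive, 0.85 nL (8.5e-4 uL) partitions:
print(round(dpcr_concentration(12000, 20000, 8.5e-4), 1))  # ~1078 copies/uL
```

    The correction matters because a positive partition may contain more than one copy; counting positives directly would underestimate the concentration, which is exactly the kind of bias the study addresses for supercoiled templates.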

  19. Accurate quantification of supercoiled DNA by digital PCR

    PubMed Central

    Dong, Lianhua; Yoo, Hee-Bong; Wang, Jing; Park, Sang-Ryoul

    2016-01-01

    Digital PCR (dPCR), an enumeration-based quantification method, is capable of quantifying DNA copy number without the help of standards. However, it can generate false results when the PCR conditions are not optimized. A recent international comparison (CCQM P154) showed that most laboratories significantly underestimated the concentration of supercoiled plasmid DNA by dPCR. Supercoiled DNAs are therefore usually linearized before dPCR to avoid such underestimation. The present study was conducted to overcome this problem. In the bilateral comparison, the National Institute of Metrology, China (NIM) optimized and applied dPCR for supercoiled DNA determination, whereas the Korea Research Institute of Standards and Science (KRISS) prepared the unknown samples and quantified them by flow cytometry. In this study, several factors, such as the choice of PCR master mix, fluorescent label, and primer positions, were evaluated for quantifying supercoiled DNA by dPCR. This work confirmed that a 16S PCR master mix avoided poor amplification of the supercoiled DNA, whereas HEX labels on the dPCR probe resulted in robust amplification curves. Optimizing the dPCR assay based on these two observations resulted in accurate quantification of supercoiled DNA without preanalytical linearization. The result was validated in close agreement (101-113%) with the result from flow cytometry. PMID:27063649

  20. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    PubMed Central

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel C.; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-01-01

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification. PMID:23499924

  1. A predictable and accurate technique with elastomeric impression materials.

    PubMed

    Barghi, N; Ontiveros, J C

    1999-08-01

    A method for obtaining more predictable and accurate final impressions with polyvinylsiloxane impression materials in conjunction with stock trays is proposed and tested. Heavy impression material is used in advance for construction of a modified custom tray, while extra-light material is used for obtaining a more accurate final impression.

  2. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  3. On canonical cylinder sections for accurate determination of contact angle in microgravity

    NASA Technical Reports Server (NTRS)

    Concus, Paul; Finn, Robert; Zabihi, Farhad

    1992-01-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  4. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2012-12-01

    Baseflow recession analysis and groundwater dating have until now developed as two distinct branches of hydrogeology, used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated far more accurately than with the latter alone. Under the assumption that the aquifer base is sub-horizontal, the mean residence time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater residence time that can refine estimates obtained from tritium measurements. The approach is demonstrated in a case study predicting the atrazine concentration trend in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the best agreement between observed and predicted times of trend reversal was reached for the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was, however, poorer for the springs displaying the gentlest recessions, possibly indicating a stronger influence of continuous groundwater recharge during the dry period.
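    The exponential piston-flow model mentioned above has a well-known transit time distribution, g(τ) = (η/T)·exp(-ητ/T + η - 1) for τ ≥ T(1 - 1/η) and zero otherwise, where T is the mean transit time and η the ratio of total to exponential flow volume. A minimal sketch with illustrative parameter values (not calibrated to the Luxembourg Sandstone springs):

```python
import math

# Exponential piston-flow model (EPM) transit time distribution, as used in
# groundwater dating: no water younger than the piston-flow delay, then an
# exponential tail. T and eta below are illustrative values only.

def epm_ttd(tau, mean_transit_time, eta):
    """EPM transit time distribution g(tau)."""
    t_piston = mean_transit_time * (1.0 - 1.0 / eta)
    if tau < t_piston:
        return 0.0
    return (eta / mean_transit_time) * math.exp(
        -eta * tau / mean_transit_time + eta - 1.0)

# No contribution before the piston-flow delay T*(1 - 1/eta):
print(epm_ttd(1.0, 10.0, 1.5))       # 0.0 (delay is 10*(1/3) = 3.33 yr)
# Positive density once tau exceeds that delay:
print(epm_ttd(5.0, 10.0, 1.5) > 0)   # True
```

    Calibrating T and η against both tritium data and the dry-season recession, rather than tritium alone, is the combination the abstract argues for.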

  5. The challenge of accurately documenting bee species richness in agroecosystems: bee diversity in eastern apple orchards.

    PubMed

    Russo, Laura; Park, Mia; Gibbs, Jason; Danforth, Bryan

    2015-09-01

    Bees are important pollinators of agricultural crops, and bee diversity has been shown to be closely associated with pollination, a valuable ecosystem service. Higher functional diversity and species richness of bees have been shown to lead to higher crop yield. Bees simultaneously represent a mega-diverse taxon that is extremely challenging to sample thoroughly and an important group to understand because of pollination services. We sampled bees visiting apple blossoms in 28 orchards over 6 years. We used species rarefaction analyses to test for the completeness of sampling and the relationship between species richness and sampling effort, orchard size, and percent agriculture in the surrounding landscape. We performed more than 190 h of sampling, collecting 11,219 specimens representing 104 species. Despite the sampling intensity, we captured <75% of expected species richness at more than half of the sites. For most of these, the variation in bee community composition between years was greater than among sites. Species richness was influenced by percent agriculture, orchard size, and sampling effort, but we found no factors explaining the difference between observed and expected species richness. Competition between honeybees and wild bees did not appear to be a factor, as we found no correlation between honeybee and wild bee abundance. Our study shows that the pollinator fauna of agroecosystems can be diverse and challenging to thoroughly sample. We demonstrate that there is high temporal variation in community composition and that sites vary widely in the sampling effort required to fully describe their diversity. In order to maximize pollination services provided by wild bee species, we must first accurately estimate species richness. For researchers interested in providing this estimate, we recommend multiyear studies and rarefaction analyses to quantify the gap between observed and expected species richness.
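    One standard way to quantify the gap between observed and expected species richness from abundance data is the Chao1 estimator, which extrapolates from the counts of singleton and doubleton species. A minimal sketch with invented abundances (the study itself used rarefaction analyses):

```python
from collections import Counter

# Chao1 richness estimate: S_obs + f1^2 / (2*f2), where f1 and f2 are the
# numbers of species seen exactly once and exactly twice. The bias-corrected
# form is used when no doubletons are present. Abundances are invented.

def chao1(abundances):
    counts = Counter(abundances)   # abundance value -> number of species
    s_obs = len(abundances)
    f1, f2 = counts.get(1, 0), counts.get(2, 0)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# 6 observed species: three singletons, one doubleton, two common species.
print(chao1([1, 1, 1, 2, 10, 25]))  # 6 + 9/2 = 10.5
```

    A large gap between the Chao1 estimate and the observed count, as here, signals exactly the undersampling the authors report at more than half their sites.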

  6. Misestimation of temperature when applying Maxwellian distributions to space plasmas described by kappa distributions

    NASA Astrophysics Data System (ADS)

    Nicolaou, Georgios; Livadiotis, George

    2016-11-01

    This paper presents the misestimation of temperature when observations from a kappa-distributed plasma are analyzed as a Maxwellian. One common method to calculate space plasma parameters is to fit the observed distributions using known analytical forms. More often, the distribution function is included in a forward model of the instrument's response, which is used to reproduce the observed energy spectrograms for a given set of plasma parameters. In both cases, the modeled plasma distribution fits the measurements to estimate the plasma parameters. The distribution function is often taken to be Maxwellian even though in many cases the plasma is better described by a kappa distribution. In this work we show that if the plasma is described by a kappa distribution, the temperature derived assuming a Maxwellian distribution can be significantly off. More specifically, we derive the plasma temperature by fitting a Maxwell distribution to pseudo-data produced by a kappa distribution, and then examine the deviation of the derived temperature as a function of the kappa index. We further consider the concept of using a forward model of a typical plasma instrument to fit its observations. We find that the relative error of the derived temperature is highly dependent on the kappa index and occasionally on the instrument's field of view and response.
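    The misestimation can be illustrated numerically. For the isotropic kappa speed distribution f(v) ∝ [1 + v²/(κθ²)]^-(κ+1) (one common convention; θ is the thermal-speed parameter), the second moment implies a Maxwellian-inferred temperature inflated by κ/(κ - 3/2). The sketch below recovers that factor by direct integration; it is a toy calculation, not the paper's forward-model fit.

```python
import math

# <v^2> of a 3-D kappa speed distribution by rectangle-rule integration.
# A Maxwellian moment fit reports a temperature proportional to <v^2>, so
# the ratio below is the temperature inflation factor kappa/(kappa - 1.5).

def mean_square_speed_kappa(kappa, theta, v_max=100.0, n=100000):
    """<v^2> over the 3-D kappa speed distribution."""
    dv = v_max / n
    num = den = 0.0
    for i in range(1, n + 1):
        v = i * dv
        f = (1.0 + v * v / (kappa * theta * theta)) ** (-(kappa + 1.0))
        w = f * v * v          # 4*pi*v^2 phase-space weight (constants cancel)
        num += w * v * v * dv
        den += w * dv
    return num / den

kappa, theta = 3.0, 1.0
ratio = mean_square_speed_kappa(kappa, theta) / (1.5 * theta * theta)
print(round(ratio, 2))  # close to kappa/(kappa - 1.5) = 2.0
```

    For κ = 3 the Maxwellian analysis thus reports roughly twice the kappa-core temperature, and the error grows without bound as κ approaches 3/2, consistent with the strong kappa dependence the authors describe.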

  7. Accurate frequency referencing for fieldable dual-comb spectroscopy.

    PubMed

    Truong, Gar-Wing; Waxman, Eleanor M; Cossel, Kevin C; Baumann, Esther; Klose, Andrew; Giorgetta, Fabrizio R; Swann, William C; Newbury, Nathan R; Coddington, Ian

    2016-12-26

    We describe a dual-comb spectrometer that can operate independently of laboratory-based rf and optical frequency references but is nevertheless capable of ultra-high spectral resolution, high SNR, and frequency-accurate spectral measurements. The instrument is based on a "bootstrapped" frequency referencing scheme in which short-term optical phase coherence between combs is attained by referencing each to a free-running diode laser, whilst high frequency resolution and long-term accuracy are derived from a stable quartz oscillator. The sensitivity, stability and accuracy of this spectrometer were characterized using a multipass cell. We demonstrate comb-resolved spectra spanning from 140 THz (2.14 µm, 4670 cm⁻¹) to 184 THz (1.63 µm, 6140 cm⁻¹) in the near infrared with a frequency sampling of 200 MHz (0.0067 cm⁻¹) and ~1 MHz frequency accuracy. High resolution spectra of water and carbon dioxide transitions at 1.77 µm, 1.96 µm and 2.06 µm show that the molecular transmission acquired with this system operating in field mode did not deviate by more than 5.6 × 10⁻⁴ from that measured when it was referenced to a maser and a cavity-stabilized laser. When optimized for carbon dioxide quantification at 1.60 µm, a sensitivity of 2.8 ppm-km at 1 s integration time was achieved, improving to 0.10 ppm-km at 13 minutes of integration time.

  8. Accurate Classification of RNA Structures Using Topological Fingerprints

    PubMed Central

    Li, Kejie; Gribskov, Michael

    2016-01-01

    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571

  9. Accurate color images: from expensive luxury to essential resource

    NASA Astrophysics Data System (ADS)

    Saunders, David R.; Cupitt, John

    2002-06-01

    Over ten years ago the National Gallery in London began a program to make digital images of paintings in the collection using a colorimetric imaging system. This was to provide a permanent record of the state of paintings against which future images could be compared to determine if any changes had occurred. It quickly became apparent that such images could be used not only for scientific purposes, but also in applications where transparencies were then being used, for example as source materials for printed books and catalogues or for computer-based information systems. During the 1990s we were involved in the development of a series of digital cameras that have combined the high color accuracy of the original 'scientific' imaging system with the familiarity and portability of a medium format camera. This has culminated in the program of digitization now in progress at the National Gallery. By the middle of 2001 we will have digitized all the major paintings in the collection at a resolution of 10,000 pixels along their longest dimension and with calibrated color; we are on target to digitize the whole collection by the end of 2002. The images are available on-line within the museum for consultation and so that Gallery departments can use the images in printed publications and on the Gallery's website. We describe the development of the imaging systems used at the National Gallery and how the research we have conducted into high-resolution accurate color imaging has developed from being a peripheral, if harmless, research activity to becoming a central part of the Gallery's information and publication strategy. Finally, we discuss some outstanding issues, such as interfacing our color management procedures with the systems used by external organizations.

  10. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    NASA Astrophysics Data System (ADS)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    for inclusion in standard atmospheric and planetary spectroscopic databases. The methods involved in computing the ab initio potential energy and dipole moment surfaces involved minor corrections to the equilibrium S-O distance, which produced good agreement with experimentally determined rotational energies. However, the purely ab initio method was not able to reproduce an equally spectroscopically accurate representation of vibrational motion. We therefore present an empirical refinement of this original ab initio potential surface, based on the available experimental data. This will not only be used to reproduce the room-temperature spectrum to a greater degree of accuracy, but is essential in the production of the larger, accurate line list necessary for the simulation of higher-temperature spectra: we aim for coverage suitable for T ≤ 800 K. Our preliminary studies of SO3 have also shown it to exhibit an interesting "forbidden" rotational spectrum and "clustering" of rotational states; to our knowledge this phenomenon has not been observed in other trigonal planar molecules and is an investigative avenue we wish to pursue. Finally, the IR absorption bands of SO2 and SO3 exhibit a strong overlap, and the inclusion of SO2 as a complement to our studies is something we will be interested in doing in the near future.

  11. Accurate identification of periodic oscillations buried in white or colored noise using fast orthogonal search.

    PubMed

    Chon, K H

    2001-06-01

    We use a previously introduced fast orthogonal search algorithm to detect sinusoidal frequency components buried in either white or colored noise. We show that the method outperforms the correlogram, modified covariance autoregressive (MODCOVAR), and multiple-signal classification (MUSIC) methods. The fast orthogonal search method achieves accurate detection of sinusoids even at signal-to-noise ratios as low as -10 dB, and is superior at detecting sinusoids buried in 1/f noise. Because the method accurately detects sinusoids even under colored noise, it can be used to extract the 1/f noise process observed in physiological signals such as heart rate and renal blood pressure and flow data.
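    The selection step at the heart of such frequency searches can be sketched by projecting the signal onto sine/cosine pairs at candidate frequencies and keeping the frequency with the largest explained energy. The real fast orthogonal search orthogonalizes candidates iteratively so that multiple components can be extracted efficiently; this single-component toy only illustrates the selection idea.

```python
import math

# Pick the candidate frequency whose sine/cosine pair explains the most
# signal energy. Sampling rate, candidate grid, and signal are invented.

def best_frequency(signal, fs, candidates_hz):
    n = len(signal)
    best_f, best_energy = None, -1.0
    for f in candidates_hz:
        c = sum(x * math.cos(2 * math.pi * f * i / fs)
                for i, x in enumerate(signal))
        s = sum(x * math.sin(2 * math.pi * f * i / fs)
                for i, x in enumerate(signal))
        energy = (c * c + s * s) / n   # explained energy at frequency f
        if energy > best_energy:
            best_f, best_energy = f, energy
    return best_f

fs = 100.0  # Hz
sig = [math.sin(2 * math.pi * 7.0 * i / fs) for i in range(400)]
print(best_frequency(sig, fs, [f * 0.5 for f in range(1, 50)]))  # 7.0
```

    Fast orthogonal search repeats this selection after subtracting each chosen component (via Gram-Schmidt orthogonalization), which is what lets it separate sinusoids from a 1/f background rather than just finding the single strongest tone.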

  12. Fabricating an Accurate Implant Master Cast: A Technique Report.

    PubMed

    Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F

    2015-12-01

    The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.

  13. An algorithm to detect and communicate the differences in computational models describing biological systems

    PubMed Central

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-01-01

    Motivation: Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model’s development over time. Results: Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models’ encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model’s history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. Availability and implementation: The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. Contact: martin.scharm@uni-rostock.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26490504
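    The kind of difference detection BiVeS performs can be sketched on a toy XML model: compare entities by identifier and report insertions, deletions, and attribute changes. The SBML-like snippet below is invented and covers only one level of elements; BiVeS itself also handles the encoding, network structure, and mathematical expressions mentioned above.

```python
import xml.etree.ElementTree as ET

# Two versions of a minimal, invented SBML-like model.
V1 = '<model><species id="A" initial="1"/><species id="B" initial="2"/></model>'
V2 = '<model><species id="A" initial="5"/><species id="C" initial="3"/></model>'

def diff_species(xml_a, xml_b):
    """Diff child elements by id: report deletions, insertions, changes."""
    a = {e.get("id"): e.attrib for e in ET.fromstring(xml_a)}
    b = {e.get("id"): e.attrib for e in ET.fromstring(xml_b)}
    return {
        "deleted": sorted(set(a) - set(b)),
        "inserted": sorted(set(b) - set(a)),
        "changed": sorted(i for i in set(a) & set(b) if a[i] != b[i]),
    }

print(diff_species(V1, V2))
# {'deleted': ['B'], 'inserted': ['C'], 'changed': ['A']}
```

    Matching by stable identifiers rather than by document position is the design choice that makes such diffs meaningful across model versions, since reordering elements then produces no spurious differences.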

  14. Describing sequencing results of structural chromosome rearrangements with a suggested next-generation cytogenetic nomenclature.

    PubMed

    Ordulu, Zehra; Wong, Kristen E; Currall, Benjamin B; Ivanov, Andrew R; Pereira, Shahrin; Althari, Sara; Gusella, James F; Talkowski, Michael E; Morton, Cynthia C

    2014-05-01

    With recent rapid advances in genomic technologies, precise delineation of structural chromosome rearrangements at the nucleotide level is becoming increasingly feasible. In this era of "next-generation cytogenetics" (i.e., an integration of traditional cytogenetic techniques and next-generation sequencing), a consensus nomenclature is essential for accurate communication and data sharing. Currently, nomenclature for describing the sequencing data of these aberrations is lacking. Herein, we present a system called Next-Gen Cytogenetic Nomenclature, which is concordant with the International System for Human Cytogenetic Nomenclature (2013). This system starts with the alignment of rearrangement sequences by BLAT or BLAST (alignment tools) and arrives at a concise and detailed description of chromosomal changes. To facilitate usage and implementation of this nomenclature, we are developing a program designated BLA(S)T Output Sequence Tool of Nomenclature (BOSToN), a demonstrative version of which is accessible online. A standardized characterization of structural chromosomal rearrangements is essential both for research analyses and for application in the clinical setting.

  15. Quasistatic limit of the strong-field approximation describing atoms in intense laser fields: Circular polarization

    SciTech Connect

    Bauer, Jaroslaw H.

    2011-03-15

    In the recent work of Vanne and Saenz [Phys. Rev. A 75, 063403 (2007)], the quasistatic limit of the velocity-gauge strong-field approximation describing the ionization rate of atomic or molecular systems exposed to linearly polarized laser fields was derived. It was shown that in the low-frequency limit the ionization rate is proportional to the laser frequency ω (for a constant intensity of the laser field). In the present work I show that for circularly polarized laser fields the ionization rate is proportional to ω^4 for H(1s) and H(2s) atoms, to ω^6 for H(2p_x) and H(2p_y) atoms, and to ω^8 for H(2p_z) atoms. The analytical expressions for the asymptotic ionization rates (which become nearly accurate in the limit ω → 0) contain no summations over multiphoton contributions. For very low laser frequencies (optical or infrared) these expressions usually remain within order-of-magnitude agreement with the velocity-gauge strong-field approximation.

  16. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    NASA Astrophysics Data System (ADS)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard- Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makato Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Mikato Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. 

  17. A safe and accurate method to perform esthetic mandibular contouring surgery for Far Eastern Asians.

    PubMed

    Hsieh, A M-C; Huon, L-K; Jiang, H-R; Liu, S Y-C

    2017-05-01

    A tapered mandibular contour is popular with Far Eastern Asians. This study describes a safe and accurate method of using preoperative virtual surgical planning (VSP) and an intraoperative ostectomy guide to maximize the esthetic outcomes of mandibular symmetry and tapering while mitigating injury to the inferior alveolar nerve (IAN). Twelve subjects with chief complaints of a wide and square lower face underwent this protocol from January to June 2015. VSP was used to confirm symmetry and preserve the IAN while maximizing the surgeon's ability to taper the lower face via mandibular inferior border ostectomy. The accuracy of this method was confirmed by superimposition of the perioperative computed tomography scans in all subjects. No subjects complained of prolonged paresthesia after 3 months. A safe and accurate protocol for achieving an esthetic lower face in indicated Far Eastern individuals is described.

  18. Using scale dependent variation in soil properties to describe soil landscape relationships through DSM

    NASA Astrophysics Data System (ADS)

    Corstanje, Ronald; Mayr, Thomas

    2016-04-01

    DSM formalizes the relationship between soil forming factors and the landscape in which soils are formed, and aims to capture and model the intrinsic spatial variability naturally observed in soils. Covariates, the landscape factors recognized as governing soil formation, vary at different scales, and the spatial variation at some scales may be more strongly correlated with soil than at others. Soil forming factors have different domains with distinctive scales; for example, geology operates at a coarser scale than land use. By understanding the quantitative relationships between soil and soil forming factors, and their scale dependency, we can begin to determine the importance of landscape-level processes in the formation and observed variation of soils. Three study areas, covered by detailed reconnaissance soil survey, were identified in the Republic of Ireland. Their different pedological and geomorphological characteristics allowed us to test scale-dependent behaviours across the spectrum of conditions present in the Irish landscape. We considered three approaches: i) an empirical diagnostic tool in which DSM was applied across a range of scales (20 to 260 m2); ii) the application of wavelets to decompose the DEMs into a series of independent components at varying scales, which were then used in DSM; and iii) a multiscale, window-based geostatistical approach. Applied as diagnostics, wavelets and window-based multiscale geostatistics were effective in identifying the main scales of interaction of the key soil landscape factors (e.g. terrain, geology, land use) and in partitioning the landscape accordingly; with this partitioning we were able to accurately reproduce the observed spatial variation in soils.

  19. Teacher Observation Scales.

    ERIC Educational Resources Information Center

    Purdue Univ., Lafayette, IN. Educational Research Center.

    The Teacher Observation Scales include four instruments: Observer Rating Scale (ORS), Reading Strategies Check List, Arithmetic Strategies Check List, and Classroom Description. These instruments utilize trained observers to describe the teaching behavior, instructional strategies and physical characteristics in each classroom. On the ORS, teacher…

  20. Describing long-term trends in precipitation using generalized additive models

    NASA Astrophysics Data System (ADS)

    Underwood, Fiona M.

    2009-01-01

    Summary: With the current concern over climate change, descriptions of how rainfall patterns are changing over time can be useful. Observations of daily rainfall data over the last few decades provide information on these trends. Generalized linear models are typically used to model patterns in the occurrence and intensity of rainfall. These models describe rainfall patterns for an average year but are more limited when describing long-term trends, particularly when these are potentially non-linear. Generalized additive models (GAMs) provide a framework for modelling non-linear relationships by fitting smooth functions to the data. This paper describes how GAMs can extend the flexibility of models to describe seasonal patterns and long-term trends in the occurrence and intensity of daily rainfall, using data from Mauritius from 1962 to 2001. Smoothed estimates from the models provide useful graphical descriptions of changing rainfall patterns over the last 40 years at this location. GAMs are particularly helpful when exploring non-linear relationships in the data. Care is needed to ensure the choice of smooth functions is appropriate for the data and modelling objectives.
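
    GAMs of this kind are classically fitted by backfitting: each smooth term is re-estimated on the partial residuals of the others until the fit stabilizes. The sketch below is a minimal pure-Python illustration of that idea with a crude boxcar smoother and synthetic rainfall-like data; the smoothers, bandwidths, and data are invented for illustration and are far simpler than the penalized splines used in practice.

```python
import math

def boxcar_smooth(x, r, bw):
    """Smooth residuals r against x by averaging within a window of half-width bw."""
    out = []
    for xi in x:
        vals = [ri for xj, ri in zip(x, r) if abs(xj - xi) <= bw]
        out.append(sum(vals) / len(vals))
    return out

def backfit_gam(x1, x2, y, bw1, bw2, iters=10):
    """Fit y ~ alpha + f1(x1) + f2(x2) by backfitting with centred smooth terms."""
    n = len(y)
    alpha = sum(y) / n
    f1 = [0.0] * n
    f2 = [0.0] * n
    for _ in range(iters):
        # re-estimate f1 on partial residuals of the other terms, then centre it
        r1 = [yi - alpha - f2i for yi, f2i in zip(y, f2)]
        f1 = boxcar_smooth(x1, r1, bw1)
        m = sum(f1) / n
        f1 = [v - m for v in f1]
        # re-estimate f2 likewise
        r2 = [yi - alpha - f1i for yi, f1i in zip(y, f1)]
        f2 = boxcar_smooth(x2, r2, bw2)
        m = sum(f2) / n
        f2 = [v - m for v in f2]
    return alpha, f1, f2

# synthetic daily rainfall intensity: a seasonal cycle plus a slow non-linear trend
days = list(range(0, 3650, 10))
season = [d % 365 for d in days]
y = [5.0 + 2.0 * math.sin(2 * math.pi * s / 365) + 0.0005 * (d / 10.0) ** 1.5
     for d, s in zip(days, season)]

alpha, f_trend, f_season = backfit_gam(days, season, y, bw1=400, bw2=30)
fit = [alpha + a + b for a, b in zip(f_trend, f_season)]
sse_fit = sum((yi - fi) ** 2 for yi, fi in zip(y, fit))
sse_mean = sum((yi - alpha) ** 2 for yi in y)
```

    The fitted smooth terms recover most of the seasonal and trend structure, so the residual sum of squares drops well below that of the constant-mean model.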

  1. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    MedlinePlus

    ... Controlling Hay Fever Symptoms with Accurate Pollen Counts This article has ... Pongdee, MD, FAAAAI. Seasonal allergic rhinitis, known as hay fever, is caused by pollen carried in the air ...

  2. Digital system accurately controls velocity of electromechanical drive

    NASA Technical Reports Server (NTRS)

    Nichols, G. B.

    1965-01-01

    Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.

  3. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  4. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, C.V.; Killian, E.W.; Grafwallner, E.G.; Kynaston, R.L.; Johnson, L.O.; Randolph, P.D.

    1996-09-03

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector. 7 figs.

  5. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, Charles V.; Killian, E. Wayne; Grafwallner, Ervin G.; Kynaston, Ronnie L.; Johnson, Larry O.; Randolph, Peter D.

    1996-01-01

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector.

  6. Accurate GPS Time-Linked data Acquisition System (ATLAS II) user's manual.

    SciTech Connect

    Jones, Perry L.; Zayas, Jose R.; Ortiz-Moyet, Juan

    2004-02-01

    The Accurate Time-Linked data Acquisition System (ATLAS II) is a small, lightweight, time-synchronized, robust data acquisition system that is capable of acquiring simultaneous long-term time-series data from both a wind turbine rotor and ground-based instrumentation. This document is a user's manual for the ATLAS II hardware and software. It describes the hardware and software components of ATLAS II, and explains how to install and execute the software.

  7. Enumerating the Progress of SETI Observations

    NASA Astrophysics Data System (ADS)

    Lesh, Lindsay; Tarter, Jill C.

    2015-01-01

    In a long-term project like SETI, accurate archiving of observations is imperative. This requires a database that is both easy to search - in order to know what data have or have not been acquired - and easy to update, no matter what form the results of an observation might be reported in. If the data can all be standardized, then the parameters of the nine-dimensional search space (including space, time, frequency (and bandwidth), sensitivity, polarization, and modulation scheme) of completed observations for engineered signals can be calculated and compared to the total possible search volume. Calculating a total search volume that includes more than just spatial dimensions requires an algorithm that can adapt to many different variables (e.g., each receiving instrument's capabilities). The method of calculation must also remain consistent when applied to each new SETI observation if an accurate fraction of the total search volume is to be found. Any planned observations can be evaluated against what has already been done in order to assess the efficacy of a new search. Progress against a desired goal can be evaluated, and the significance of null results can be properly understood. This paper describes a new, user-friendly archive and standardized computational tool that are being built at the SETI Institute in order to greatly ease the addition of new entries and the calculation of the search volume explored to date. The intent is to encourage new observers to better report the parameters and results of their observations, and to improve public understanding of ongoing progress and the importance of continuing the search for ETI signals into the future.

  8. Accurate free and forced rotational motions of rigid Venus

    NASA Astrophysics Data System (ADS)

    Cottereau, L.; Souchay, J.; Aljbaae, S.

    2010-06-01

    Context. The precise and accurate modelling of a terrestrial planet like Venus is an exciting and challenging topic, all the more interesting because it can be compared with that of the Earth, for which such a modelling has already been achieved at the milli-arcsecond level. Aims: We aim to complete a previous study by determining the polhode at the milli-arcsecond level, i.e. the torque-free motion of the angular momentum axis of a rigid Venus in a body-fixed frame, as well as the nutation of its third axis of figure in space, which is fundamental from an observational point of view. Methods: We use the same theoretical framework as Kinoshita (1977, Celest. Mech., 15, 277) did to determine the precession-nutation motion of a rigid Earth. It is based on a representation of the rotation of a rigid Venus, with the help of Andoyer variables and a set of canonical equations in Hamiltonian formalism. Results: In the first part, we computed the polhode and showed that this motion is highly elliptical, with a very long period of 525 centuries, compared with 430 d for the Earth. This is due to the very small dynamical flattening of Venus in comparison with our planet. In the second part, we precisely computed the Oppolzer terms, which allow us to represent the motion in space of the third Venus figure axis with respect to the Venus angular momentum axis under the influence of the solar gravitational torque. We determined the corresponding tables of the nutation coefficients of the third figure axis both in longitude and in obliquity due to the Sun, which are of the same order of amplitude as for the Earth. We showed that the nutation coefficients for the third figure axis are significantly different from those of the angular momentum axis, unlike in the case of the Earth. Our analytical results have been validated by a numerical integration, which revealed the indirect planetary effects.

  9. Accurate source location from waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
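
    As a toy illustration of the grid-search stage only (ignoring topography, scattered coda waves, and the strain Green's tensor machinery described in the abstract), the sketch below locates a synthetic 2-D source by exhaustively scanning candidate grid nodes and keeping the one with the smallest least-squares travel-time misfit. The station geometry, velocity, and source position are all invented.

```python
import itertools
import math

# hypothetical station coordinates (km) and a constant P-wave velocity (km/s)
stations = [(0.0, 0.0), (40.0, 5.0), (10.0, 35.0), (-25.0, 20.0), (-15.0, -30.0)]
v_p = 6.0
true_src = (12.0, 8.0)

def travel_time(src, sta, v):
    """Straight-ray travel time for a constant-velocity medium."""
    return math.dist(src, sta) / v

# noise-free synthetic "observed" first-arrival times
observed = [travel_time(true_src, s, v_p) for s in stations]

# exhaustive grid search: keep the node with the smallest least-squares misfit
best, best_misfit = None, float("inf")
coords = [i * 0.5 for i in range(-80, 81)]   # -40 to 40 km in 0.5 km steps
for x, y in itertools.product(coords, coords):
    pred = [travel_time((x, y), s, v_p) for s in stations]
    misfit = sum((o - p) ** 2 for o, p in zip(observed, pred))
    if misfit < best_misfit:
        best, best_misfit = (x, y), misfit
```

    With noise-free data and the true source lying on a grid node, the search recovers it exactly; in the paper's method the misfit additionally includes the P coda predicted from topographic scattering, which is what sharpens the solution.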

  10. Accurate Sound Velocity Measurement in Ocean Near-Surface Layer

    NASA Astrophysics Data System (ADS)

    Lizarralde, D.; Xu, B. L.

    2015-12-01

    Accurate sound velocity measurement is essential in oceanography because sound is the only wave that can propagate in sea water. Because it is difficult to measure, sound velocity is often not measured directly but instead calculated from water temperature, salinity, and depth, which are much easier to obtain. This research develops a new method to directly measure the sound velocity in the ocean's near-surface layer using multi-channel seismic (MCS) hydrophones. This system consists of a device to make a sound pulse and a long cable with hundreds of hydrophones to record the sound. The distance between the source and each receiver is the offset. The time it takes the pulse to arrive at each receiver is the travel time. Errors in measuring offset and travel time would affect the accuracy of the sound velocity if it were calculated from a single offset and travel time. However, by analyzing the direct arrival signal from hundreds of receivers, the velocity can be determined from the slope of the straight line in the travel time-offset graph. The errors in distance and time measurement result in only an up or down shift of the line and do not affect the slope. This research uses MCS data of survey MGL1408 obtained from the Marine Geoscience Data System and processed with Seismic Unix. The sound velocity can be directly measured to an accuracy of less than 1 m/s. The included graph shows the directly measured velocity versus the calculated velocity along 100 km across the Mid-Atlantic continental margin. The directly measured velocity agrees well with the velocity computed from temperature and salinity. In addition, the fine variations in the sound velocity can be observed, which can hardly be seen in the calculated velocity. Using this methodology, both large area acquisition and fine resolution can be achieved. This directly measured sound velocity will be a new and powerful tool in oceanography.
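
    The slope argument can be checked numerically: a constant timing error (for example a trigger delay) shifts the intercept of the travel-time versus offset line but leaves its slope, and hence the derived velocity, unchanged. A small pure-Python sketch with invented numbers, not the MGL1408 data:

```python
# offsets x_i (m) from the source to each hydrophone, and direct-arrival picks
c_true = 1500.0   # assumed water sound speed for the synthetic example (m/s)
t0 = 0.005        # a constant 5 ms timing error, e.g. a trigger delay (s)
offsets = [100.0 * k for k in range(1, 41)]    # 40 channels, 100 m spacing
times = [x / c_true + t0 for x in offsets]     # synthetic travel-time picks

# least-squares slope of travel time vs offset; velocity is the inverse slope
n = len(offsets)
mx = sum(offsets) / n
mt = sum(times) / n
slope = (sum((x - mx) * (t - mt) for x, t in zip(offsets, times))
         / sum((x - mx) ** 2 for x in offsets))
velocity = 1.0 / slope
```

    The 5 ms error moves every pick by the same amount, so the fitted slope, and therefore the recovered velocity, is unaffected.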

  11. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    SciTech Connect

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  12. Accurate molecular structure and spectroscopic properties for nucleobases: A combined computational - microwave investigation of 2-thiouracil as a case study

    PubMed Central

    Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Peña, Isabel; Cabezas, Carlos; Alonso, José L.

    2015-01-01

    The computational composite scheme purposely set up for accurately describing the electronic structure and spectroscopic properties of small biomolecules has been applied to the first study of the rotational spectrum of 2-thiouracil. The experimental investigation was made possible thanks to the combination of the laser ablation technique with Fourier transform microwave spectrometers. The joint experimental-computational study allowed us to determine accurate molecular structure and spectroscopic properties for the title molecule but, more importantly, it demonstrates a reliable approach for the accurate investigation of isolated small biomolecules. PMID:24002739

  13. An adaptive, formally second order accurate version of the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.

    2007-04-01

    Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves

  14. History and progress on accurate measurements of the Planck constant

    NASA Astrophysics Data System (ADS)

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10⁻³⁴ J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N_A. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10⁸ from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved

  15. History and progress on accurate measurements of the Planck constant.

    PubMed

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10(-34) J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N(A). As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10(8) from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the

  16. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Chen, J.; Kunkel, V.; Skov, T. M.

    2015-12-01

    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24--48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta---its structure and magnetic field vector in three dimensions---using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and the plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  17. Homogeneous Diffusion Solid Model as a Realistic Approach to Describe Adsorption onto Materials with Different Geometries

    NASA Astrophysics Data System (ADS)

    Sabio, E.; Zamora, F.; González-García, C. M.; Ledesma, B.; Álvarez-Murillo, A.; Román, S.

    2016-12-01

    In this work, the adsorption kinetics of p-nitrophenol (PNP) onto several commercial activated carbons (ACs) with different textural and geometrical characteristics was studied. For this aim, a homogeneous diffusion solid model (HDSM) was used, which does take the adsorbent shape into account. The HDSM was solved by means of the finite element method (FEM) using the commercial software COMSOL. The different kinetic patterns observed in the experiments carried out can be described by the developed model, which shows that the sharp drop of adsorption rate observed in some samples is caused by the formation of a concentration wave. The model allows one to visualize the changes in concentration taking place in both liquid and solid phases, which enables us to link the kinetic behaviour with the main features of the carbon samples.
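
    The authors solved the HDSM by FEM in COMSOL; as a much simpler stand-in, the spherically symmetric homogeneous-diffusion equation dq/dt = D (d²q/dr² + (2/r) dq/dr) for a single particle can be integrated with an explicit finite-difference scheme. All parameter values below are hypothetical and the geometry is a sphere only, whereas the paper's point is precisely that other adsorbent shapes matter.

```python
# Explicit FD solution of homogeneous solid diffusion in a sphere of radius R,
# with the surface held at the equilibrium loading q_s (toy stand-in for FEM).
D = 1.0e-9      # hypothetical intraparticle diffusivity, m^2/s
R = 1.0e-4      # particle radius, m
q_s = 1.0       # normalized surface loading
nr = 50
dr = R / nr
dt = 0.1 * dr * dr / D          # comfortably below the explicit stability limit
q = [0.0] * (nr + 1)
q[nr] = q_s                      # surface node fixed at equilibrium

uptake = []                      # fractional uptake vs time
for step in range(4000):
    new = q[:]
    # centre node: by symmetry the spherical Laplacian reduces to 6*(q1-q0)/dr^2
    new[0] = q[0] + dt * D * 6.0 * (q[1] - q[0]) / dr**2
    for i in range(1, nr):
        r = i * dr
        lap = ((q[i+1] - 2*q[i] + q[i-1]) / dr**2
               + (2.0 / r) * (q[i+1] - q[i-1]) / (2*dr))
        new[i] = q[i] + dt * D * lap
    q = new
    # volume-averaged loading: trapezoid rule over r^2 * q, normalized by R^3/3
    num = sum(((i*dr)**2 * q[i] + ((i+1)*dr)**2 * q[i+1]) / 2 * dr
              for i in range(nr))
    uptake.append(num / (R**3 / 3) / q_s)
```

    The uptake curve rises monotonically toward 1, the classic intraparticle-diffusion shape; the concentration-wave behaviour described in the abstract needs the coupled liquid-phase balance and non-spherical geometries, which this sketch deliberately omits.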

  18. Homogeneous Diffusion Solid Model as a Realistic Approach to Describe Adsorption onto Materials with Different Geometries.

    PubMed

    Sabio, E; Zamora, F; González-García, C M; Ledesma, B; Álvarez-Murillo, A; Román, S

    2016-12-01

    In this work, the adsorption kinetics of p-nitrophenol (PNP) onto several commercial activated carbons (ACs) with different textural and geometrical characteristics was studied. For this aim, a homogeneous diffusion solid model (HDSM) was used, which does take the adsorbent shape into account. The HDSM was solved by means of the finite element method (FEM) using the commercial software COMSOL. The different kinetic patterns observed in the experiments carried out can be described by the developed model, which shows that the sharp drop of adsorption rate observed in some samples is caused by the formation of a concentration wave. The model allows one to visualize the changes in concentration taking place in both liquid and solid phases, which enables us to link the kinetic behaviour with the main features of the carbon samples.

  19. Merging quantum-chemistry with B-splines to describe molecular photoionization

    NASA Astrophysics Data System (ADS)

    Argenti, L.; Marante, C.; Klinker, M.; Corral, I.; Gonzalez, J.; Martin, F.

    2016-05-01

    Theoretical description of observables in attosecond pump-probe experiments requires a good representation of the system's ionization continuum. For polyelectronic atoms and molecules, however, this is still a challenge, due to the complicated short-range structure of correlated electronic wavefunctions. Whereas quantum chemistry packages (QCPs) implementing sophisticated methods to compute bound electronic molecular states are well established, comparable tools for the continuum are not widely available yet. To tackle this problem, we have developed a new approach that, by means of a hybrid Gaussian-B-spline basis, interfaces existing QCPs with close-coupling scattering methods. To illustrate the viability of this approach, we report results for the multichannel ionization of the helium atom and of the hydrogen molecule that are in excellent agreement with existing accurate benchmarks. These findings, together with the flexibility of QCPs, make this approach a good candidate for the theoretical study of the ionization of polyelectronic systems. FP7/ERC Grant XCHEM 290853.

  20. Accurate body composition measures from whole-body silhouettes

    PubMed Central

    Xie, Bowen; Avila, Jesus I.; Ng, Bennett K.; Fan, Bo; Loo, Victoria; Gilsanz, Vicente; Hangartner, Thomas; Kalkwarf, Heidi J.; Lappe, Joan; Oberfield, Sharon; Winer, Karen; Zemel, Babette; Shepherd, John A.

    2015-01-01

    Purpose: Obesity and its consequences, such as diabetes, are global health issues that burden about 171 × 10⁶ adult individuals worldwide. Fat mass index (FMI, kg/m²), fat-free mass index (FFMI, kg/m²), and percent fat mass may be useful to evaluate under- and overnutrition and muscle development in a clinical or research environment. This proof-of-concept study tested whether frontal whole-body silhouettes could be used to accurately measure body composition parameters using active shape modeling (ASM) techniques. Methods: Binary shape images (silhouettes) were generated from the skin outline of dual-energy x-ray absorptiometry (DXA) whole-body scans of 200 healthy children of ages from 6 to 16 yr. The silhouette shape variation from the average was described using an ASM, which computed principal components for unique modes of shape. Predictive models were derived from the modes for FMI, FFMI, and percent fat using stepwise linear regression. The models were compared to simple models using demographics alone [age, sex, height, weight, and body mass index z-scores (BMIZ)]. Results: The authors found that 95% of the shape variation of the sampled population could be explained using 26 modes. In most cases, the body composition variables could be predicted similarly between demographics-only and shape-only models. However, the combination of shape with demographics improved all estimates of boys and girls compared to the demographics-only model. The best prediction models for FMI, FFMI, and percent fat agreed with the actual measures with R² adj. (the coefficient of determination adjusted for the number of parameters used in the model equation) values of 0.86, 0.95, and 0.75 for boys and 0.90, 0.89, and 0.69 for girls, respectively. Conclusions: Whole-body silhouettes in children may be useful to derive estimates of body composition including FMI, FFMI, and percent fat. These results support the feasibility of measuring body composition variables from simple

  1. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.

  2. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinates and replaces them with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs. PMID:24605060

  3. Nonexposure accurate location K-anonymity algorithm in LBS.

    PubMed

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinates and replaces them with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs.
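
    A minimal sketch of the grid-ID idea both entries describe, under the assumption (ours, not the authors') that the anonymizer simply grows a square block of grid cells around the querier's cell until it covers at least K reported users; the paper's two algorithms are more elaborate:

    ```python
    from collections import defaultdict

    # Users report only the ID of the grid cell they occupy, never coordinates.
    # The anonymizer expands a square block around the querier's cell until the
    # block contains at least K users, and returns that block as the ASR.
    def cloak(cell_counts, center, k, grid_size):
        cx, cy = center
        for radius in range(grid_size):
            cells = [(x, y)
                     for x in range(cx - radius, cx + radius + 1)
                     for y in range(cy - radius, cy + radius + 1)
                     if 0 <= x < grid_size and 0 <= y < grid_size]
            if sum(cell_counts.get(c, 0) for c in cells) >= k:
                return cells  # smallest square block satisfying K-anonymity
        return None

    # Toy population: counts of users per reported grid-cell ID.
    counts = defaultdict(int)
    for cell in [(5, 5), (5, 5), (5, 6), (6, 5), (4, 4), (8, 8)]:
        counts[cell] += 1

    asr = cloak(counts, center=(5, 5), k=4, grid_size=10)
    print(len(asr))
    ```

    The querier's exact position never leaves the device; only the cell ID does, which is the nonexposure property the paper targets.
    
    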

  4. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    PubMed Central

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-01-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates. PMID:25844042

  5. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses.

    PubMed

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.

  6. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2013-05-01

    Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and have been used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated far more accurately than with the latter alone. Under the assumption that the aquifer base is sub-horizontal, the mean transit time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater transit time that can refine those obtained from tritium measurements. The approach is illustrated in a case study predicting the atrazine concentration trend in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the actual time of trend reversal and the rate of change agreed extremely well with the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was, however, poorer for the springs displaying the most gentle recessions, possibly indicating a stronger influence of continuous groundwater recharge during the summer months.
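
    One way to see the recession-to-transit-time link, assuming the simplest linear-reservoir (exponential) recession rather than the full Boussinesq solution: for Q(t) = Q0·exp(-t/τ) the storage is V = τ·Q, and the mean transit time of the exponential model equals τ, so fitting the dry-season recession yields the transit-time parameter directly. A sketch with synthetic discharge data:

    ```python
    import numpy as np

    # Synthetic dry-season hydrograph: exponential recession with a little
    # multiplicative measurement noise. tau_true and q0 are invented values.
    t = np.arange(0.0, 120.0)                 # days since recession onset
    tau_true, q0 = 45.0, 2.0                  # days, m^3/s (hypothetical)
    rng = np.random.default_rng(1)
    q = q0 * np.exp(-t / tau_true) * np.exp(rng.normal(scale=0.02, size=t.size))

    # ln Q is linear in t with slope -1/tau, so a least-squares line on the
    # log-discharge recovers the recession constant (= mean transit time here).
    slope, intercept = np.polyfit(t, np.log(q), 1)
    tau_fit = -1.0 / slope
    print(round(tau_fit, 1))
    ```

    In the paper this recession-derived estimate constrains the transit-time distribution alongside the tritium data rather than replacing it.
    
    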

  7. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
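
    The TAF apportionment itself is a simple product of factors. The sketch below uses invented factor values (not the Detroit factors from the study) and keeps separate factor sets for total and commercial vehicles, as the abstract recommends:

    ```python
    # Apportion an annual average daily traffic (AADT) count to a specific hour
    # by multiplying month, day-of-week, and hour-of-day allocation factors.
    # All factor values here are hypothetical placeholders.
    aadt = 24000.0  # vehicles/day, hypothetical annual average

    taf = {
        "total":      {"month": {7: 1.08}, "dow": {"wed": 1.05}, "hour": {8: 0.071}},
        "commercial": {"month": {7: 1.02}, "dow": {"wed": 1.18}, "hour": {8: 0.065}},
    }

    def hourly_volume(aadt, vclass, month, dow, hour):
        f = taf[vclass]
        return aadt * f["month"][month] * f["dow"][dow] * f["hour"][hour]

    # Estimated traffic in the 8-9 a.m. hour of a July Wednesday.
    v = hourly_volume(aadt, "total", month=7, dow="wed", hour=8)
    print(round(v))
    ```

    Site-specific factor tables would replace the placeholders; the study found urban-wide factors perform nearly as well.
    
    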

  8. Ligand-Induced Protein Responses and Mechanical Signal Propagation Described by Linear Response Theories

    PubMed Central

    Yang, Lee-Wei; Kitao, Akio; Huang, Bang-Chieh; Gō, Nobuhiro

    2014-01-01

    In this study, a general linear response theory (LRT) is formulated to describe time-dependent and -independent protein conformational changes upon CO binding with myoglobin. Using the theory, we are able to monitor protein relaxation in two stages. The slower relaxation is found to occur from 4.4 to 81.2 picoseconds, and the time constants characterized for a couple of aromatic residues agree with those observed by UV resonance Raman (UVRR) spectrometry and time-resolved x-ray crystallography. The faster “early responses”, triggered as early as 400 femtoseconds, can be best described by the theory when impulse forces are used. The newly formulated theory describes the mechanical propagation following ligand binding as a function of time, space and the type of perturbation forces. The “disseminators”, defined as the residues that, when perturbed, propagate signals throughout the molecule fastest among all the residues in the protein, are found to be evolutionarily conserved, and their mutations have been shown to largely change the CO rebinding kinetics in myoglobin. PMID:25229149

  9. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high-quality, quantitative measurements of methane fluxes in these different environments have not been available, owing both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from tens to millions of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity as the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer.
We present detailed methane flux
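
    The far-field calculation the abstract describes reduces to a ratio. The numbers below are hypothetical, and the molar-mass conversion assumes both plume enhancements are measured as mole fractions above background:

    ```python
    # Tracer dilution arithmetic: in the far field the two plumes are well
    # mixed, so the methane emission rate is the plume concentration ratio
    # times the known (metered) tracer release rate. All values are invented.
    M_CH4, M_N2O = 16.04, 44.01           # g/mol, methane and N2O tracer

    tracer_release = 0.50                  # kg/h of N2O, metered at the source
    ch4_enhancement = 120e-9               # mol/mol above background, downwind
    tracer_enhancement = 30e-9             # mol/mol above background, downwind

    molar_ratio = ch4_enhancement / tracer_enhancement
    ch4_emission = tracer_release * molar_ratio * (M_CH4 / M_N2O)  # kg/h
    print(round(ch4_emission, 3))
    ```

    Note the wind speed and dispersion terms cancel in the ratio, which is exactly why the method tolerates variable atmospheric transport.
    
    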

  10. First-principles-based multiscale, multiparadigm molecular mechanics and dynamics methods for describing complex chemical processes.

    PubMed

    Jaramillo-Botero, Andres; Nielsen, Robert; Abrol, Ravi; Su, Julius; Pascal, Tod; Mueller, Jonathan; Goddard, William A

    2012-01-01

    We expect that systematic and seamless computational upscaling and downscaling for modeling, predicting, or optimizing material and system properties and behavior with atomistic resolution will eventually be sufficiently accurate and practical that it will transform the mode of development in the materials, chemical, catalysis, and Pharma industries. However, despite truly dramatic progress in methods, software, and hardware, this goal remains elusive, particularly for systems that exhibit inherently complex chemistry under normal or extreme conditions of temperature, pressure, radiation, and others. We describe here some of the significant progress towards solving these problems via a general multiscale, multiparadigm strategy based on first-principles quantum mechanics (QM), and the development of breakthrough methods for treating reaction processes, excited electronic states, and weak bonding effects on the conformational dynamics of large-scale molecular systems. These methods have resulted directly from filling in the physical and chemical gaps in existing theoretical and computational models, within the multiscale, multiparadigm strategy. 
To illustrate the procedure we demonstrate the application and transferability of such methods on an ample set of challenging problems that span multiple fields and system length- and timescales, and that lie beyond the realm of existing computational or, in some cases, experimental approaches, including understanding solvation effects on the reactivity of organic and organometallic structures, predicting transmembrane protein structures, understanding carbon nanotube nucleation and growth, understanding the effects of electronic excitations in materials subjected to extreme conditions of temperature and pressure, following the dynamics and energetics of long-term conformational evolution of DNA macromolecules, and predicting the long-term mechanisms involved in enhancing the mechanical response of polymer-based hydrogels.

  11. Accurate Cell Division in Bacteria: How Does a Bacterium Know Where its Middle Is?

    NASA Astrophysics Data System (ADS)

    Howard, Martin; Rutenberg, Andrew

    2004-03-01

    I will discuss the physical principles lying behind the acquisition of accurate positional information in bacteria. A good application of these ideas is to the rod-shaped bacterium E. coli, which divides precisely at its cellular midplane. This positioning is controlled by the Min system of proteins. These proteins coherently oscillate from end to end of the bacterium. I will present a reaction-diffusion model that describes the diffusion of the Min proteins and their binding/unbinding from the cell membrane. The system possesses an instability that spontaneously generates the Min oscillations, which control accurate placement of the midcell division site. I will then discuss the role of fluctuations in protein dynamics, and investigate whether fluctuations set optimal protein concentration levels. Finally I will examine cell division in a different bacterium, B. subtilis, where different physical principles are used to regulate accurate cell division. See: Howard, Rutenberg, de Vet: Dynamic compartmentalization of bacteria: accurate division in E. coli. Phys. Rev. Lett. 87, 278102 (2001). Howard, Rutenberg: Pattern formation inside bacteria: fluctuations due to the low copy number of proteins. Phys. Rev. Lett. 90, 128102 (2003). Howard: A mechanism for polar protein localization in bacteria. J. Mol. Biol. 335, 655-663 (2004).

  12. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-10-20

    Getting a land vehicle's accurate position, azimuth and attitude rapidly is significant for vehicle-based weapons' combat effectiveness. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and no less than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS), so it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not place high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents the theoretical derivation of the algorithm, and verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″, respectively, in less than 3 min.

  13. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time-accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  14. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDRs). In this paper, accurate measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed for the relative correction of the fiber refractive index to allow accurate fiber length measurement.
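
    The underlying length calculation is a one-line conversion from the measured delay. The group index and delay below are illustrative assumptions (a typical single-mode-fiber value near 1550 nm), not the paper's measurements, and a single-pass geometry is assumed:

    ```python
    # Time-of-flight length of a fiber: the optical pulse travels at c / n_g,
    # so a measured one-way group delay t gives L = c * t / n_g.
    c = 299_792_458.0        # m/s, speed of light in vacuum (exact SI value)
    n_g = 1.4682             # assumed group index of standard SMF at 1550 nm
    delay = 97.94e-6         # s, hypothetical measured one-way delay

    length = c * delay / n_g  # meters
    print(round(length, 1))
    ```

    Locking the time-interval counter to a GPS-disciplined oscillator makes the delay, and hence the length, traceable to the SI second; the uncertainty in n_g then dominates, which is why the paper proposes a refractive-index correction.
    
    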

  15. Development of a theoretical model describing sonoporation activity of cells exposed to ultrasound in the presence of contrast agents

    PubMed Central

    Forbes, Monica M.; O’Brien, William D.

    2012-01-01

    Sonoporation uses ultrasound, with the aid of ultrasound contrast agents (UCAs), to enhance cell permeabilization, thereby allowing delivery of therapeutic compounds noninvasively into specific target cells. The objective of this study was to determine if a computational model describing shear stress on a cell membrane due to microstreaming would successfully reflect sonoporation activity with respect to the peak rarefactional pressure. The theoretical models were compared to the sonoporation results from Chinese hamster ovary cells using Definity® at 0.9, 3.15, and 5.6 MHz and were found to accurately describe the maximum sonoporation activity, the pressure where a decrease in sonoporation activity occurs, and relative differences between maximum activity and the activity after that decrease. Therefore, the model supports the experimental findings that shear stress on cell membranes secondary to oscillating UCAs results in sonoporation. PMID:22501051

  16. 43 CFR 3832.12 - When I record a mining claim or site, how do I describe the lands I have claimed?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...; or (B) A narrative or sketch describing the claim or site and tying the description to a natural... quarter section accurately enough for BLM to identify the mining claims or sites on the ground. (iii) You... or sites are clearly identified. (iv) You are not required to employ a professional surveyor...

  17. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography and, more recently, single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations and varying degrees of signal noise.
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to

  18. Fast and accurate line scanner based on white light interferometry

    NASA Astrophysics Data System (ADS)

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

    White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white-light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces.
We therefore propose an alternate geometry where the incident light is

  19. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

    This paper derives accurate equations for the normal and shear force as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated over a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.

  20. Must Kohn-Sham oscillator strengths be accurate at threshold?

    SciTech Connect

    Yang Zenghui; Burke, Kieron; Faassen, Meta van

    2009-09-21

    The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.

  1. Accurate torque-speed performance prediction for brushless dc motors

    NASA Astrophysics Data System (ADS)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. Effectively applying the BLDCM, however, requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral horsepower motor sizes, and results are presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.

  2. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1992-01-01

    The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in smooth regions of the solution, except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes which are upwind monotone and of uniform second- or third-order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state-of-the-art methods.
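
    A generic minmod-limited MUSCL step for constant-speed advection illustrates the monotonicity constraint the abstract mentions; this is the standard Van Leer construction, not Huynh's specific upwind-monotone schemes:

    ```python
    import numpy as np

    # Linear advection u_t + a*u_x = 0 with a > 0 on a periodic grid, advanced
    # with piecewise-linear (MUSCL-type) reconstruction and a minmod slope
    # limiter so the update introduces no new extrema.
    def minmod(a, b):
        return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def muscl_step(u, cfl):
        dl = u - np.roll(u, 1)        # backward differences
        dr = np.roll(u, -1) - u       # forward differences
        s = minmod(dl, dr)            # limited slope in each cell
        u_face = u + 0.5 * s          # upwind (a > 0) state at interface i+1/2
        return u - cfl * (u_face - np.roll(u_face, 1))

    n, cfl = 200, 0.5
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)  # square-wave initial data
    total0 = u.sum()
    for _ in range(100):
        u = muscl_step(u, cfl)

    # The limiter keeps the solution within its initial bounds (no new
    # oscillations) and the conservative form preserves the total mass.
    print(round(float(u.min()), 6), round(float(u.max()), 6))
    ```

    At this CFL number each updated cell value is a convex combination of neighboring values, which is why no overshoots appear at the square wave's discontinuities.
    
    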

  3. In-line sensor for accurate rf power measurements

    NASA Astrophysics Data System (ADS)

    Gahan, D.; Hopkins, M. B.

    2005-10-01

    An in-line sensor has been constructed with 50 Ω characteristic impedance to accurately measure rf power dissipated in a matched or unmatched load with a view to being implemented as an rf discharge diagnostic. The physical construction and calibration technique are presented. The design is a wide band, hybrid directional coupler/current-voltage sensor suitable for fundamental and harmonic power measurements. A comparison with a standard wattmeter using dummy load impedances shows that this in-line sensor is significantly more accurate in mismatched conditions.

  4. In-line sensor for accurate rf power measurements

    SciTech Connect

    Gahan, D.; Hopkins, M.B.

    2005-10-15

    An in-line sensor has been constructed with 50 Ω characteristic impedance to accurately measure rf power dissipated in a matched or unmatched load with a view to being implemented as an rf discharge diagnostic. The physical construction and calibration technique are presented. The design is a wide band, hybrid directional coupler/current-voltage sensor suitable for fundamental and harmonic power measurements. A comparison with a standard wattmeter using dummy load impedances shows that this in-line sensor is significantly more accurate in mismatched conditions.

  5. Time-Accurate Numerical Simulations of Synthetic Jet in Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of a three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on the average velocity during the discharge phase of the cycle, V_j, and the jet width, d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  6. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the use of these methods to calculate (1) relative free energies and (2) free energies along reaction paths, using simple test cases with relevance to enzymes.

  7. Generating Accurate Urban Area Maps from Nighttime Satellite (DMSP/OLS) Data

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc; Lawrence, William; Elvidge, Christopher

    2000-01-01

    There has been increasing interest in the international research community in using the nighttime "city-lights" data sets collected by the US Defense Meteorological Satellite Program's Operational Linescan System to study issues related to urbanization. Many researchers are interested in using these data to estimate human demographic parameters over large areas and then characterize the interactions between urban development, natural ecosystems, and other aspects of the human enterprise. Many of these attempts rely on an ability to accurately identify urbanized areas. However, beyond the simple determination of the loci of human activity, using these data to generate accurate estimates of urbanized area can be problematic. Sensor blooming and registration error can cause large overestimates of urban land based on a simple measure of lit area from the raw data. We discuss these issues, show results of an attempt to model historical urban growth in Egypt, and then describe a few basic processing techniques that use geo-spatial analysis to threshold the DMSP data to accurately estimate urbanized areas. Algorithm results are shown for the United States, and an application of the data to estimate the impact of urban sprawl on sustainable agriculture in the US and China is described.

  8. Observing Double Stars

    NASA Astrophysics Data System (ADS)

    Genet, Russell M.; Fulton, B. J.; Bianco, Federica B.; Martinez, John; Baxter, John; Brewer, Mark; Carro, Joseph; Collins, Sarah; Estrada, Chris; Johnson, Jolyon; Salam, Akash; Wallen, Vera; Warren, Naomi; Smith, Thomas C.; Armstrong, James D.; McGaughey, Steve; Pye, John; Mohanan, Kakkala; Church, Rebecca

    2012-05-01

    Double stars have been systematically observed since William Herschel initiated his program in 1779. In 1803 he reported that, to his surprise, many of the systems he had been observing for a quarter century were gravitationally bound binary stars. In 1830 the first binary orbital solution was obtained, leading eventually to the determination of stellar masses. Double star observations have been a prolific field, with observations and discoveries - often made by students and amateurs - routinely published in a number of specialized journals such as the Journal of Double Star Observations. All published double star observations from Herschel's to the present have been incorporated in the Washington Double Star Catalog. In addition to reviewing the history of visual double stars, we discuss four observational technologies and illustrate these with our own observational results from both California and Hawaii on telescopes ranging from small SCTs to the 2-meter Faulkes Telescope North on Haleakala. Two of these technologies are visual observations aimed primarily at published "hands-on" student science education, and CCD observations of both bright and very faint doubles. The other two are recent technologies that have launched a double star renaissance. These are lucky imaging and speckle interferometry, both of which can use electron-multiplying CCD cameras to allow short (30 ms or less) exposures that are read out at high speed with very low noise. Analysis of thousands of high speed exposures allows normal seeing limitations to be overcome so very close doubles can be accurately measured.

  9. Scaling laws describe memories of host-pathogen riposte in the HIV population.

    PubMed

    Barton, John P; Kardar, Mehran; Chakraborty, Arup K

    2015-02-17

    The enormous genetic diversity and mutability of HIV has prevented effective control of this virus by natural immune responses or vaccination. Evolution of the circulating HIV population has thus occurred in response to diverse, ultimately ineffective, immune selection pressures that randomly change from host to host. We show that the interplay between the diversity of human immune responses and the ways that HIV mutates to evade them results in distinct sets of sequences defined by similar collectively coupled mutations. Scaling laws that relate these sets of sequences resemble those observed in linguistics and other branches of inquiry, and dynamics reminiscent of neural networks are observed. Like neural networks that store memories of past stimulation, the circulating HIV population stores memories of host-pathogen combat won by the virus. We describe an exactly solvable model that captures the main qualitative features of the sets of sequences and a simple mechanistic model for the origin of the observed scaling laws. Our results define collective mutational pathways used by HIV to evade human immune responses, which could guide vaccine design.

  10. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Science Inventory

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  11. Device accurately measures and records low gas-flow rates

    NASA Technical Reports Server (NTRS)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  12. Ultrasonic system for accurate distance measurement in the air.

    PubMed

    Licznerski, Tomasz J; Jaroński, Jarosław; Kosz, Dariusz

    2011-12-01

    This paper presents a system that accurately measures the distance travelled by ultrasound waves through the air. The simple design of the system and the accuracy obtained provide a tool for the non-contact distance measurements required by a laser optical system for investigating the surface of the eyeball.

  13. Monitoring circuit accurately measures movement of solenoid valve

    NASA Technical Reports Server (NTRS)

    Gillett, J. D.

    1966-01-01

    A solenoid-operated valve in a control system powered by direct current is used to accurately measure the valve travel. This system is currently in operation with a 28-vdc power system used for control of fluids in liquid rocket motor test facilities.

  14. Instrument accurately measures small temperature changes on test surface

    NASA Technical Reports Server (NTRS)

    Harvey, W. D.; Miller, H. B.

    1966-01-01

    Calorimeter apparatus accurately measures very small temperature rises on a test surface subjected to aerodynamic heating. A continuous thin sheet of a sensing material is attached to a base support plate through which a series of holes of known diameter have been drilled for attaching thermocouples to the material.

  15. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  16. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  17. Ellipsoidal-mirror reflectometer accurately measures infrared reflectance of materials

    NASA Technical Reports Server (NTRS)

    Dunn, S. T.; Richmond, J. C.

    1967-01-01

    Reflectometer accurately measures the reflectance of specimens in the infrared beyond 2.5 microns and under geometric conditions approximating normal irradiation and hemispherical viewing. It includes an ellipsoidal mirror, a specially coated averaging sphere associated with a detector for minimizing spatial and angular sensitivity, and an incident flux chopper.

  18. Second-order accurate nonoscillatory schemes for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1989-01-01

    Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution does not increase in time.

  19. How Accurate Are Judgments of Intelligence by Strangers?

    ERIC Educational Resources Information Center

    Borkenau, Peter

    Whether judgments made by complete strangers as to the intelligence of subjects are accurate or merely illusory was studied in Germany. Target subjects were 50 female and 50 male adults recruited through a newspaper article. Eighteen judges, who did not know the subjects, were recruited from a university community. Videorecordings of the subjects,…

  20. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  1. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    ERIC Educational Resources Information Center

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  2. A fast and accurate algorithm for ℓ1 minimization problems in compressive sampling

    NASA Astrophysics Data System (ADS)

    Chen, Feishe; Shen, Lixin; Suter, Bruce W.; Xu, Yuesheng

    2015-12-01

    An accurate and efficient algorithm for solving the constrained ℓ1-norm minimization problem is much needed and is crucial for the success of sparse signal recovery in compressive sampling. We tackle the constrained ℓ1-norm minimization problem by reformulating it via an indicator function that describes the constraints. The resulting model is solved efficiently and accurately by an elegant proximity operator-based algorithm. Numerical experiments show that the proposed algorithm performs well for sparse signals with magnitudes over a high dynamic range. Furthermore, it performs significantly better than the well-known algorithms NESTA (a shorthand for Nesterov's algorithm) and DADM (the dual alternating direction method) in terms of the quality of the restored signals and the computational complexity measured in CPU time.
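    The paper's own proximity-operator algorithm is not reproduced here; as a minimal sketch of the general idea, the following uses plain proximal-gradient (ISTA) iterations with the soft-thresholding proximity operator of the ℓ1 norm, applied to an unconstrained lasso formulation (problem sizes and the penalty weight are illustrative assumptions):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximity operator of t*||.||_1: component-wise soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    # Proximal-gradient iterations for min 0.5*||Ax - b||^2 + lam*||x||_1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Noiseless compressed-sensing toy problem: recover a 3-sparse signal
# from 80 random measurements of a length-200 vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, -2.0, 4.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

    The three largest entries of `x_hat` land on the true support; more elaborate proximity-operator schemes (such as the one in this paper) mainly differ in how the constraint's indicator function enters the splitting.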

  3. Accurate protein crystallography at ultra-high resolution: Valence electron distribution in crambin

    PubMed Central

    Jelsch, Christian; Teeter, Martha M.; Lamzin, Victor; Pichon-Pesme, Virginie; Blessing, Robert H.; Lecomte, Claude

    2000-01-01

    The charge density distribution of a protein has been refined experimentally. Diffraction data for a crambin crystal were measured to ultra-high resolution (0.54 Å) at low temperature by using short-wavelength synchrotron radiation. The crystal structure was refined with a model for charged, nonspherical, multipolar atoms to accurately describe the molecular electron density distribution. The refined parameters agree within 25% with our transferable electron density library derived from accurate single crystal diffraction analyses of several amino acids and small peptides. The resulting electron density maps of redistributed valence electrons (deformation maps) compare quantitatively well with a high-level quantum mechanical calculation performed on a monopeptide. This study provides validation for experimentally derived parameters and a window into charge density analysis of biological macromolecules. PMID:10737790

  4. Accurate protein crystallography at ultra-high resolution: valence electron distribution in crambin.

    PubMed

    Jelsch, C; Teeter, M M; Lamzin, V; Pichon-Pesme, V; Blessing, R H; Lecomte, C

    2000-03-28

    The charge density distribution of a protein has been refined experimentally. Diffraction data for a crambin crystal were measured to ultra-high resolution (0.54 Å) at low temperature by using short-wavelength synchrotron radiation. The crystal structure was refined with a model for charged, nonspherical, multipolar atoms to accurately describe the molecular electron density distribution. The refined parameters agree within 25% with our transferable electron density library derived from accurate single crystal diffraction analyses of several amino acids and small peptides. The resulting electron density maps of redistributed valence electrons (deformation maps) compare quantitatively well with a high-level quantum mechanical calculation performed on a monopeptide. This study provides validation for experimentally derived parameters and a window into charge density analysis of biological macromolecules.

  5. Learning accurate and concise naïve Bayes classifiers from attribute value taxonomies and data

    PubMed Central

    Kang, D.-K.; Silvescu, A.; Honavar, V.

    2009-01-01

    In many application domains, there is a need for learning algorithms that can effectively exploit attribute value taxonomies (AVT)—hierarchical groupings of attribute values—to learn compact, comprehensible and accurate classifiers from data—including data that are partially specified. This paper describes AVT-NBL, a natural generalization of the naïve Bayes learner (NBL), for learning classifiers from AVT and data. Our experimental results show that AVT-NBL is able to generate classifiers that are substantially more compact and more accurate than those produced by NBL on a broad range of data sets with different percentages of partially specified values. We also show that AVT-NBL is more efficient in its use of training data: AVT-NBL produces classifiers that outperform those produced by NBL using substantially fewer training examples. PMID:20351793
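    The core AVT idea, mapping detailed attribute values onto coarser taxonomy nodes before counting, can be sketched as follows. The taxonomy, the single attribute, and the labels here are hypothetical illustrations; AVT-NBL itself searches over cuts of a full taxonomy tree rather than using one fixed mapping:

```python
from collections import Counter, defaultdict
import math

# Hypothetical two-level attribute value taxonomy: detailed colour values
# are lifted to coarser nodes, shrinking the classifier's parameter count.
TAXONOMY = {"crimson": "red", "scarlet": "red", "navy": "blue", "azure": "blue"}

def lift(value):
    return TAXONOMY.get(value, value)

def train(examples):
    # examples: list of (attribute_value, class_label) pairs; a single
    # attribute is used for brevity.
    counts = defaultdict(Counter)
    priors = Counter()
    for value, label in examples:
        counts[label][lift(value)] += 1
        priors[label] += 1
    return counts, priors

def predict(counts, priors, value):
    # Laplace-smoothed naive Bayes score computed over the lifted value.
    v, total = lift(value), sum(priors.values())
    vocab = {w for c in counts.values() for w in c}
    def score(label):
        return (math.log(priors[label] / total)
                + math.log((counts[label][v] + 1) / (priors[label] + len(vocab))))
    return max(priors, key=score)

data = [("crimson", "warm"), ("scarlet", "warm"), ("navy", "cool"), ("azure", "cool")]
counts, priors = train(data)
```

    Because counts are pooled at the taxonomy node, each class needs only one parameter per node instead of one per raw value, which is the compactness gain the abstract reports.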

  6. WARP: accurate retrieval of shapes using phase of fourier descriptors and time warping distance.

    PubMed

    Bartolini, Ilaria; Ciaccia, Paolo; Patella, Marco

    2005-01-01

    Effective and efficient retrieval of similar shapes from large image databases is still a challenging problem, in spite of the high relevance that shape information can have in describing image contents. In this paper, we propose a novel Fourier-based approach, called WARP, for matching and retrieving similar shapes. The unique characteristics of WARP are the exploitation of the phase of Fourier coefficients and the use of the Dynamic Time Warping (DTW) distance to compare shape descriptors. While phase information provides a more accurate description of object boundaries than using only the amplitude of Fourier coefficients, the DTW distance permits us to accurately match images even in the presence of (limited) phase shifts. In terms of classical precision/recall measures, we experimentally demonstrate that WARP can gain up to 35 percent in precision at a 20 percent recall level with respect to Fourier-based techniques that use neither phase nor the DTW distance.
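    The DTW distance WARP relies on can be sketched with the textbook dynamic-programming recurrence (this is the generic distance only, not WARP's Fourier-descriptor pipeline):

```python
import math

def dtw(a, b):
    # Classic O(len(a)*len(b)) dynamic time warping distance between two
    # real-valued sequences, using the unit step pattern and absolute cost.
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]
```

    Unlike a rigid point-by-point (Euclidean) comparison, the warping path can absorb a limited shift between two descriptor sequences, which is exactly why DTW tolerates the phase shifts mentioned above.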

  7. Towards numerically accurate many-body perturbation theory: Short-range correlation effects

    SciTech Connect

    Gulans, Andris

    2014-10-28

    The example of the uniform electron gas is used for showing that the short-range electron correlation is difficult to handle numerically, while it noticeably contributes to the self-energy. Nonetheless, in condensed-matter applications studied with advanced methods, such as the GW and random-phase approximations, it is common to neglect contributions due to high-momentum (large q) transfers. Then, the short-range correlation is poorly described, which leads to inaccurate correlation energies and quasiparticle spectra. To circumvent this problem, an accurate extrapolation scheme is proposed. It is based on an analytical derivation for the uniform electron gas presented in this paper, and it provides an explanation why accurate GW quasiparticle spectra are easy to obtain for some compounds and very difficult for others.

  8. The Concerned Observer Experiment.

    ERIC Educational Resources Information Center

    Rabiger, Michael

    1991-01-01

    Describes a classroom experiment--the "concerned observer" experiment--for production students that dramatizes basic film language by relating it to several levels of human observation. Details the experiment's three levels, and concludes that film language mimics wide-ranging states of human emotion and ideological persuasion. (PRA)

  9. Accurate Radiometry from Space: An Essential Tool for Climate Studies

    NASA Technical Reports Server (NTRS)

    Fox, Nigel; Kaiser-Weiss, Andrea; Schmutz, Werner; Thome, Kurtis; Young, Dave; Wielicki, Bruce; Winkler, Rainer; Woolliams, Emma

    2011-01-01

    The Earth's climate is undoubtedly changing; however, the time scale, consequences, and causal attribution remain the subject of significant debate and uncertainty. Detection of subtle indicators from a background of natural variability requires measurements over a time base of decades. This places severe demands on the instrumentation used, requiring measurements of sufficient accuracy and sensitivity that reliable judgements can be made decades apart. The International System of Units (SI) and the network of National Metrology Institutes were developed to address such requirements. However, ensuring and maintaining SI traceability of sufficient accuracy in instruments orbiting the Earth presents a significant new challenge to the metrology community. This paper highlights some key measurands and applications driving the uncertainty demands of the climate community in the solar reflective domain, e.g. solar irradiances and reflectances/radiances of the Earth. It discusses how meeting these uncertainties would facilitate significant improvements in the forecasting abilities of climate models. After discussing the current state of the art, it describes a new satellite mission, called TRUTHS, which enables, for the first time, high-accuracy SI traceability to be established in orbit. The direct use of a primary standard and replication of the terrestrial traceability chain extends the SI into space, in effect realizing a metrology laboratory in space. Keywords: climate change; Earth observation; satellites; radiometry; solar irradiance

  10. How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?

    NASA Astrophysics Data System (ADS)

    Lin, Zesen; Fang, Guanwen; Kong, Xu

    2016-12-01

    Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities (L_IR) of galaxies. By utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatically extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ≈ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolation from the longest available PACS band based on the Chary & Elbaz template is a good estimator. Moreover, if a PACS measurement is unavailable, extrapolation from SPIRE observations based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The rest-frame 10-100 μm emission of the IR SED is well described by all three templates, but only the Dale & Helou template gives a nearly unbiased estimate of the rest-frame submillimeter emission.

  11. Earth Observation

    NASA Technical Reports Server (NTRS)

    1994-01-01

    For pipeline companies, mapping, facilities inventory, pipe inspections, environmental reporting, and similar tasks are a monumental undertaking. An Automated Mapping/Facilities Management/Geographic Information System (AM/FM/GIS) is the solution; however, this is costly and time-consuming. James W. Sewall Company, an AM/FM/GIS consulting firm, proposed an EOCAP project to Stennis Space Center (SSC) to develop a computerized system for storage and retrieval of digital aerial photography. This would provide its customer, Algonquin Gas Transmission Company, with an accurate inventory of rights-of-way locations and pipeline surroundings. The project took four years to complete, and an important byproduct was SSC's Digital Aerial Rights-of-Way Monitoring System (DARMS). DARMS saves substantial time and money. EOCAP enabled Sewall to develop new products and expand its customer base, and Algonquin now manages regulatory requirements more efficiently and accurately. EOCAP provides government co-funding to encourage private investment in and broader use of NASA remote sensing technology. Because changes on Earth's surface are accelerating, planners and resource managers must assess the consequences of change as quickly and accurately as possible. Pacific Meridian Resources and NASA's Stennis Space Center (SSC) developed a system for monitoring changes in land cover and use, which incorporated the latest change detection technologies. The goal of this EOCAP project was to tailor existing technologies to a system that could be commercialized. Landsat imagery enabled Pacific Meridian to identify areas that had sustained substantial vegetation loss. The project was successful, and Pacific Meridian's annual revenues have substantially increased.

  12. Management of stroke as described by Ibn Sina (Avicenna) in the Canon of Medicine.

    PubMed

    Zargaran, Arman; Zarshenas, Mohammad M; Karimi, Aliasghar; Yarmohammadi, Hassan; Borhani-Haghighi, Afshin

    2013-11-15

    Stroke, or cerebrovascular accident (CVA), is caused by a disturbance of the blood supply to the brain and an ensuing loss of brain function. The first recorded observations date to 2455 BC, and the condition was studied intensely by physicians throughout history. In the early medieval period, Ibn Sina (980-1037 AD) called stroke sekteh and described it extensively. Some of Ibn Sina's definitions and his etiology of stroke are based on humoral theories and cannot be compared with current medical concepts, but most of his descriptions concur with current definitions. This review examines the definition, etiology, clinical manifestations, prognosis, differential diagnosis, and interventions for stroke based on Ibn Sina's epic work, the Canon of Medicine. The pharmacological effects of the medicinal herbs suggested by Ibn Sina for stroke are examined in light of current knowledge.

  13. Theoretical study of production of unique glasses in space. [kinetic relationships describing nucleation and crystallization phenomena

    NASA Technical Reports Server (NTRS)

    Larsen, D. C.; Sievert, J. L.

    1975-01-01

    The potential of producing the glassy form of selected materials in the weightless, containerless environment of space processing is examined through the development of kinetic relationships describing nucleation and crystallization phenomena. Transformation kinetics are applied to a well-characterized system (SiO2), an excellent glass former (B2O3), and a material that is a poor glass former under conventional terrestrial processing (Al2O3). Viscosity and entropy of fusion are shown to be the primary material parameters controlling the glass-forming tendency. For multicomponent systems, diffusion-controlled kinetics and heterogeneous nucleation effects are considered. An analytical-empirical approach is used to analyze the mullite system. Results are consistent with experimentally observed data and indicate the promise of mullite as a future space-processing candidate.

  14. Using a task analysis to describe nursing work in acute care patient environments.

    PubMed

    Battisto, Dina; Pak, Richard; Vander Wood, Melissa A; Pilcher, June J

    2009-12-01

    To improve the healthcare environment where nurses work and patients receive care, it is necessary to understand the elements that define the healthcare environment. Primary elements include (a) the occupants of the room and what knowledge, skills, and abilities they bring to the situation; (b) what tasks the occupants will be doing in the room; and (c) the characteristics of the built environment. To better understand these components, a task analysis from human factor research was conducted to study nurses as they cared for hospitalized patients. Multiple methods, including a review of nursing textbooks, observations, and interviews, were used to describe nurses' capabilities, nursing activities, and the environmental problems with current patient room models. Findings from this initial study are being used to inform the design and evaluation of an inpatient room prototype and to generate future research in improving clinical environments to support nursing productivity.

  15. Toward Universal Half-Saturation Coefficients: Describing Extant Ks as a Function of Diffusion.

    PubMed

    Shaw, Andrew; Takacs, Imre; Pagilla, Krishna; Riffat, Rumana; DeClippeleir, Haydee; Wilson, Christopher; Murthy, Sudhir

    2015-05-01

    Observed (extant) Ks is not a constant; it is strongly influenced by diffusion. This paper argues that diffusion can be used to describe bacterial kinetic effects that are sometimes attributed to "K-strategists" and that, in fact, the physics of the system, not intrinsic biological characteristics, is the dominant mechanism affecting the apparent (extant) Ks in real water resource recovery facility systems. Four different biological processes have been modeled using the "porter-diffusion" model originally developed by Pasciak and Gavis (1974) for aquatic systems. The results demonstrate that diffusion is the dominant mechanism affecting Ks in all four biological processes. Therefore, the authors argue that for treatment processes in which substrate concentrations are low, it is important to consider shifting to variable extant Ks values or explicitly modeling the effects of diffusion.
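    The diffusion effect can be illustrated numerically with a minimal film model: when Monod uptake is fed through a mass-transfer resistance, the bulk concentration giving half-maximal uptake (the extant Ks) exceeds the intrinsic Ks. The parameter values and units below are assumed for illustration and are not taken from the paper:

```python
import numpy as np

# Illustrative parameters (assumed): max uptake rate, intrinsic half-saturation
# coefficient, and external mass-transfer coefficient.
q_max, Ks, kL = 1.0, 0.5, 0.2

def surface_conc(Sb):
    # Steady state at the cell surface: diffusive supply kL*(Sb - S)
    # balances Monod uptake q_max*S/(Ks + S); solved as a quadratic in S.
    b = q_max + kL * Ks - kL * Sb
    return (-b + np.sqrt(b * b + 4 * kL * kL * Sb * Ks)) / (2 * kL)

# Observed uptake rate as a function of *bulk* substrate concentration.
Sb = np.linspace(0.01, 50, 5000)
rate = q_max * surface_conc(Sb) / (Ks + surface_conc(Sb))

# Extant (apparent) Ks: the bulk concentration giving half-maximal uptake.
Ks_extant = Sb[np.argmin(np.abs(rate - 0.5 * q_max))]
```

    With these numbers the extant Ks comes out several times larger than the intrinsic value (here 3.0 versus 0.5), purely from transport physics, which is the paper's central point.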

  16. Model-independent inference on compact-binary observations

    NASA Astrophysics Data System (ADS)

    Mandel, Ilya; Farr, Will M.; Colonna, Andrea; Stevenson, Simon; Tiňo, Peter; Veitch, John

    2017-03-01

    The recent advanced LIGO detections of gravitational waves from merging binary black holes enhance the prospect of exploring binary evolution via gravitational-wave observations of a population of compact-object binaries. In the face of uncertainty about binary formation models, model-independent inference provides an appealing alternative to comparisons between observed and modelled populations. We describe a procedure for clustering in the multidimensional parameter space of observations that are subject to significant measurement errors. We apply this procedure to a mock data set of population-synthesis predictions for the masses of merging compact binaries convolved with realistic measurement uncertainties, and demonstrate that we can accurately distinguish subpopulations of binary neutron stars, binary black holes, and mixed neutron star-black hole binaries with tens of observations.
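    A minimal sketch of such model-independent clustering, assuming a one-dimensional two-component Gaussian mixture fitted by EM on mock mass samples (the paper's procedure works in a multidimensional parameter space and accounts for per-event measurement errors; the numbers below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Mock mass samples from two subpopulations, blurred by measurement scatter.
x = np.concatenate([rng.normal(1.2, 0.15, 300), rng.normal(8.0, 1.0, 300)])

# Two-component Gaussian mixture fitted by expectation-maximization.
mu, sig, w = np.array([0.0, 5.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(200):
    # E-step: responsibility of each component for each sample.
    pdf = w / (sig * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2)
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted means, widths, and mixture weights.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(x)
```

    The fitted component means recover the two injected subpopulations without assuming any binary formation model, which is the spirit of the clustering procedure described above.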

  17. A springy pendulum could describe the swing leg kinetics of human walking.

    PubMed

    Song, Hyunggwi; Park, Heewon; Park, Sukyung

    2016-06-14

    The dynamics of human walking under various walking conditions can be qualitatively captured by springy leg dynamics, which have been used as a theoretical framework for bipedal robotics applications. However, the spring-loaded inverted pendulum model describes the motion of the center of mass (CoM), which lumps the torso, swing leg, and stance leg together and does not explicitly indicate whether the inter-limb dynamics share the springy leg characteristics of the CoM. In this study, we examined whether the swing leg dynamics could also be represented by springy mechanics and whether the swing leg stiffness depends on gait speed, as has been observed in CoM mechanics during walking. The swing leg was modeled as a spring-loaded pendulum hinged at the hip joint, which undergoes forward motion. The model parameters for the loaded mass were adopted from body parameters and anthropometric tables, whereas the free model parameters for the rest length of the spring and its stiffness were estimated to best match the data for the swing leg joint forces. The joint forces of the swing leg were well represented by the springy pendulum model at various walking speeds, with a regression coefficient of R^2 > 0.8. The swing leg stiffness increased with walking speed and was correlated with the swing frequency, which is consistent with previous observations of CoM dynamics described using a compliant leg. These results suggest that the swing leg also shares springy dynamics and that the compliant walking model could be extended to better represent swing leg dynamics.
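    The spring-loaded pendulum in this record can be sketched numerically. The following is a minimal illustration, not the authors' fitted model: a planar spring pendulum with a fixed (rather than forward-moving) pivot, integrated with a classical RK4 scheme; the mass, stiffness, and rest length are placeholder values, not gait-derived parameters.

```python
import math

# State y = (r, r_dot, th, th_dot); th is measured from the downward vertical.
# Equations of motion for a spring pendulum (fixed pivot, no damping):
#   r''  = r*th'^2 + g*cos(th) - (k/m)*(r - r0)
#   th'' = (-2*r'*th' - g*sin(th)) / r
M, K, R0, G = 5.0, 400.0, 0.9, 9.81   # illustrative placeholders

def deriv(y):
    r, rd, th, thd = y
    rdd = r * thd**2 + G * math.cos(th) - (K / M) * (r - R0)
    thdd = (-2.0 * rd * thd - G * math.sin(th)) / r
    return (rd, rdd, thd, thdd)

def rk4_step(y, dt):
    def add(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = deriv(y)
    k2 = deriv(add(y, k1, dt / 2))
    k3 = deriv(add(y, k2, dt / 2))
    k4 = deriv(add(y, k3, dt))
    return tuple(yi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def energy(y):
    """Total mechanical energy: kinetic + gravitational + elastic."""
    r, rd, th, thd = y
    kinetic = 0.5 * M * (rd**2 + (r * thd)**2)
    return kinetic - M * G * r * math.cos(th) + 0.5 * K * (r - R0)**2

y = (1.0, 0.0, 0.4, 0.0)   # released at 0.4 rad with the spring stretched
e0 = energy(y)
for _ in range(2000):       # 2 s of motion at dt = 1 ms
    y = rk4_step(y, 1e-3)
```

    Because the system is conservative, near-constant total energy over the simulated interval is a quick sanity check on the integrator.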

  18. Quantitative law describing market dynamics before and after interest-rate change

    NASA Astrophysics Data System (ADS)

    Petersen, Alexander M.; Wang, Fengzhong; Havlin, Shlomo; Stanley, H. Eugene

    2010-06-01

    We study the behavior of U.S. markets both before and after U.S. Federal Open Market Commission meetings and show that the announcement of a U.S. Federal Reserve rate change causes a financial shock, where the dynamics after the announcement is described by an analog of the Omori earthquake law. We quantify the rate n(t) of aftershocks following an interest-rate change at time T and find power-law decay which scales as n(t-T) ~ (t-T)^(-Ω), with Ω positive. Surprisingly, we find that the same law describes the rate n'(|t-T|) of “preshocks” before the interest-rate change at time T. This study quantitatively relates the size of the market response to the news which caused the shock and uncovers the presence of quantifiable preshocks. We demonstrate that the news associated with interest-rate change is responsible for causing both the anticipation before the announcement and the surprise after the announcement. We estimate the magnitude of financial news using the relative difference between the U.S. Treasury Bill and the Federal Funds effective rate. Our results are consistent with the “sign effect,” in which “bad news” has a larger impact than “good news.” Furthermore, we observe significant volatility aftershocks, confirming a “market under-reaction” that lasts at least one trading day.
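    The power-law decay n(t-T) ~ (t-T)^(-Ω) reported in this record can be recovered from rate data by an ordinary least-squares fit of log n against log(t-T). A minimal sketch on synthetic (not market) data, with an illustrative amplitude and exponent:

```python
import math

# Synthetic Omori-style aftershock rates: n(dt) = AMP * dt^(-OMEGA_TRUE).
OMEGA_TRUE, AMP = 0.7, 50.0
lag = [0.5 + 0.5 * i for i in range(40)]            # time since the shock
rate = [AMP * dt ** (-OMEGA_TRUE) for dt in lag]    # ideal power-law decay

# In log-log coordinates the power law is a straight line with slope -Omega,
# so the fitted OLS slope gives the decay exponent directly.
xs = [math.log(dt) for dt in lag]
ys = [math.log(n) for n in rate]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
omega_hat = -slope   # estimated decay exponent
```

    With noisy empirical rates the same regression applies; only the residual scatter, not the procedure, changes.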

  19. Quantitative law describing market dynamics before and after interest-rate change.

    PubMed

    Petersen, Alexander M; Wang, Fengzhong; Havlin, Shlomo; Stanley, H Eugene

    2010-06-01

    We study the behavior of U.S. markets both before and after U.S. Federal Open Market Commission meetings and show that the announcement of a U.S. Federal Reserve rate change causes a financial shock, where the dynamics after the announcement is described by an analog of the Omori earthquake law. We quantify the rate n(t) of aftershocks following an interest-rate change at time T and find power-law decay which scales as n(t-T) ~ (t-T)^(-Ω), with Ω positive. Surprisingly, we find that the same law describes the rate n'(|t-T|) of "preshocks" before the interest-rate change at time T. This study quantitatively relates the size of the market response to the news which caused the shock and uncovers the presence of quantifiable preshocks. We demonstrate that the news associated with interest-rate change is responsible for causing both the anticipation before the announcement and the surprise after the announcement. We estimate the magnitude of financial news using the relative difference between the U.S. Treasury Bill and the Federal Funds effective rate. Our results are consistent with the "sign effect," in which "bad news" has a larger impact than "good news." Furthermore, we observe significant volatility aftershocks, confirming a "market under-reaction" that lasts at least one trading day.

  20. Quantitative law describing market dynamics before and after interest-rate change

    SciTech Connect

    Petersen, Alexander M.; Wang Fengzhong; Stanley, H. Eugene; Havlin, Shlomo

    2010-06-15

    We study the behavior of U.S. markets both before and after U.S. Federal Open Market Commission meetings and show that the announcement of a U.S. Federal Reserve rate change causes a financial shock, where the dynamics after the announcement is described by an analog of the Omori earthquake law. We quantify the rate n(t) of aftershocks following an interest-rate change at time T and find power-law decay which scales as n(t-T) ≈ (t-T)^(-Ω), with Ω positive. Surprisingly, we find that the same law describes the rate n'(|t-T|) of 'preshocks' before the interest-rate change at time T. This study quantitatively relates the size of the market response to the news which caused the shock and uncovers the presence of quantifiable preshocks. We demonstrate that the news associated with interest-rate change is responsible for causing both the anticipation before the announcement and the surprise after the announcement. We estimate the magnitude of financial news using the relative difference between the U.S. Treasury Bill and the Federal Funds effective rate. Our results are consistent with the 'sign effect', in which 'bad news' has a larger impact than 'good news'. Furthermore, we observe significant volatility aftershocks, confirming a 'market under-reaction' that lasts at least one trading day.

  1. An integrated glucose-insulin model to describe oral glucose tolerance test data in healthy volunteers.

    PubMed

    Silber, Hanna E; Frey, Nicolas; Karlsson, Mats O

    2010-03-01

    The extension of the previously developed integrated models for glucose and insulin (IGI) to include the oral glucose tolerance test (OGTT) in healthy volunteers could be valuable to better understand the differences between healthy individuals and those with type 2 diabetes mellitus (T2DM). Data from an OGTT in 23 healthy volunteers were used. Analysis was based on the previously developed intravenous model with extensions for glucose absorption and incretin effect on insulin secretion. The need for additional structural components was evaluated. The model was evaluated by simulation and a bootstrap. Multiple glucose and insulin concentration peaks were observed in most individuals as well as hypoglycemic episodes in the second half of the experiment. The OGTT data were successfully described by the extended basic model. An additional control mechanism of insulin on glucose production improved the description of the data. The model showed good predictive properties, and parameters were estimated with good precision. In conclusion, a previously presented integrated model has been extended to describe glucose and insulin concentrations in healthy volunteers following an OGTT. The characterization of the differences between the healthy and diabetic stages in the IGI model could potentially be used to extrapolate drug effect from healthy volunteers to T2DM.

  2. Application of hydrographic and surface current data to describe water properties in the Porsangerfjorden, Norway

    NASA Astrophysics Data System (ADS)

    Cieszyńska, Agata; Białogrodzka, Jagoda; Yngve Børsheim, Knut; Stramska, Małgorzata; Jankowski, Andrzej

    2016-04-01

    This presentation is part of the NORDFLUX project and describes some of the results from experimental work carried out in 2014 in the Porsangerfjorden, located in the European Arctic. The fjord borders the Barents Sea. This is a region of high climatic sensitivity, and our interest in the basin stemmed from this fact. One of our long-term goals is to develop an improved understanding of the ongoing changes and interactions between this fjord and large-scale atmospheric and oceanic conditions. In the present work we focus on data sets collected with High Frequency (HF) radars monitoring surface currents in the outer part of the Porsangerfjorden. In our analysis we also use data on water salinity and temperature collected as part of the NORDFLUX experiment, and data from the sea level and meteorological station located in Honningsvaag. Analysis of these data sets enabled us to describe water salinity, temperature, and density distributions and their variability. We have also related these results to tides, meteorological conditions, and the speed and direction of sea surface currents. During the poster session, the authors will present schematics of water mass movement in the area of interest. This work was funded by the Norway Grants through the Polish-Norwegian Research Programme, National Centre for Research and Development (contract No. 201985). Project title: 'Application of in situ observations, high frequency radars, and ocean color, to study suspended matter, particulate carbon, and dissolved organic carbon fluxes in coastal waters of the Barents Sea'.

  3. 77 FR 43127 - Further Amendment to Memorandum Describing Authority and Assigned Responsibilities of the General...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-23

    ... BOARD Further Amendment to Memorandum Describing Authority and Assigned Responsibilities of the General... Relations Board is amending the memorandum describing the authority and assigned responsibilities of the... amendment to Board memorandum describing the authority and assigned responsibilities of the General...

  4. DNA barcode data accurately assign higher spider taxa.

    PubMed

    Coddington, Jonathan A; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina; Kuntner, Matjaž

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios "barcodes" (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families-taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75-100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of the
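    The assignment rule reported in this record reduces to two percent-identity thresholds on the best BLAST hit. A minimal sketch of that decision logic, using the paper's thresholds (>95 for genus, ≥91 for family); the function and record names are hypothetical, not from the study's pipeline:

```python
# Heuristic PIdent thresholds from the study: genus-level assignment is
# reliable above 95% identity, family-level at or above 91%.
GENUS_PIDENT, FAMILY_PIDENT = 95.0, 91.0

def assign_rank(best_hit_pident, hit_genus, hit_family):
    """Return (rank, taxon) for the most specific assignment supported
    by the percent identity of the query's best BLAST hit."""
    if best_hit_pident > GENUS_PIDENT:
        return ("genus", hit_genus)
    if best_hit_pident >= FAMILY_PIDENT:
        return ("family", hit_family)
    return ("unassigned", None)
```

    For example, a query whose best hit is an Araneus sequence at 97.2% identity would be placed in that genus, while the same hit at 93% identity would support only a family-level call.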

  5. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of

  6. Fluorescent labeling reliably identifies Chlamydia trachomatis in living human endometrial cells and rapidly and accurately quantifies chlamydial inclusion forming units.

    PubMed

    Vicetti Miguel, Rodolfo D; Henschel, Kevin J; Dueñas Lopez, Fiorela C; Quispe Calla, Nirk E; Cherpes, Thomas L

    2015-12-01

    Chlamydia replication requires host lipid acquisition, allowing flow cytometry to identify Chlamydia-infected cells that accumulated fluorescent Golgi-specific lipid. Herein, we describe modifications to currently available methods that allow precise differentiation between uninfected and Chlamydia trachomatis-infected human endometrial cells and rapidly and accurately quantify chlamydial inclusion forming units.

  7. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    ERIC Educational Resources Information Center

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…

  8. Fluorescent labeling reliably identifies Chlamydia trachomatis in living human endometrial cells and rapidly and accurately quantifies chlamydial inclusion forming units

    PubMed Central

    Vicetti Miguel, Rodolfo D.; Henschel, Kevin J.; Dueñas Lopez, Fiorela C.; Quispe Calla, Nirk E.; Cherpes, Thomas L.

    2016-01-01

    Chlamydia replication requires host lipid acquisition, allowing flow cytometry to identify C. trachomatis-infected cells that accumulated fluorescent Golgi-specific lipid. Herein, we describe modifications to currently available methods that allow precise differentiation between uninfected and C. trachomatis-infected human endometrial cells and rapidly and accurately quantify chlamydial inclusion forming units. PMID:26453947

  9. Describing the apprenticeship of chemists through the language of faculty scientists

    NASA Astrophysics Data System (ADS)

    Skjold, Brandy Ann

    Attempts to bring authentic science into the K-16 classroom have led to the use of sociocultural theories of learning, particularly apprenticeship, to frame science education research. Science educators have brought apprenticeship to science classrooms and have brought students to research laboratories in order to gauge its benefits. The assumption is that these learning opportunities are representative of the actual apprenticeship of scientists. However, there have been no attempts in the literature to describe the apprenticeship of scientists using apprenticeship theory. Understanding what science apprenticeship looks like is a critical component of translating this experience into the classroom. This study sought to describe and analyze the apprenticeship of chemists through the talk of faculty scientists. It used Lave and Wenger’s (1991) theory of Legitimate Peripheral Participation as its framework, concentrating on describing the roles of the participants, the environment, and the tasks in the apprenticeship, as per Barab, Squire and Dueber (2000). A total of nine chemistry faculty and teaching assistants were observed across 11 settings representing a range of learning experiences, from introductory chemistry lectures to research laboratories. All settings were videotaped, focusing on the instructor. About 89 hours of video were taken, along with observer field notes. All videos were transcribed, and the transcriptions and field notes were analyzed qualitatively as a broad-level discourse analysis. Findings suggest that learners are expected to know basic chemistry content and how to use basic research equipment before entering the research lab. These are taught extensively in classroom settings. However, students are also required to know how to use the literature base to inform their own research, though they were rarely exposed to this in the classrooms. In all settings, conflicts occurred when students under- or over-estimated their role in the learning

  10. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  11. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-07

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics.

  12. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  13. Multimodal spatial calibration for accurately registering EEG sensor positions.

    PubMed

    Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multi-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization applications on the human brain.

  14. Accurate measurement of the helical twisting power of chiral dopants

    NASA Astrophysics Data System (ADS)

    Kosa, Tamas; Bodnar, Volodymyr; Taheri, Bahman; Palffy-Muhoray, Peter

    2002-03-01

    We propose a method for the accurate determination of the helical twisting power (HTP) of chiral dopants. In the usual Cano-wedge method, the wedge angle is determined from the far-field separation of laser beams reflected from the windows of the test cell. Here we propose to use an optical fiber based spectrometer to accurately measure the cell thickness. Knowing the cell thickness at the positions of the disclination lines allows determination of the HTP. We show that this extension of the Cano-wedge method greatly increases the accuracy with which the HTP is determined. We show the usefulness of this method by determining the HTP of ZLI811 in a variety of hosts with negative dielectric anisotropy.

  15. Accurate van der Waals coefficients from density functional theory

    PubMed Central

    Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn

    2012-01-01

    The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
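    For orientation on the C6 coefficients this record discusses, the oldest back-of-the-envelope route is the London approximation, C6 = (3/2) α1 α2 I1 I2 / (I1 + I2), built from static dipole polarizabilities α and ionization energies I. This sketch is NOT the paper's density-functional approach; it is only an illustrative benchmark in Hartree atomic units:

```python
def london_c6(alpha1, i1, alpha2, i2):
    """London dispersion estimate of the C6 coefficient (atomic units):
    C6 = (3/2) * a1*a2 * I1*I2 / (I1 + I2)."""
    return 1.5 * alpha1 * alpha2 * i1 * i2 / (i1 + i2)

# Hydrogen-hydrogen: alpha = 4.5 a.u., I = 0.5 hartree.
c6_hh = london_c6(4.5, 0.5, 4.5, 0.5)
```

    For H-H this gives about 7.6 a.u. against the accurate value of roughly 6.5 a.u., illustrating why quantitative work needs the dynamic-polarizability treatment described above.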

  16. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083

  17. Accurate Development of Thermal Neutron Scattering Cross Section Libraries

    SciTech Connect

    Hawari, Ayman; Dunn, Michael

    2014-06-10

    The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.

  18. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies of accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  19. Uniformly high order accurate essentially non-oscillatory schemes 3

    NASA Technical Reports Server (NTRS)

    Harten, A.; Engquist, B.; Osher, S.; Chakravarthy, S. R.

    1986-01-01

    In this paper, the third in a series, the construction and analysis of essentially non-oscillatory shock-capturing methods for the approximation of hyperbolic conservation laws are presented. Also presented is a hierarchy of high order accurate schemes which generalizes Godunov's scheme and its second order accurate MUSCL extension to arbitrary order of accuracy. The design involves an essentially non-oscillatory piecewise polynomial reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is derived from a new interpolation technique that, when applied to piecewise smooth data, gives high-order accuracy whenever the function is smooth but avoids a Gibbs phenomenon at discontinuities. Unlike standard finite difference methods, this procedure uses an adaptive stencil of grid points, and consequently the resulting schemes are highly nonlinear.
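    The adaptive-stencil idea in this record can be shown in its simplest (second-order, 1D) form: for each cell, build the linear reconstruction from whichever neighbouring divided difference is smaller in magnitude, so the stencil avoids crossing a discontinuity. This is only an illustrative sketch of the stencil-selection principle, not the paper's full high-order scheme; the grid data are toy values.

```python
def eno2_slopes(v):
    """Return one ENO-selected slope per interior cell of cell averages v:
    pick the left or right divided difference with the smaller magnitude."""
    slopes = []
    for i in range(1, len(v) - 1):
        dl = v[i] - v[i - 1]      # left divided difference
        dr = v[i + 1] - v[i]      # right divided difference
        slopes.append(dl if abs(dl) < abs(dr) else dr)
    return slopes

# Step data: constant on each side of a jump between cells 2 and 3.
v = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
s = eno2_slopes(v)
```

    Near the jump both adjacent cells select the flat one-sided difference, so the reconstruction stays monotone there, which is exactly the mechanism that suppresses the Gibbs oscillations of fixed-stencil interpolation.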

  20. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.