Sample records for "accurately describe observables"

  1. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu

    2015-12-28

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1-, 2-, and 3-site CG models for heptane, as well as 1- and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.
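
    For orientation, the bottom-up construction referred to here can be written schematically (generic notation, not taken from the record): the multiscale coarse-graining step chooses the CG force field \(\mathbf{F}\) that minimizes the force-matching residual

    \[
    \chi^2[\mathbf{F}] \;=\; \frac{1}{3N}\Big\langle \sum_{I=1}^{N} \big|\,\mathbf{f}_I^{\mathrm{AA}}(\mathbf{r}) - \mathbf{F}_I\big(\mathbf{M}(\mathbf{r})\big)\big|^2 \Big\rangle_{\mathrm{AA}},
    \]

    where \(\mathbf{M}\) maps atomistic configurations \(\mathbf{r}\) onto the N CG sites and \(\mathbf{f}_I^{\mathrm{AA}}\) is the net atomistic force mapped onto site I. The pressure-matching step then determines the volume-dependent term, which contributes \(P_V = -\,\mathrm{d}U_V/\mathrm{d}V\) to the CG pressure, so that the CG virial pressure reproduces the average all-atom pressure as a function of volume.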

  2. The variance needed to accurately describe jump height from vertical ground reaction force data.

    PubMed

    Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran

    2014-12-01

    In functional principal component analysis (fPCA) a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to describe a jump height accurately utilizing vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
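
    As a minimal illustration of the threshold-selection idea (ordinary PCA standing in for fPCA; the data, layer sizes, and the retained_components helper are placeholders, not the authors' pipeline):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    def retained_components(curves, threshold=0.99):
        """Smallest number of components whose cumulative explained variance
        reaches `threshold`; `curves` is (n_jumps, n_time_points)."""
        pca = PCA().fit(curves)
        cumvar = np.cumsum(pca.explained_variance_ratio_)
        return int(np.searchsorted(cumvar, threshold) + 1), pca

    # Synthetic stand-ins for vGRF curves and measured jump heights.
    rng = np.random.default_rng(0)
    curves = rng.normal(size=(200, 500))
    jump_height = rng.normal(size=200)

    k, pca = retained_components(curves, threshold=0.999)
    scores = pca.transform(curves)[:, :k]   # retained principal component scores
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(scores, jump_height)
    ```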

  3. The X3LYP extended density functional accurately describes H-bonding but fails completely for stacking.

    PubMed

    Černý, Jiří; Hobza, Pavel

    2005-04-21

    The performance of the recently introduced X3LYP density functional, which was claimed to significantly improve the accuracy for H-bonded and van der Waals complexes, was tested for extended H-bonded and stacked complexes (nucleic acid base pairs and amino acid pairs). In the case of planar H-bonded complexes (guanine...cytosine, adenine...thymine) the DFT results agree nicely with accurate correlated ab initio results. For the stacked pairs (uracil dimer, cytosine dimer, adenine...thymine and guanine...cytosine) the DFT fails completely; it was not even able to locate any minimum in the stacked region of the potential energy surface. The geometry optimization of all these stacked clusters leads systematically to the planar H-bonded pairs. The amino acid pairs were investigated in the crystal geometry. DFT again strongly underestimates the accurate correlated ab initio stabilization energies and was usually unable to describe the stabilization of a pair. The X3LYP functional thus behaves similarly to other current functionals. Stacking of nucleic acid bases as well as the interaction of amino acids was described satisfactorily by the tight-binding DFT method, which explicitly covers the London dispersion energy.

  4. Many participants in inpatient rehabilitation can quantify their exercise dosage accurately: an observational study.

    PubMed

    Scrivener, Katharine; Sherrington, Catherine; Schurr, Karl; Treacy, Daniel

    2011-01-01

    Are inpatients undergoing rehabilitation who appear able to count exercises able to quantify accurately the amount of exercise they undertake? Observational study. Inpatients in an aged care rehabilitation unit and a neurological rehabilitation unit, who appeared able to count their exercises during a 1-2 min observation by their treating physiotherapist. Participants were observed for 30 min by an external observer while they exercised in the physiotherapy gymnasium. Both the participants and the observer counted exercise repetitions with a hand-held tally counter and the two tallies were compared. Of the 60 people admitted for aged care rehabilitation during the study period, 49 (82%) were judged by their treating therapist to be able to count their own exercise repetitions accurately. Of the 30 people admitted for neurological rehabilitation during the study period, 20 (67%) were judged by their treating therapist to be able to count their repetitions accurately. Of the 69 people judged to be accurate, 40 underwent observation while exercising. There was excellent agreement between these participants' counts of their exercise repetitions and the observers' counts, ICC (3,1) of 0.99 (95% CI 0.98 to 0.99). Eleven participants (28%) were in complete agreement with the observer. A further 19 participants (48%) varied from the observer by less than 10%. Therapists were able to identify a group of rehabilitation participants who were accurate in counting their exercise repetitions. Counting of exercise repetitions by therapist-selected patients is a valid means of quantifying exercise dosage during inpatient rehabilitation. Copyright © 2011 Australian Physiotherapy Association. All rights reserved.

  5. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
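
    For reference, a pairwise maximum entropy (Ising-type) model of the kind described assigns binarized region activities \(\sigma_i \in \{0,1\}\) the probability (generic notation, not taken from the record)

    \[
    P(\boldsymbol{\sigma}) = \frac{1}{Z}\exp\!\Big(\sum_i h_i\,\sigma_i + \sum_{i<j} J_{ij}\,\sigma_i\sigma_j\Big),
    \]

    with \(h_i\) capturing region-specific activation rates and \(J_{ij}\) pairwise interactions, fitted so that the model's means and pairwise correlations match those of the fMRI data; \(Z\) normalizes the distribution.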

  6. 'Scalp coordinate system': a new tool to accurately describe cutaneous lesions on the scalp: a pilot study.

    PubMed

    Alexander, William; Miller, George; Alexander, Preeya; Henderson, Michael A; Webb, Angela

    2018-06-12

    Skin cancers are extremely common and the incidence increases with age. Care for patients with multiple or complicated skin cancers often requires multidisciplinary input involving a general practitioner, dermatologist, plastic surgeon and/or radiation oncologist. Timely, efficient care of these patients relies on precise and effective communication between all parties. Until now, descriptions regarding the location of lesions on the scalp have been inaccurate, which can lead to errors such as the incorrect lesion being excised or biopsied. A novel technique for accurately and efficiently describing the location of lesions on the scalp, using a coordinate system, is described (the 'scalp coordinate system' (SCS)). This method was tested in a pilot study by clinicians typically involved in the care of patients with cutaneous malignancies. A mannequin scalp was used in the study. The SCS significantly improved the ability to both describe and locate lesions on the scalp accurately. This improved accuracy comes at a minor time cost. The direct and indirect costs arising from poor communication between medical subspecialties (particularly relevant in surgical procedures) are immense. An effective tool used by all involved clinicians is long overdue, particularly for patients whose scalps show extensive actinic damage, scarring or innocuous biopsy sites. The SCS provides the opportunity to improve outcomes for both the patient and the healthcare system. © 2018 Royal Australasian College of Surgeons.

  7. How Accurately Can We Map SEP Observations Using L*?

    NASA Astrophysics Data System (ADS)

    Young, S. L.; Kress, B. T.

    2016-12-01

    In a dipole field, the cutoff rigidities at a given location are inversely proportional to L². Smart and Shea (1967) showed that this is approximately true at low altitudes using the McIlwain L parameter (Lm) in realistic magnetospheric models and provided heuristic evidence that it also holds at high altitudes. Later models developed by Smart and Shea and others (Ogliore et al., 2001; Neal et al., 2013; Selesnick et al., 2015) also use this relationship at low altitudes. Only the Smart and Shea model (Smart and Shea, 2006) uses this relationship to extrapolate to high altitudes, but they introduce a correction that yields a 1 MeV proton vertical cutoff at geosynchronous orbit. Recent work mapped POES observations to the Van Allen Probes locations as a function of L* (Young et al., 2015). The comparison between mapped and observed fluxes was reasonably good, but this mapping was along L* and only attempted to account for differences in shielding between high and low latitude. No attempt was made to map across L*, so the inverse-square relationship was not tested. These previous results suggest that L* may be useful for mapping flux observations between satellites at high altitudes. In this study we calculate cutoffs and L* shells in a Tsyganenko 2005 + IGRF magnetic field model to examine how accurately L*-based mapping can be used in different regions of the magnetosphere.
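
    The dipole scaling invoked here is, schematically,

    \[
    R_c \;\propto\; \frac{1}{L^2},
    \]

    with a vertical-incidence proportionality constant of order 15 GV in a pure dipole field (an approximate, commonly quoted value, stated here only for orientation).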

  8. Generalized Stoner-Wohlfarth model accurately describing the switching processes in pseudo-single ferromagnetic particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cimpoesu, Dorin, E-mail: cdorin@uaic.ro; Stoleriu, Laurentiu; Stancu, Alexandru

    2013-12-14

    We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of SW model, the concept of critical curve and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
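
    For comparison, the classical (single-domain) Stoner-Wohlfarth energy per unit volume and its astroid switching field, which the generalized model modifies through the anisotropy term, are

    \[
    E(\theta) = K_u\sin^2\theta \;-\; \mu_0 M_s H \cos(\theta-\psi), \qquad
    H_{\mathrm{sw}}(\psi) = \frac{H_K}{\big(\cos^{2/3}\psi + \sin^{2/3}\psi\big)^{3/2}}, \quad H_K = \frac{2K_u}{\mu_0 M_s},
    \]

    with \(\theta\) the magnetization angle and \(\psi\) the applied-field angle, both measured from the easy axis (standard textbook form, not taken from the record).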

  9. Normalisation theory: Does it accurately describe temporal changes in adolescent drunkenness and smoking?

    PubMed

    Sznitman, Sharon R; Zlotnick, Cheryl; Harel-Fisch, Yossi

    2016-07-01

    The multiple risk model postulates that accumulating risk factors increase adolescent drunkenness and smoking. The normalisation theory adds to this by arguing that the relation between accumulative risk and drunkenness and smoking is dependent on the distribution of these behaviours in the larger population. More concretely, normalisation theory predicts that: (i) when population level use increases, low risk adolescents will be more likely to use alcohol and cigarettes; and (ii) adolescents facing multiple risk factors will be equally likely to use alcohol and cigarettes, regardless of trends in population level use. The current study empirically tests these assumptions on five waves of nationally representative samples of Israeli Jewish youth. Five cross-sectional waves of data from the Israeli Health Behaviour in School-aged Children survey for Jewish 10th graders were used. Logistic regression models measured the impact of changes in population level use across waves on drunkenness and smoking, and their association with differing levels of risk factors. Between zero and two risk factors, the risk of drunkenness and smoking increases for each additional risk factor. When reaching two risk factors, added risk does not significantly increase the likelihood of smoking and drunkenness. Changes in population level drunkenness and smoking did not systematically relate to changes in the individual level relationship between risk factors and smoking and drunkenness. The pattern of results in this study provides strong evidence for the multiple risk factor model and inconsistent evidence for the normalisation theory. [Sznitman SR, Zlotnick C, Harel-Fisch Y. Normalisation theory: Does it accurately describe temporal changes in adolescent drunkenness and smoking? Drug Alcohol Rev 2016;35:424-432]. © 2015 Australasian Professional Society on Alcohol and other Drugs.

  10. Can accurate kinetic laws be created to describe chemical weathering?

    NASA Astrophysics Data System (ADS)

    Schott, Jacques; Oelkers, Eric H.; Bénézeth, Pascale; Goddéris, Yves; François, Louis

    2012-11-01

    Knowledge of the mechanisms and rates of mineral dissolution and growth, especially close to equilibrium, is essential for describing the temporal and spatial evolution of natural processes like weathering and its impact on the CO2 budget and climate. The Surface Complexation approach (SC) combined with Transition State Theory (TST) provides an efficient framework for describing mineral dissolution over wide ranges of solution composition, chemical affinity, and temperature. There has been a large debate for several years, however, about the comparative merits of SC/TST versus classical growth theories for describing mineral dissolution and growth at near-to-equilibrium conditions. This study considers recent results obtained in our laboratory on the near-equilibrium dissolution and growth of oxides, hydroxides, silicates, and carbonates via a combination of complementary microscopic and macroscopic techniques including hydrothermal atomic force microscopy, hydrogen-electrode concentration cells, and mixed-flow and batch reactors. Results show that the dissolution and precipitation of hydroxides, kaolinite, and hydromagnesite powders of relatively high BET surface area closely follow SC/TST rate laws with a linear dependence of both dissolution and growth rates on fluid saturation state (Ω) even at very close to equilibrium conditions (|ΔG| < 500 J/mol). This occurs because sufficient reactive sites (e.g. at kinks, steps, and edges) are available at the exposed faces for dissolution and/or growth, allowing reactions to proceed via the direct and reversible detachment/attachment of reactants at the surface. In contrast, for magnesite and quartz, which have low surface areas, fewer active sites are available for growth and dissolution. Such minerals exhibit rate dependencies on Ω at near-equilibrium conditions ranging from linear to highly non-linear functions of Ω, depending on the treatment of the crystals before the reaction. It follows that the form of the f
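
    The SC/TST-type rate law referred to, in a generic form (not the authors' exact parameterization), is

    \[
    r \;=\; k_{+}\,s\left(1 - e^{\Delta G_r/RT}\right) \;=\; k_{+}\,s\,(1-\Omega),
    \]

    where \(s\) is the reactive surface area, \(\Omega\) the fluid saturation state, and \(\Delta G_r = RT\ln\Omega\); close to equilibrium this yields the linear dependence of dissolution and growth rates on \(\Omega\) noted above.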

  11. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    NASA Astrophysics Data System (ADS)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with a considerably smaller number of observations than the existing tsunami observation networks.
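
    A minimal sketch of the EOF-based pre-selection step (synthetic data and helper names are assumptions, not the authors' code; the mesh adaptive direct search refinement is omitted):

    ```python
    import numpy as np

    def eof_candidate_locations(ensemble, n_modes=5):
        """ensemble: (n_scenarios, n_grid_points) simulated tsunami features.
        Returns grid indices at the extrema of the leading EOF spatial modes."""
        anomalies = ensemble - ensemble.mean(axis=0)
        # EOF spatial modes are the right singular vectors of the anomaly matrix.
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        locations = set()
        for mode in vt[:n_modes]:
            locations.add(int(np.argmax(mode)))   # positive extremum
            locations.add(int(np.argmin(mode)))   # negative extremum
        return sorted(locations)

    # Example with synthetic scenarios standing in for simulated sources.
    rng = np.random.default_rng(1)
    ensemble = rng.normal(size=(50, 1000))
    print(eof_candidate_locations(ensemble))
    ```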

  12. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.

    2013-08-01

    We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel⁻¹ resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10⁶ individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees generates ridge characteristics, characteristics that do not correspond to the underlying mode characteristics. We used a sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe in detail this modeling and its validation. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix on the accuracy of the procedure has been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. We present and
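
    A toy multi-taper (DPSS) spectrum estimate illustrating how averaging over many tapers reduces realization noise at the cost of spectral resolution (synthetic signal; not the MDI pipeline):

    ```python
    import numpy as np
    from scipy.signal.windows import dpss

    def multitaper_psd(x, n_tapers=7, nw=4.0):
        """Average the periodograms of `x` windowed by the first `n_tapers` DPSS tapers."""
        n = len(x)
        tapers = dpss(n, NW=nw, Kmax=n_tapers)            # shape (n_tapers, n)
        spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
        return spectra.mean(axis=0)                        # taper-averaged spectrum

    x = np.random.default_rng(2).normal(size=4096)
    psd = multitaper_psd(x, n_tapers=15, nw=8.0)
    ```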

  13. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
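
    A minimal stand-in for the described network (a scikit-learn MLP instead of the original Quickprop/Fortran code; the inputs, value ranges, and synthetic N2O proxy below are placeholders):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    X = np.column_stack([
        rng.uniform(-90, 90, 5000),      # latitude (deg)
        rng.uniform(1, 300, 5000),       # pressure (hPa)
        rng.uniform(0, 365, 5000),       # day of year
        rng.uniform(0.2, 1.8, 5000),     # CH4 v.m.r. (ppmv)
    ])
    y = 0.18 * X[:, 3] + 0.02 * rng.normal(size=5000)   # synthetic N2O proxy, not real data

    # One hidden layer with eight nodes, mirroring the architecture described above.
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000).fit(X, y)
    ```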

  14. Constructing simple yet accurate potentials for describing the solvation of HCl/water clusters in bulk helium and nanodroplets.

    PubMed

    Boese, A Daniel; Forbert, Harald; Masia, Marco; Tekin, Adem; Marx, Dominik; Jansen, Georg

    2011-08-28

    The infrared spectroscopy of molecules, complexes, and molecular aggregates dissolved in superfluid helium clusters, commonly called HElium NanoDroplet Isolation (HENDI) spectroscopy, is an established, powerful experimental technique for extracting high resolution ro-vibrational spectra at ultra-low temperatures. Realistic quantum simulations of such systems, in particular in cases where the solute is undergoing a chemical reaction, require accurate solute-helium potentials which are also simple enough to be efficiently evaluated over the vast number of steps required in typical Monte Carlo or molecular dynamics sampling. This precludes using global potential energy surfaces as often parameterized for small complexes in the realm of high-resolution spectroscopic investigations that, in view of the computational effort imposed, are focused on the intermolecular interaction of rigid molecules with helium. Simple Lennard-Jones-like pair potentials, on the other hand, fall short in providing the required flexibility and accuracy in order to account for chemical reactions of the solute molecule. Here, a general scheme of constructing sufficiently accurate site-site potentials for use in typical quantum simulations is presented. This scheme employs atom-based grids, accounts for local and global minima, and is applied to the special case of a HCl(H2O)4 cluster solvated by helium. As a first step, accurate interaction energies of a helium atom with a set of representative configurations sampled from a trajectory following the dissociation of the HCl(H2O)4 cluster were computed using an efficient combination of density functional theory and symmetry-adapted perturbation theory, i.e. the DFT-SAPT approach. For each of the sampled cluster configurations, a helium atom was placed at several hundred positions distributed in space, leading to an overall number of about 400,000 such quantum chemical calculations. The resulting total interaction energies, decomposed into

  15. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    NASA Astrophysics Data System (ADS)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
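
    For reference, the channelized Hotelling observer statistic has the standard form (generic notation, not taken from the record)

    \[
    t(\mathbf{g}) = \Delta\bar{\mathbf{v}}^{\top}\mathbf{S}_v^{-1}\mathbf{v}, \qquad
    \mathrm{SNR}^2_{\mathrm{CHO}} = \Delta\bar{\mathbf{v}}^{\top}\mathbf{S}_v^{-1}\Delta\bar{\mathbf{v}},
    \]

    where \(\mathbf{v} = \mathbf{T}^{\top}\mathbf{g}\) are the channel outputs for image data \(\mathbf{g}\) and channel matrix \(\mathbf{T}\), \(\Delta\bar{\mathbf{v}}\) is the mean channel-output difference between signal-present and signal-absent classes, and \(\mathbf{S}_v\) is the channel-output covariance; the covariance approximation described above supplies \(\Delta\bar{\mathbf{v}}\) and \(\mathbf{S}_v\) analytically for MAP reconstructions.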

  16. Evaluation of Geographic Indices Describing Health Care Utilization.

    PubMed

    Kim, Agnus M; Park, Jong Heon; Kang, Sungchan; Kim, Yoon

    2017-01-01

    The accurate measurement of geographic patterns of health care utilization is a prerequisite for the study of geographic variations in health care utilization. While several measures have been developed to measure how accurately geographic units reflect the health care utilization patterns of residents, they have been only applied to hospitalization and need further evaluation. This study aimed to evaluate geographic indices describing health care utilization. We measured the utilization rate and four health care utilization indices (localization index, outflow index, inflow index, and net patient flow) for eight major procedures (coronary artery bypass graft surgery, percutaneous transluminal coronary angioplasty, surgery after hip fracture, knee replacement surgery, caesarean sections, hysterectomy, computed tomography scans, and magnetic resonance imaging scans) according to three levels of geographic units in Korea. Data were obtained from the National Health Insurance database in Korea. We evaluated the associations among the health care utilization indices and the utilization rates. In higher-level geographic units, the localization index tended to be high, while the inflow index and outflow index were lower. The indices showed different patterns depending on the procedure. A strong negative correlation between the localization index and the outflow index was observed for all procedures. Net patient flow showed a moderate positive correlation with the localization index and the inflow index. Health care utilization indices can be used as a proxy to describe the utilization pattern of a procedure in a geographic unit.

  17. Evaluation of Geographic Indices Describing Health Care Utilization

    PubMed Central

    Park, Jong Heon

    2017-01-01

    Objectives The accurate measurement of geographic patterns of health care utilization is a prerequisite for the study of geographic variations in health care utilization. While several measures have been developed to measure how accurately geographic units reflect the health care utilization patterns of residents, they have been only applied to hospitalization and need further evaluation. This study aimed to evaluate geographic indices describing health care utilization. Methods We measured the utilization rate and four health care utilization indices (localization index, outflow index, inflow index, and net patient flow) for eight major procedures (coronary artery bypass graft surgery, percutaneous transluminal coronary angioplasty, surgery after hip fracture, knee replacement surgery, caesarean sections, hysterectomy, computed tomography scans, and magnetic resonance imaging scans) according to three levels of geographic units in Korea. Data were obtained from the National Health Insurance database in Korea. We evaluated the associations among the health care utilization indices and the utilization rates. Results In higher-level geographic units, the localization index tended to be high, while the inflow index and outflow index were lower. The indices showed different patterns depending on the procedure. A strong negative correlation between the localization index and the outflow index was observed for all procedures. Net patient flow showed a moderate positive correlation with the localization index and the inflow index. Conclusions Health care utilization indices can be used as a proxy to describe the utilization pattern of a procedure in a geographic unit. PMID:28173689

  18. Describing Comprehension: Teachers' Observations of Students' Reading Comprehension

    ERIC Educational Resources Information Center

    Vander Does, Susan Lubow

    2012-01-01

    Teachers' observations of student performance in reading are abundant and insightful but often remain internal and unarticulated. As a result, such observations are an underutilized and undervalued source of data. Given the gaps in knowledge about students' reading comprehension that exist in formal assessments, the frequent calls for teachers'…

  19. The importance and attainment of accurate absolute radiometric calibration

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1984-01-01

    The importance of accurate absolute radiometric calibration is discussed by reference to the needs of those wishing to validate or use models describing the interaction of electromagnetic radiation with the atmosphere and earth surface features. The in-flight calibration methods used for the Landsat Thematic Mapper (TM) and the Systeme Probatoire d'Observation de la Terre, Haute Resolution visible (SPOT/HRV) systems are described and their limitations discussed. The questionable stability of in-flight absolute calibration methods suggests the use of a radiative transfer program to predict the apparent radiance, at the entrance pupil of the sensor, of a ground site of measured reflectance imaged through a well characterized atmosphere. The uncertainties of such a method are discussed.

  20. CRITICAL ELEMENTS IN DESCRIBING AND UNDERSTANDING OUR NATION'S AQUATIC RESOURCES

    EPA Science Inventory

    Despite spending $115 billion per year on environmental actions in the United States, we have only a limited ability to describe the effectiveness of these expenditures. Moreover, after decades of such investments, we cannot accurately describe status and trends in the nation's a...

  1. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

    Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high frequency stimulations to a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
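
    The Normalized Gradient Fields distance mentioned above is commonly written (up to normalization constants; \(\varepsilon\) is an edge parameter) as

    \[
    D_{\mathrm{NGF}}(R,T) = \int_{\Omega} 1 - \big\langle n_\varepsilon(R)(x),\, n_\varepsilon(T)(x)\big\rangle^2 \, \mathrm{d}x, \qquad
    n_\varepsilon(I) = \frac{\nabla I}{\sqrt{|\nabla I|^2 + \varepsilon^2}},
    \]

    which rewards locally parallel (or antiparallel) image gradients and is therefore well suited to multimodal CT-MR alignment.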

  2. Toward an Accurate Theoretical Framework for Describing Ensembles for Proteins under Strongly Denaturing Conditions

    PubMed Central

    Tran, Hoang T.; Pappu, Rohit V.

    2006-01-01

    Our focus is on an appropriate theoretical framework for describing highly denatured proteins. In high concentrations of denaturants, proteins behave like polymers in a good solvent and ensembles for denatured proteins can be modeled by ignoring all interactions except excluded volume (EV) effects. To assay conformational preferences of highly denatured proteins, we quantify a variety of properties for EV-limit ensembles of 23 two-state proteins. We find that modeled denatured proteins can be best described as follows. Average shapes are consistent with prolate ellipsoids. Ensembles are characterized by large correlated fluctuations. Sequence-specific conformational preferences are restricted to local length scales that span five to nine residues. Beyond local length scales, chain properties follow well-defined power laws that are expected for generic polymers in the EV limit. The average available volume is filled inefficiently, and cavities of all sizes are found within the interiors of denatured proteins. All properties characterized from simulated ensembles match predictions from rigorous field theories. We use our results to resolve between conflicting proposals for structure in ensembles for highly denatured states. PMID:16766618

  3. The sign language skills classroom observation: a process for describing sign language proficiency in classroom settings.

    PubMed

    Reeves, J B; Newell, W; Holcomb, B R; Stinson, M

    2000-10-01

    In collaboration with teachers and students at the National Technical Institute for the Deaf (NTID), the Sign Language Skills Classroom Observation (SLSCO) was designed to provide feedback to teachers on their sign language communication skills in the classroom. In the present article, the impetus and rationale for development of the SLSCO is discussed. Previous studies related to classroom signing and observation methodology are reviewed. The procedure for developing the SLSCO is then described. This procedure included (a) interviews with faculty and students at NTID, (b) identification of linguistic features of sign language important for conveying content to deaf students, (c) development of forms for recording observations of classroom signing, (d) analysis of use of the forms, (e) development of a protocol for conducting the SLSCO, and (f) piloting of the SLSCO in classrooms. The results of use of the SLSCO with NTID faculty during a trial year are summarized.

  4. Analysis of underwater radiance observations: Apparent optical properties and analytic functions describing the angular radiance distribution

    NASA Astrophysics Data System (ADS)

    Aas, Eyvind; Højerslev, Niels K.

    1999-04-01

    A primary data set consisting of 70 series of angular radiance distributions observed in clear blue western Mediterranean water and a secondary set of 12 series from the more green and turbid Lake Pend Oreille, Idaho, have been analyzed. The results demonstrate that the main variation of the shape of the downward radiance distribution occurs within the Snell cone. Outside the cone the variation of the shape decreases with increasing zenith angle. The most important shape changes of the upward radiance appear within the zenith angle range 90°-130°. The variation in shape reaches its minimum around nadir, where an almost constant upward radiance distribution implies that a flat sea surface acts like a Lambert emitter within ±8% in the zenith angle interval 140°-180° in air. The ratio Q of upward irradiance and nadir radiance, as well as the average cosines μd and μu for downward and upward radiance, respectively, have rather small standard deviations, ≤10%, within the local water type. In contrast, the irradiance reflectance R has been observed to change up to 400% with depth in the western Mediterranean, while the maximum observed change of Q with depth is only 40%. The dependence of Q on the solar elevation for blue light at 5 m depth in the Mediterranean coincides with observations from the central Atlantic as well as with model computations. The corresponding dependence of μd shows that diffuse light may have a significant influence on its value. Two simple functions describing the observed angular radiance distributions are proposed, and both functions can be determined by two field observations as input parameters. The ɛ function approximates the azimuthal means of downward radiance with an average error ≤7% and of upward radiance with an error of ˜1%. The α function describes the zenith angle dependence of the azimuthal means of upward radiance with an average error ≤7% in clear ocean water, increasing to ≤20% in turbid lake water. The a
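
    For reference, the apparent optical properties quantified in this record are conventionally defined as (standard ocean-optics notation, not taken from the record)

    \[
    Q = \frac{E_u}{L_u(\mathrm{nadir})}, \qquad
    \bar{\mu}_d = \frac{E_d}{E_{0d}}, \qquad
    \bar{\mu}_u = \frac{E_u}{E_{0u}}, \qquad
    R = \frac{E_u}{E_d},
    \]

    where \(E_d, E_u\) are the downward and upward plane irradiances, \(E_{0d}, E_{0u}\) the corresponding scalar irradiances, and \(L_u(\mathrm{nadir})\) the nadir radiance.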

  5. Seeing and Being Seen: Predictors of Accurate Perceptions about Classmates’ Relationships

    PubMed Central

    Neal, Jennifer Watling; Neal, Zachary P.; Cappella, Elise

    2015-01-01

    This study examines predictors of observer accuracy (i.e. seeing) and target accuracy (i.e. being seen) in perceptions of classmates’ relationships in a predominantly African American sample of 420 second through fourth graders (ages 7 – 11). Girls, children in higher grades, and children in smaller classrooms were more accurate observers. Targets (i.e. pairs of children) were more accurately observed when they occurred in smaller classrooms of higher grades and involved same-sex, high-popularity, and similar-popularity children. Moreover, relationships between pairs of girls were more accurately observed than relationships between pairs of boys. As a set, these findings suggest the importance of both observer and target characteristics for children’s accurate perceptions of classroom relationships. Moreover, the substantial variation in observer accuracy and target accuracy has methodological implications for both peer-reported assessments of classroom relationships and the use of stochastic actor-based models to understand peer selection and socialization processes. PMID:26347582

  6. Systematic approach for describing the geometry of spectrophotometry

    NASA Astrophysics Data System (ADS)

    Early, Edward A.

    2003-07-01

    In the field of spectrophotometry, the value of the quantities depends upon the geometry under which they are measured. Therefore, it is imperative to completely describe the measurement geometry. Many documentary standards specify the geometry for a particular application. However, to accurately specify the geometry, a general, basic understanding of the relevant parameters for describing the geometry is required. A systematic approach for describing the measurement geometry is presented, which will hopefully have a positive impact on documentary standards. The key to describing the geometry is to consider the illuminator and receiver of the instrument as optical systems with pupils and windows. It is these optical systems, together with the reference plane, that determine the sampling aperture of the instrument. The geometry is then completely described by the relations between the sampling aperture and the optical systems of the illuminator and receiver. These concepts are illustrated by considering three configurations of pupils and windows relative to the focal point of an optical system.

  7. When continuous observations just won't do: developing accurate and efficient sampling strategies for the laying hen.

    PubMed

    Daigle, Courtney L; Siegford, Janice M

    2014-03-01

    Continuous observation is the most accurate way to determine animals' actual time budget and can provide a 'gold standard' representation of resource use, behavior frequency, and duration. Continuous observation is useful for capturing behaviors that are of short duration or occur infrequently. However, collecting continuous data is labor intensive and time consuming, making multiple individual or long-term data collection difficult. Six non-cage laying hens were video recorded for 15 h and behavioral data collected every 2 s were compared with data collected using scan sampling intervals of 5, 10, 15, 30, and 60 min and subsamples of 2 second observations performed for 10 min every 30 min, 15 min every 1 h, 30 min every 1.5 h, and 15 min every 2 h. Three statistical approaches were used to provide a comprehensive analysis to examine the quality of the data obtained via different sampling methods. General linear mixed models identified how the time budget from the sampling techniques differed from continuous observation. Correlation analysis identified how strongly results from the sampling techniques were associated with those from continuous observation. Regression analysis identified how well the results from the sampling techniques were associated with those from continuous observation, changes in magnitude, and whether a sampling technique had bias. Static behaviors were well represented with scan and time sampling techniques, while dynamic behaviors were best represented with time sampling techniques. Methods for identifying an appropriate sampling strategy based upon the type of behavior of interest are outlined and results for non-caged laying hens are presented. Copyright © 2013 Elsevier B.V. All rights reserved.
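
    A minimal sketch of the comparison logic (synthetic 2-s records stand in for the coded hen behavior; interval lengths follow the text):

    ```python
    import numpy as np

    def time_budget(record):
        return record.mean()                      # proportion of time the behavior is observed

    def scan_sample(record, every_n):
        return record[::every_n].mean()           # proportion at the sampled instants only

    rng = np.random.default_rng(4)
    # 15 h of 2-s intervals; behavior present ~20% of the time (placeholder data).
    record = (rng.random(15 * 3600 // 2) < 0.2).astype(float)

    continuous = time_budget(record)
    scan_5min = scan_sample(record, every_n=5 * 60 // 2)   # one scan every 5 min
    print(continuous, scan_5min)
    ```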

  8. Accurate mass measurement by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. II. Measurement of negative radical ions using porphyrin and fullerene standard reference materials.

    PubMed

    Shao, Zhecheng; Wyatt, Mark F; Stein, Bridget K; Brenton, A Gareth

    2010-10-30

    A method for the accurate mass measurement of negative radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. This is an extension to our previously described method for the accurate mass measurement of positive radical ions (Griffiths NW, Wyatt MF, Kean SD, Graham AE, Stein BK, Brenton AG. Rapid Commun. Mass Spectrom. 2010; 24: 1629). The porphyrin standard reference materials (SRMs) developed for positive mode measurements cannot be observed in negative ion mode, so fullerene and fluorinated porphyrin compounds were identified as effective SRMs. The method is of immediate practical use for the accurate mass measurement of functionalised fullerenes, for which negative ion MALDI-TOFMS is the principal mass spectrometry characterisation technique. This was demonstrated by the accurate mass measurement of six functionalised C60 compounds. Copyright © 2010 John Wiley & Sons, Ltd.

  9. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  10. A picture's worth a thousand words: a food-selection observational method.

    PubMed

    Carins, Julia E; Rundle-Thiele, Sharyn R; Parkinson, Joy E

    2016-05-04

    Issue addressed: Methods are needed to accurately measure and describe behaviour so that social marketers and other behaviour change researchers can gain consumer insights before designing behaviour change strategies and so, in time, they can measure the impact of strategies or interventions when implemented. This paper describes a photographic method developed to meet these needs. Methods: Direct observation and photographic methods were developed and used to capture food-selection behaviour and examine those selections according to their healthfulness. Four meals (two lunches and two dinners) were observed at a workplace buffet-style cafeteria over a 1-week period. The healthfulness of individual meals was assessed using a classification scheme developed for the present study and based on the Australian Dietary Guidelines. Results: Approximately 27% of meals (n = 168) were photographed. Agreement was high between raters classifying dishes using the scheme, as well as between researchers when coding photographs. The subset of photographs was representative of patterns observed in the entire dining room. Diners chose main dishes in line with the proportions presented, but in opposition to the proportions presented for side dishes. Conclusions: The present study developed a rigorous observational method to investigate food choice behaviour. The comprehensive food classification scheme produced consistent classifications of foods. The photographic data collection method was found to be robust and accurate. Combining the two observation methods allows researchers and/or practitioners to accurately measure and interpret food selections. Consumer insights gained suggest that, in this setting, increasing the availability of green (healthful) offerings for main dishes would assist in improving healthfulness, whereas other strategies (e.g. promotion) may be needed for side dishes. So what?: Visual observation methods that accurately measure and interpret food

  11. How accurate is unenhanced multidetector-row CT (MDCT) for localization of renal calculi?

    PubMed

    Goetschi, Stefan; Umbehr, Martin; Ullrich, Stephan; Glenck, Michael; Suter, Stefan; Weishaupt, Dominik

    2012-11-01

    To investigate the correlation between unenhanced MDCT and intraoperative findings with regard to the exact anatomical location of renal calculi. Fifty-nine patients who underwent unenhanced MDCT for suspected urinary stone disease, and who underwent subsequent flexible ureterorenoscopy (URS) as treatment of nephrolithiasis, were included in this retrospective study. All MDCT data sets were independently reviewed by three observers with different degrees of experience in reading CT. Each observer was asked to indicate the presence and exact anatomical location of any calcification within the pyelocaliceal system, renal papilla or renal cortex. Results were compared to intraoperative findings, which were defined as the standard of reference. Calculi not described at surgery but present on MDCT data were counted as renal cortex calcifications. Overall, 166 calculi in 59 kidneys were detected on MDCT; 100 (60.2%) were located in the pyelocaliceal system and 66 (39.8%) in the renal parenchyma. Of the 100 pyelocaliceal calculi, 84 (84%) were correctly located on CT data sets by observer 1, 62 (62%) by observer 2, and 71 (71%) by observer 3. Sensitivity/specificity was 90-94% and 50-100% if only pyelocaliceal calculi measuring >4 mm in size were considered. For pyelocaliceal calculi ≤4 mm in size, the diagnostic performance of MDCT was inferior. Compared to flexible URS, unenhanced MDCT is accurate for distinction between pyelocaliceal calculi and renal parenchyma calcifications if renal calculi are >4 mm in size. For smaller renal calculi, unenhanced MDCT is less accurate and distinction between a pyelocaliceal calculus and renal parenchyma calcification is difficult. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  12. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
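
    Schematically, the time-delay generalization of nudging replaces the instantaneous innovation with one built from a delay vector of the sparse observations (generic notation, not the authors' exact formulation):

    \[
    \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = \mathbf{F}(\mathbf{x}) + \mathbf{G}(t)\,\big[\mathbf{Y}^{\mathrm{obs}}(t) - \mathbf{Y}(\mathbf{x},t)\big], \qquad
    \mathbf{Y}(t) = \big(\mathbf{y}(t),\, \mathbf{y}(t+\tau),\, \ldots,\, \mathbf{y}(t+(D_E-1)\tau)\big),
    \]

    so that each measured variable and its time-delayed values jointly constrain the unobserved state components.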

  13. How Clean Are Hotel Rooms? Part I: Visual Observations vs. Microbiological Contamination.

    PubMed

    Almanza, Barbara A; Kirsch, Katie; Kline, Sheryl Fried; Sirsat, Sujata; Stroia, Olivia; Choi, Jin Kyung; Neal, Jay

    2015-01-01

    Current evidence of hotel room cleanliness is based on observation rather than empirically based microbial assessment. The purpose of the study described here was to determine if observation provides an accurate indicator of cleanliness. Results demonstrated that visual assessment did not accurately predict microbial contamination. Although testing standards have not yet been established for hotel rooms and will be evaluated in Part II of the authors' study, potential microbial hazards included the sponge and mop (housekeeping cart), toilet, bathroom floor, bathroom sink, and light switch. Hotel managers should increase cleaning in key areas to reduce guest exposure to harmful bacteria.

  14. Optical Chopper Assembly for the Mars Observer

    NASA Technical Reports Server (NTRS)

    Allen, Terry

    1993-01-01

    This paper describes the Honeywell-developed Optical Chopper Assembly (OCA), a component of the Mars Observer spacecraft's Pressure Modulator Infrared Radiometer (PMIRR) science experiment, which will map the Martian atmosphere from 1993 to 1995. The OCA is unique because of its constant accurate rotational speed, low electrical power consumption, and long-life requirements. These strict and demanding requirements were achieved by use of a number of novel approaches.

  15. Foresight begins with FMEA. Delivering accurate risk assessments.

    PubMed

    Passey, R D

    1999-03-01

    If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.

  16. Observer Use of Standardized Observation Protocols in Consequential Observation Systems

    ERIC Educational Resources Information Center

    Bell, Courtney A.; Yi, Qi; Jones, Nathan D.; Lewis, Jennifer M.; McLeod, Monica; Liu, Shuangshuang

    2014-01-01

    Evidence from a handful of large-scale studies suggests that although observers can be trained to score reliably using observation protocols, there are concerns related to initial training and calibration activities designed to keep observers scoring accurately over time (e.g., Bell, et al, 2012; BMGF, 2012). Studies offer little insight into how…

  17. CLARREO Cornerstone of the Earth Observing System: Measuring Decadal Change Through Accurate Emitted Infrared and Reflected Solar Spectra and Radio Occultation

    NASA Technical Reports Server (NTRS)

    Sandford, Stephen P.

    2010-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is one of four Tier 1 missions recommended by the recent NRC Decadal Survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to provide accurate, broadly acknowledged climate records that are used to enable validated long-term climate projections that become the foundation for informed decisions on mitigation and adaptation policies that address the effects of climate change on society. The CLARREO mission accomplishes this critical objective through rigorous SI traceable decadal change observations that are sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. These same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. For the first time CLARREO will make highly accurate, global, SI-traceable decadal change observations sensitive to the most critical, but least understood, climate forcings, responses, and feedbacks. The CLARREO breakthrough is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. The required accuracy levels are determined so that climate trend signals can be detected against a background of naturally occurring variability. Climate system natural variability therefore determines what level of accuracy is overkill, and what level is critical to obtain. In this sense, the CLARREO mission requirements are considered optimal from a science value perspective. The accuracy for decadal change traceability to SI standards includes uncertainties associated with instrument calibration, satellite orbit sampling, and analysis methods. Unlike most space missions, the CLARREO requirements are driven not by the instantaneous accuracy of the measurements, but by accuracy in

  18. Describing the observed cosmic neutrinos by interactions of nuclei with matter

    NASA Astrophysics Data System (ADS)

    Winter, Walter

    2014-11-01

    IceCube has observed neutrinos that are presumably of extra-Galactic origin. Since specific sources have not yet been identified, we discuss what could be learned from the conceptual point of view. We use a simple model for neutrino production from the interactions between nuclei and matter, and we focus on the description of the spectral shape and flavor composition observed by IceCube. Our main parameters are the spectral index, maximal energy, magnetic field, and composition of the accelerated nuclei. We show that a cutoff at PeV energies can be achieved by soft enough spectra, a cutoff of the primary energy, or strong enough magnetic fields. These options, however, are difficult to reconcile with the hypothesis that these neutrinos originate from the same sources as the ultrahigh-energy cosmic rays. We demonstrate that heavier nuclei accelerated in the sources may be a possible way out if the maximal energy scales appropriately with the mass number of the nuclei. In this scenario, neutrino observations can actually be used to test the ultrahigh-energy cosmic ray acceleration mechanism. We also emphasize the need for a volume upgrade of the IceCube detector for future precision physics, for which the flavor information becomes a statistically meaningful model discriminator as well as a qualitatively new ingredient.

  19. The CC/DFT Route towards Accurate Structures and Spectroscopic Features for Observed and Elusive Conformers of Flexible Molecules: Pyruvic Acid as Case Study

    PubMed Central

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Cimino, Paola; Penocchio, Emanuele; Puzzarini, Cristina

    2018-01-01

    The structures, relative stabilities as well as the rotational and vibrational spectra of the three low-energy conformers of Pyruvic acid (PA) have been characterized using a state-of-the-art quantum-mechanical approach designed for flexible molecules. By making use of the available experimental rotational constants for several isotopologues of the most stable PA conformer, Tc-PA, the semi-experimental equilibrium structure has been derived. The latter provides a reference for the pure theoretical determination of the equilibrium geometries for all conformers, thus confirming for these structures an accuracy of 0.001 Å and 0.1 deg. for bond lengths and angles, respectively. Highly accurate relative energies of all conformers (Tc-, Tt- and Ct-PA) and of the transition states connecting them are provided along with the thermodynamic properties at low and high temperatures, thus leading to conformational enthalpies accurate to 1 kJ mol−1. Concerning microwave spectroscopy, rotational constants accurate to about 20 MHz are provided for the Tt- and Ct-PA conformers, together with the computed centrifugal-distortion constants and dipole moments required to simulate their rotational spectra. For Ct-PA, vibrational frequencies in the mid-infrared region accurate to 10 cm−1 are reported along with theoretical estimates for the transitions in the near-infrared range, and the corresponding infrared spectrum including fundamental transitions, overtones and combination bands has been simulated. In addition to the new data described above, theoretical results for the Tc- and Tt-PA conformers are compared with all available experimental data to further confirm the accuracy of the hybrid coupled-cluster/density functional theory (CC/DFT) protocol applied in the present study. Finally, we discuss in detail the accuracy of computational models fully based on double-hybrid DFT functionals (mainly at the B2PLYP/aug-cc-pVTZ level) that avoid the use of very expensive CC

  20. An Observation Capability Semantic-Associated Approach to the Selection of Remote Sensing Satellite Sensors: A Case Study of Flood Observations in the Jinsha River Basin

    PubMed Central

    Hu, Chuli; Li, Jie; Lin, Xin

    2018-01-01

    Observation schedules depend upon the accurate understanding of a single sensor’s observation capability and the interrelated observation capability information on multiple sensors. The general ontologies for sensors and observations are abundant. However, few observation capability ontologies for satellite sensors are available, and no study has described the dynamic associations among the observation capabilities of multiple sensors used for integrated observational planning. This limitation results in a failure to realize effective sensor selection. This paper develops a sensor observation capability association (SOCA) ontology model that is resolved around the task-sensor-observation capability (TSOC) ontology pattern. The pattern is developed considering the stimulus-sensor-observation (SSO) ontology design pattern, which focuses on facilitating sensor selection for one observation task. The core aim of the SOCA ontology model is to achieve an observation capability semantic association. A prototype system called SemOCAssociation was developed, and an experiment was conducted for flood observations in the Jinsha River basin in China. The results of this experiment verified that the SOCA ontology based association method can help sensor planners intuitively and accurately make evidence-based sensor selection decisions for a given flood observation task, which facilitates efficient and effective observational planning for flood satellite sensors. PMID:29883425

  1. An Observation Capability Semantic-Associated Approach to the Selection of Remote Sensing Satellite Sensors: A Case Study of Flood Observations in the Jinsha River Basin.

    PubMed

    Hu, Chuli; Li, Jie; Lin, Xin; Chen, Nengcheng; Yang, Chao

    2018-05-21

    Observation schedules depend upon the accurate understanding of a single sensor’s observation capability and the interrelated observation capability information on multiple sensors. The general ontologies for sensors and observations are abundant. However, few observation capability ontologies for satellite sensors are available, and no study has described the dynamic associations among the observation capabilities of multiple sensors used for integrated observational planning. This limitation results in a failure to realize effective sensor selection. This paper develops a sensor observation capability association (SOCA) ontology model that is resolved around the task-sensor-observation capability (TSOC) ontology pattern. The pattern is developed considering the stimulus-sensor-observation (SSO) ontology design pattern, which focuses on facilitating sensor selection for one observation task. The core aim of the SOCA ontology model is to achieve an observation capability semantic association. A prototype system called SemOCAssociation was developed, and an experiment was conducted for flood observations in the Jinsha River basin in China. The results of this experiment verified that the SOCA ontology based association method can help sensor planners intuitively and accurately make evidence-based sensor selection decisions for a given flood observation task, which facilitates efficient and effective observational planning for flood satellite sensors.

  2. Memory conformity affects inaccurate memories more than accurate memories.

    PubMed

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  3. Lunar occultation of Saturn. IV - Astrometric results from observations of the satellites

    NASA Technical Reports Server (NTRS)

    Dunham, D. W.; Elliot, J. L.

    1978-01-01

    The method of determining local lunar limb slopes, and the consequent time scale needed for diameter studies, from accurate occultation timings at two nearby telescopes is described. Results for photoelectric observations made at Mauna Kea Observatory during the occultation of Saturn's satellites on March 30, 1974, are discussed. Analysis of all observations of occultations of Saturn's satellites during 1974 indicates possible errors in the ephemerides of Saturn and its satellites.

  4. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE PAGES

    An, Zhe; Rey, Daniel; Ye, Jingxin; ...

    2017-01-16

The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
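
    A minimal sketch of the standard nudging baseline mentioned in the abstract, applied to the Lorenz-63 system with only one of the three state variables observed. All parameter values (system constants, observation noise, nudging gain) are illustrative assumptions, not values from Whartenby et al. (2013) or Rey et al. (2014a), and the time-delay generalization itself is not implemented here.

      import numpy as np

      def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0/3.0):
          x, y, z = state
          return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

      def step(state, dt, forcing=None):
          # One forward-Euler step (kept deliberately simple).
          rhs = lorenz63(state)
          if forcing is not None:
              rhs = rhs + forcing
          return state + dt*rhs

      dt, n_steps, gain = 0.005, 20000, 20.0
      rng = np.random.default_rng(0)

      truth = np.array([1.0, 1.0, 1.0])
      model = np.array([5.0, -5.0, 20.0])    # deliberately wrong initial state

      errors = []
      for _ in range(n_steps):
          truth = step(truth, dt)
          obs_x = truth[0] + rng.normal(0.0, 0.1)          # only x is observed
          nudge = np.array([gain*(obs_x - model[0]), 0.0, 0.0])
          model = step(model, dt, nudge)
          errors.append(np.linalg.norm(truth - model))

      print("initial error:", round(errors[0], 2), " final error:", round(errors[-1], 2))

    With a sufficiently large gain on the observed component the model trajectory synchronizes with the truth; the time-delay generalization aims to achieve the same with fewer observed variables.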

  5. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Zhe; Rey, Daniel; Ye, Jingxin

The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  6. Highly accurate nephelometric titrimetry.

    PubMed

    Zhan, Xiancheng; Li, Chengrong; Li, Zhiyi; Yang, Xiucen; Zhong, Shuguang; Yi, Tao

    2004-02-01

A method that accurately indicates the end-point of precipitation reactions by measuring the relative intensity of the scattered light in the titrate is presented. A new nephelometric titrator with an internal nephelometric sensor has been devised. The operation of the titrator, including the sensor, and the changes in the turbidity of the titrate and in the intensity of the scattered light are described. The accuracy of nephelometric titrimetry is discussed theoretically. The titration of NaCl with AgNO3 serves as a model; both the relative error and the deviation are within 0.2% under the experimental conditions. The applicability of the titrimetry to pharmaceutical analyses, for example of phenytoin sodium and procaine hydrochloride, is illustrated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association

  7. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  8. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
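
    A minimal sketch of the idea behind the DEB method as summarized above, using a hypothetical response (cantilever tip deflection under a tip load, which scales as the inverse cube of the section height h): the sensitivity equation d(delta)/dh = -3*delta/h is treated as a differential equation and integrated in closed form, which for this simple monomial dependence recovers the exact response, while the one-term Taylor series drifts for large perturbations. The numbers are illustrative assumptions, not taken from the papers above.

      import numpy as np

      h0, delta0 = 0.10, 1.0e-3            # nominal section height [m] and tip deflection [m] (assumed)
      h = np.linspace(0.07, 0.13, 7)       # perturbed heights

      exact = delta0*(h0/h)**3                          # true response, delta proportional to 1/h**3
      taylor = delta0*(1.0 - 3.0*(h - h0)/h0)           # linear Taylor series approximation
      deb = delta0*np.exp(-3.0*np.log(h/h0))            # closed-form solution of d(delta)/dh = -3*delta/h

      for hi, ex, ta, de in zip(h, exact, taylor, deb):
          print(f"h = {hi:.3f} m   exact = {ex:.3e}   Taylor = {ta:.3e}   DEB = {de:.3e}")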

  9. Toward more accurate loss tangent measurements in reentrant cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moyer, R. D.

    1980-05-01

    Karpova has described an absolute method for measurement of dielectric properties of a solid in a coaxial reentrant cavity. His cavity resonance equation yields very accurate results for dielectric constants. However, he presented only approximate expressions for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.

  10. High-accurate optical fiber liquid level sensor

    NASA Astrophysics Data System (ADS)

    Sun, Dexing; Chen, Shouliu; Pan, Chao; Jin, Henghuan

    1991-08-01

A highly accurate optical fiber liquid level sensor is presented. A single-chip microcomputer is used to process and control the signal. The sensor is intrinsically safe and explosion-proof, so it can be applied in any liquid level detection setting, especially in the oil and chemical industries. The theory and experiments on improving the measurement accuracy are described. Over a 10 m measurement range, the relative error is within 0.01%.

  11. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
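
    A minimal sketch of shift recovery by cross-correlating a shared fixed pattern, the generic technique named in the abstract; the synthetic images, noise level, and integer shift are illustrative assumptions, and this is not the IUE processing code.

      import numpy as np

      rng = np.random.default_rng(1)
      pattern = rng.normal(size=(128, 128))            # stand-in for the camera's fixed pattern

      true_shift = (3, -5)                             # (rows, cols) shift applied to the second image
      image_a = pattern + 0.1*rng.normal(size=pattern.shape)
      image_b = np.roll(pattern, true_shift, axis=(0, 1)) + 0.1*rng.normal(size=pattern.shape)

      # Circular cross-correlation via FFTs; its peak gives the shift of image_b relative to image_a.
      xcorr = np.fft.ifft2(np.conj(np.fft.fft2(image_a))*np.fft.fft2(image_b)).real
      peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
      shift = [p if p <= s//2 else p - s for p, s in zip(peak, xcorr.shape)]

      print("recovered shift:", tuple(shift), " true shift:", true_shift)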

  12. Accurate inspiral-merger-ringdown gravitational waveforms for nonspinning black-hole binaries including the effect of subdominant modes

    NASA Astrophysics Data System (ADS)

    Mehta, Ajit Kumar; Mishra, Chandra Kant; Varma, Vijay; Ajith, Parameswaran

    2017-12-01

We present an analytical waveform family describing gravitational waves (GWs) from the inspiral, merger, and ringdown of nonspinning black-hole binaries including the effect of several nonquadrupole modes [(ℓ=2, m=±1), (ℓ=3, m=±3), and (ℓ=4, m=±4), in addition to (ℓ=2, m=±2)]. We first construct spin-weighted spherical-harmonic modes of hybrid waveforms by matching numerical-relativity simulations (with mass ratios 1-10) describing the late inspiral, merger, and ringdown of the binary with post-Newtonian/effective-one-body waveforms describing the early inspiral. An analytical waveform family is constructed in the frequency domain by modeling the Fourier transform of the hybrid waveforms, making use of analytical functions inspired by perturbative calculations. The resulting ready-to-use waveforms are highly faithful (unfaithfulness ≃ 10^-4 to 10^-2) for observation of GWs from nonspinning black-hole binaries and are extremely inexpensive to generate.

  13. In situ accurate determination of the zero time delay between two independent ultrashort laser pulses by observing the oscillation of an atomic excited wave packet.

    PubMed

    Zhang, Qun; Hepburn, John W

    2008-08-15

    We propose a novel method that uses the oscillation of an atomic excited wave packet observed through a pump-probe technique to accurately determine the zero time delay between a pair of ultrashort laser pulses. This physically based approach provides an easy fix for the intractable problem of synchronizing two different femtosecond laser pulses in a practical experimental environment, especially where an in situ time zero measurement with high accuracy is required.

  14. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    NASA Astrophysics Data System (ADS)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving efficiency and accuracy of the geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted at Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them on appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed to catch global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square errors over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various
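
    A minimal sketch of the progressive-sampling ensemble idea described above: networks of growing capacity are trained on growing data chunks and averaged, and growth stops once the held-out error no longer improves. The synthetic data, chunk sizes, and architectures are illustrative assumptions; scikit-learn's MLPRegressor stands in for the neural networks, and no MISR data are used.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      def make_chunk(n):
          # Hypothetical stand-in for a chunk of retrieval data:
          # 4 observation attributes mapped to one geophysical parameter.
          X = rng.uniform(-1.0, 1.0, size=(n, 4))
          y = np.sin(3*X[:, 0]) + X[:, 1]*X[:, 2] + 0.1*rng.normal(size=n)
          return X, y

      X_val, y_val = make_chunk(2000)

      chunk_sizes = [200, 400, 800, 1600, 3200, 6400]
      hidden_sizes = [4, 8, 16, 32, 64, 128]

      members, val_errors = [], []
      for n, h in zip(chunk_sizes, hidden_sizes):
          X, y = make_chunk(n)
          net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
          net.fit(X, y)
          members.append(net)
          # Ensemble prediction = average over all members trained so far.
          pred = np.mean([m.predict(X_val) for m in members], axis=0)
          err = np.mean((pred - y_val)**2)
          val_errors.append(err)
          print(f"chunk={n:5d}  hidden={h:3d}  ensemble MSE={err:.4f}")
          # Stop growing once the ensemble stops improving appreciably.
          if len(val_errors) > 1 and val_errors[-1] > 0.99*val_errors[-2]:
              break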

  15. Determination of accurate vertical atmospheric profiles of extinction and turbulence

    NASA Astrophysics Data System (ADS)

    Hammel, Steve; Campbell, James; Hallenborg, Eric

    2017-09-01

Our ability to generate an accurate vertical profile characterizing the atmosphere from the surface to a point above the boundary layer top is quite rudimentary. The region from a land or sea surface to an altitude of 3000 meters is dynamic and particularly important to the performance of many active optical systems. Accurate and agile instruments are necessary to provide measurements in various conditions, and models are needed to provide the framework and predictive capability necessary for system design and optimization. We introduce some of the path characterization instruments and describe the first work to calibrate and validate them. Along with a verification of measurement accuracy, the tests must also establish each instrument's performance envelope. Measuring these profiles in the field is challenging, and we discuss recent field test activity to address this issue. The Comprehensive Atmospheric Boundary Layer Extinction/Turbulence Resolution Analysis eXperiment (CABLE/TRAX) was conducted in late June 2017. There were two distinct objectives for the experiment: 1) a comparison test of various scintillometers and transmissometers on a homogeneous horizontal path; 2) a vertical profile experiment. In this paper we discuss only the vertical profiling effort, and we describe the instruments that generated data for vertical profiles of absorption, scattering, and turbulence. These three profiles are the core requirements for an accurate assessment of laser beam propagation.

  16. Observational methods in comparative effectiveness research.

    PubMed

    Concato, John; Lawler, Elizabeth V; Lew, Robert A; Gaziano, J Michael; Aslan, Mihaela; Huang, Grant D

    2010-12-01

Comparative effectiveness research (CER) may be defined informally as an assessment of available options for treating specific medical conditions in selected groups of patients. In this context, the most prominent features of CER are the various patient populations, medical ailments, and treatment options involved in any particular project. Yet, each research investigation also has a corresponding study design or "architecture," and in patient-oriented research a common distinction used to describe such designs is that between randomized controlled trials (RCTs) and observational studies. The purposes of this overview, with regard to CER, are to (1) understand how observational studies can provide accurate results, comparable to RCTs; (2) recognize strategies used in selected newer methods for conducting observational studies; (3) review selected observational studies from the Veterans Health Administration; and (4) appreciate the importance of fundamental methodological principles when conducting or evaluating individual studies. Published by Elsevier Inc.

  17. Confidence Region of Least Squares Solution for Single-Arc Observations

    NASA Astrophysics Data System (ADS)

    Principe, G.; Armellin, R.; Lewis, H.

    2016-09-01

The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm, this rises to an estimated minimum of 500,000 objects. Latest generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints, it is likely that long gaps between observations will occur for small objects. This requires determining the space object (SO) orbit and accurately describing the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method taking advantage of the high-order Taylor expansions enabled by differential algebra. In particular, the high-order expansion of the residuals with respect to the state is used to implement an arbitrary-order least-squares solver, avoiding the typical approximations of differential correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performance of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.
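
    For context, a sketch in LaTeX of the classical first-order differential correction that the high-order Taylor expansion of the residuals generalizes; the notation here is generic, not the authors'.

      % First-order least-squares differential correction (Gauss-Newton step).
      \begin{align*}
        \mathbf{r}(\mathbf{x}) &= \mathbf{z} - \mathbf{h}(\mathbf{x})
          && \text{residuals between observations } \mathbf{z} \text{ and predicted measurements } \mathbf{h}(\mathbf{x}) \\
        \mathbf{H}_k &= \left.\frac{\partial \mathbf{h}}{\partial \mathbf{x}}\right|_{\mathbf{x}_k}
          && \text{first-order design matrix} \\
        \mathbf{x}_{k+1} &= \mathbf{x}_k +
          \left(\mathbf{H}_k^{\mathsf{T}}\mathbf{W}\mathbf{H}_k\right)^{-1}
          \mathbf{H}_k^{\mathsf{T}}\mathbf{W}\,\mathbf{r}(\mathbf{x}_k)
          && \text{weighted normal-equation update}
      \end{align*}
      % The differential-algebra approach replaces the linearization in the second line
      % with an arbitrary-order Taylor expansion of r with respect to x.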

  18. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  19. Generating Accurate Urban Area Maps from Nighttime Satellite (DMSP/OLS) Data

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc; Lawrence, William; Elvidge, Christopher

    2000-01-01

There has been increasing interest in the international research community in using the nighttime-acquired "city-lights" data sets collected by the US Defense Meteorological Satellite Program's Operational Linescan System to study issues related to urbanization. Many researchers are interested in using these data to estimate human demographic parameters over large areas and then characterize the interactions between urban development, natural ecosystems, and other aspects of the human enterprise. Many of these attempts rely on an ability to accurately identify urbanized area. However, beyond the simple determination of the loci of human activity, using these data to generate accurate estimates of urbanized area can be problematic. Sensor blooming and registration error can cause large overestimates of urban land based on a simple measure of lit area from the raw data. We discuss these issues, show results of a historical urban growth model for Egypt, and then describe a few basic processing techniques that use geo-spatial analysis to threshold the DMSP data to accurately estimate urbanized areas. Algorithm results are shown for the United States, and an application of the data to estimating the impact of urban sprawl on sustainable agriculture in the US and China is described.
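
    A minimal sketch of the thresholding step referred to above: pixels of a DMSP/OLS-like digital-number (DN) grid are classified as urban above a chosen DN cutoff and the count is converted to area. The array, the threshold, and the pixel size below are illustrative assumptions, not the published algorithm or its calibrated values.

      import numpy as np

      rng = np.random.default_rng(0)
      dn = rng.integers(0, 64, size=(500, 500))       # stand-in for a DMSP/OLS DN grid (values 0-63)

      threshold = 50                                  # DN cutoff separating lit-urban pixels (assumed)
      pixel_area_km2 = 0.86                           # nominal grid-cell area in km^2 (assumed)

      urban_mask = dn >= threshold
      urban_area = urban_mask.sum()*pixel_area_km2
      print(f"urban pixels: {urban_mask.sum()}  estimated urban area: {urban_area:.0f} km^2")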

  20. Accurate energy levels for singly ionized platinum (Pt II)

    NASA Technical Reports Server (NTRS)

    Reader, Joseph; Acquista, Nicolo; Sansonetti, Craig J.; Engleman, Rolf, Jr.

    1988-01-01

    New observations of the spectrum of Pt II have been made with hollow-cathode lamps. The region from 1032 to 4101 A was observed photographically with a 10.7-m normal-incidence spectrograph. The region from 2245 to 5223 A was observed with a Fourier-transform spectrometer. Wavelength measurements were made for 558 lines. The uncertainties vary from 0.0005 to 0.004 A. From these measurements and three parity-forbidden transitions in the infrared, accurate values were determined for 28 even and 72 odd energy levels of Pt II.

  1. Accurate atomistic first-principles calculations of electronic stopping

    DOE PAGES

    Schleife, André; Kanai, Yosuke; Correa, Alfredo A.

    2015-01-20

In this paper, we show that atomistic first-principles calculations based on real-time propagation within time-dependent density functional theory are capable of accurately describing electronic stopping of light projectile atoms in metal hosts over a wide range of projectile velocities. In particular, we employ a plane-wave pseudopotential scheme to solve time-dependent Kohn-Sham equations for representative systems of H and He projectiles in crystalline aluminum. This approach to simulate nonadiabatic electron-ion interaction provides an accurate framework that allows for quantitative comparison with experiment without introducing ad hoc parameters such as effective charges, or assumptions about the dielectric function. Finally, our work clearly shows that this atomistic first-principles description of electronic stopping is able to disentangle contributions due to tightly bound semicore electrons and geometric aspects of the stopping geometry (channeling versus off-channeling) in a wide range of projectile velocities.

  2. The accurate assessment of small-angle X-ray scattering data

    DOE PAGES

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...

    2015-01-23

Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations is defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  3. Detailed observations of the source of terrestrial narrowband electromagnetic radiation

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.

    1982-01-01

    Detailed observations are presented of a region near the terrestrial plasmapause where narrowband electromagnetic radiation (previously called escaping nonthermal continuum radiation) is being generated. These observations show a direct correspondence between the narrowband radio emissions and electron cyclotron harmonic waves near the upper hybrid resonance frequency. In addition, electromagnetic radiation propagating in the Z-mode is observed in the source region which provides an extremely accurate determination of the electron plasma frequency and, hence, density profile of the source region. The data strongly suggest that electrostatic waves and not Cerenkov radiation are the source of the banded radio emissions and define the coupling which must be described by any viable theory.

4. Particle-in-cell Simulations of Waves in a Plasma Described by Kappa Velocity Distribution as Observed in Saturn's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Alves, M. V.; Barbosa, M. V. G.; Simoes, F. J. L., Jr.

    2016-12-01

Observations have shown that several regions in space plasmas exhibit non-Maxwellian distributions with high-energy superthermal tails. Kappa velocity distribution functions can describe many of these regions and have been used since the 1960s. They are well suited to representing superthermal tails in the solar wind and to obtaining plasma parameters within planetary magnetospheres. A set of initial velocities following kappa distribution functions is used in the KEMPO1 particle simulation code to analyze the normal modes of wave propagation. Initial conditions are determined using observed characteristics of Saturn's magnetosphere. Two electron species with different temperatures and densities, plus ions as a third species, are used; each electron population is described by a different kappa index. Particular attention is given to perpendicular propagation (Bernstein modes) and to parallel propagation (Langmuir and electron-acoustic modes). The dispersion relation for the Bernstein modes is strongly influenced by the shape of the velocity distribution and consequently by the value of the kappa index. Simulation results are compared with numerical solutions of the dispersion relation from the literature, and they are in good agreement.
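
    For reference, one common convention for the isotropic kappa velocity distribution used in such studies (other normalizations appear in the literature, and the abstract does not specify which form the KEMPO1 runs were initialized with):

      % Isotropic kappa distribution; theta is an effective thermal speed and the
      % Maxwellian is recovered in the limit kappa -> infinity.
      \begin{equation*}
        f_\kappa(v) \;=\; \frac{n}{(\pi\,\kappa\,\theta^{2})^{3/2}}
        \frac{\Gamma(\kappa+1)}{\Gamma\!\left(\kappa-\tfrac{1}{2}\right)}
        \left(1+\frac{v^{2}}{\kappa\,\theta^{2}}\right)^{-(\kappa+1)},
        \qquad
        \theta^{2}=\frac{2\kappa-3}{\kappa}\,\frac{k_{\mathrm B}T}{m}.
      \end{equation*}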

  5. Extragalactic radio sources - Accurate positions from very-long-baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Rogers, A. E. E.; Counselman, C. C., III; Hinteregger, H. F.; Knight, C. A.; Robertson, D. S.; Shapiro, I. I.; Whitney, A. R.; Clark, T. A.

    1973-01-01

    Relative positions for 12 extragalactic radio sources have been determined via wide-band very-long-baseline interferometry (wavelength of about 3.8 cm). The standard error, based on consistency between results from widely separated periods of observation, appears to be no more than 0.1 sec for each coordinate of the seven sources that were well observed during two or more periods. The uncertainties in the coordinates determined for the other five sources are larger, but in no case exceed 0.5 sec.

  6. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  7. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373
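
    A minimal sketch of the kind of comparison the study describes: estimate a transition matrix between emotion categories from an experience-sampling sequence and correlate it with rated transition likelihoods. All data here are simulated placeholders; the actual studies used extant experience-sampling datasets and participant ratings.

      import numpy as np

      rng = np.random.default_rng(0)
      n_emotions = 5

      # Hypothetical "true" transition matrix and a sampled sequence of emotions.
      true_T = rng.dirichlet(np.ones(n_emotions), size=n_emotions)
      seq = [0]
      for _ in range(20000):
          seq.append(rng.choice(n_emotions, p=true_T[seq[-1]]))

      # Empirical transition rates from the sequence.
      counts = np.zeros((n_emotions, n_emotions))
      for a, b in zip(seq[:-1], seq[1:]):
          counts[a, b] += 1
      empirical_T = counts/counts.sum(axis=1, keepdims=True)

      # Hypothetical participant ratings, modeled as a noisy view of the true transitions.
      ratings = np.clip(true_T + rng.normal(0.0, 0.05, true_T.shape), 0.0, None)

      accuracy = np.corrcoef(empirical_T.ravel(), ratings.ravel())[0, 1]
      print(f"correlation between rated and experienced transition likelihoods: {accuracy:.2f}")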

  8. A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Stamer, Torsten; Inutsuka, Shu-ichiro

    2018-06-01

    We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.

  9. Highly accurate FTIR observations from the scanning HIS aircraft instrument

    NASA Astrophysics Data System (ADS)

    Revercomb, Henry E.; Tobin, David C.; Knuteson, Robert O.; Best, Fred A.; Smith, William L., Sr.; van Delst, Paul F. W.; LaPorte, Daniel D.; Ellington, Scott D.; Werner, Mark W.; Dedecker, Ralph G.; Garcia, Raymond K.; Ciganovich, Nick N.; Howell, Hugh B.; Olson, Erik R.; Dutcher, Steven B.; Taylor, Joseph K.

    2005-01-01

Development in the mid-1980s of the High-resolution Interferometer Sounder (HIS) instrument for the high-altitude NASA ER-2 aircraft demonstrated the capability for advanced atmospheric temperature and water vapor sounding and set the stage for new satellite instruments that are now becoming a reality [AIRS(2002), CrIS(2006), IASI(2006), GIFTS(200?), HES(2013)]. Follow-on developments at the University of Wisconsin that employ Fourier Transform Infrared (FTIR) for Earth observations include the ground-based Atmospheric Emitted Radiance Interferometer (AERI) and the new Scanning HIS aircraft instrument. The Scanning HIS is a smaller version of the original HIS that uses cross-track scanning to enhance spatial coverage. Scanning HIS and its close cousin, the NPOESS Airborne Sounder Testbed (NAST), are being used for satellite instrument validation and for atmospheric research. A novel detector configuration on Scanning HIS allows the incorporation of a single focal plane and cooler with three or four spectral bands that view the same spot on the ground. The calibration accuracy of the S-HIS and results from recent field campaigns are presented, including validation comparisons with the NASA EOS infrared observations (AIRS and MODIS). Aircraft comparisons of this type provide a mechanism for periodically testing the absolute calibration of spacecraft instruments with instrumentation for which the calibration can be carefully maintained on the ground. This capability is especially valuable for assuring the long-term consistency and accuracy of climate observations, including those from the NASA EOS spacecraft (Terra, Aqua and Aura) and the new complement of NPOESS operational instruments. It is expected that aircraft flights of the S-HIS and the NAST will be used to check the long-term stability of AIRS and the NPOESS operational follow-on sounder, the Cross-track Infrared Sounder (CrIS), over the life of the mission.

  10. The Calculation of Accurate Harmonic Frequencies of Large Molecules: The Polycyclic Aromatic Hydrocarbons, a Case Study

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  11. The calculation of accurate harmonic frequencies of large molecules: the polycyclic aromatic hydrocarbons, a case study

    NASA Astrophysics Data System (ADS)

    Bauschlicher, Charles W.; Langhoff, Stephen R.

    1997-07-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Møller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral polycyclic aromatic hydrocarbons (PAHs) where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  12. LiveDescribe: Can Amateur Describers Create High-Quality Audio Description?

    ERIC Educational Resources Information Center

    Branje, Carmen J.; Fels, Deborah I.

    2012-01-01

    Introduction: The study presented here evaluated the usability of the audio description software LiveDescribe and explored the acceptance rates of audio description created by amateur describers who used LiveDescribe to facilitate the creation of their descriptions. Methods: Twelve amateur describers with little or no previous experience with…

13. Observation of the enzymatic cycles of DNA topoisomerases by single-molecule micromanipulation

    NASA Astrophysics Data System (ADS)

    Strick, Terence R.; Charvin, Gilles; Dekker, Nynke H.; Allemand, Jean-François; Bensimon, David; Croquette, Vincent

In this article, we describe single-molecule assays using magnetic traps and apply these assays to topoisomerase enzymes, which unwind and disentangle DNA molecules. First, the elasticity of a single DNA molecule is characterized using the magnetic trap. We show that a twisting constraint may be easily applied and that its effect upon DNA may be measured accurately. We then describe how topoisomerase activity may be observed at the single-molecule level, giving direct access to important biological parameters of the enzyme such as velocity and processivity. Furthermore, individual cycles of unwinding can be observed in real time, which permits an accurate characterization of the enzyme's biochemical cycle. The data treatment required to identify and analyze individual topoisomerization cycles is presented in detail. This analysis is applicable to a wide variety of molecular motors. To cite this article: T.R. Strick et al., C. R. Physique 3 (2002) 595-618.

  14. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.

  15. A new coarse-grained model for E. coli cytoplasm: accurate calculation of the diffusion coefficient of proteins and observation of anomalous diffusion.

    PubMed

    Hasnain, Sabeeha; McClendon, Christopher L; Hsu, Monica T; Jacobson, Matthew P; Bandyopadhyay, Pradipta

    2014-01-01

    A new coarse-grained model of the E. coli cytoplasm is developed by describing the proteins of the cytoplasm as flexible units consisting of one or more spheres that follow Brownian dynamics (BD), with hydrodynamic interactions (HI) accounted for by a mean-field approach. Extensive BD simulations were performed to calculate the diffusion coefficients of three different proteins in the cellular environment. The results are in close agreement with experimental or previously simulated values, where available. Control simulations without HI showed that use of HI is essential to obtain accurate diffusion coefficients. Anomalous diffusion inside the crowded cellular medium was investigated with Fractional Brownian motion analysis, and found to be present in this model. By running a series of control simulations in which various forces were removed systematically, it was found that repulsive interactions (volume exclusion) are the main cause for anomalous diffusion, with a secondary contribution from HI.
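
    A minimal sketch (not the authors' code) of the kind of analysis mentioned above: compute the mean-squared displacement (MSD) of a trajectory and fit MSD(t) proportional to t^alpha, where alpha < 1 indicates subdiffusion. The trajectory below is ordinary Brownian motion, so the fitted exponent should come out near 1; all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      dt, n_steps, D = 1.0e-3, 100000, 0.5
      steps = rng.normal(0.0, np.sqrt(2.0*D*dt), size=(n_steps, 3))
      traj = np.cumsum(steps, axis=0)

      # MSD at logarithmically spaced lag times.
      lags = np.unique(np.logspace(0, 3, 20).astype(int))
      msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1)) for lag in lags])

      # Fit log(MSD) = alpha*log(t) + const to extract the anomalous exponent.
      alpha, _ = np.polyfit(np.log(lags*dt), np.log(msd), 1)
      print(f"anomalous exponent alpha = {alpha:.2f}  (1.0 expected for free diffusion)")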

  16. How accurately can the microcanonical ensemble describe small isolated quantum systems?

    NASA Astrophysics Data System (ADS)

    Ikeda, Tatsuhiko N.; Ueda, Masahito

    2015-08-01

We numerically investigate quantum quenches of a nonintegrable hard-core Bose-Hubbard model to test the accuracy of the microcanonical ensemble in small isolated quantum systems. We show that, in a certain range of system size, the accuracy increases with the dimension of the Hilbert space D as 1/D. We ascribe this rapid improvement to the absence of correlations between many-body energy eigenstates. Outside of that range, the accuracy is found to scale either as 1/√D or algebraically with the system size.

  17. Accurate analytic solution of chemical master equations for gene regulation networks in a single cell

    NASA Astrophysics Data System (ADS)

    Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun

    2018-01-01

Studying gene regulation networks in a single cell is an important and active research topic in molecular biology. Such processes can be described by chemical master equations (CMEs). We propose a Hamilton-Jacobi equation method with finite-size corrections to solve such CMEs accurately in the intermediate region of switching, where the switching rate is comparable to the fast protein production rate. We applied this approach to a model of self-regulating proteins [H. Ge et al., Phys. Rev. Lett. 114, 078101 (2015), 10.1103/PhysRevLett.114.078101] and found that, as a parameter related to the inducer concentration increases, the probability distribution of protein production changes from unimodal to bimodal and then back to unimodal, consistent with the phenotype switching observed in a single cell.
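
    A minimal sketch of solving a chemical master equation directly, by truncating the state space and finding the stationary distribution of the generator, for a generic two-state (telegraph) gene model rather than the self-regulating model of Ge et al.; with slow switching the protein-number distribution is bimodal, the qualitative behavior discussed above. All rates and the truncation are illustrative assumptions.

      import numpy as np

      N = 150                       # protein copy-number truncation
      k_on, k_off = 0.05, 0.05      # slow gene switching (illustrative)
      k_p, gamma = 50.0, 1.0        # production rate (ON state) and degradation rate

      def idx(s, n):                # state index: s = 0 (OFF) or 1 (ON), n = protein number
          return s*(N + 1) + n

      M = 2*(N + 1)
      Q = np.zeros((M, M))          # generator: Q[i, j] = rate of jump i -> j (i != j)

      for s in (0, 1):
          for n in range(N + 1):
              i = idx(s, n)
              Q[i, idx(1 - s, n)] += k_on if s == 0 else k_off     # gene switching
              if s == 1 and n < N:
                  Q[i, idx(s, n + 1)] += k_p                       # production (ON only)
              if n > 0:
                  Q[i, idx(s, n - 1)] += gamma*n                   # degradation

      np.fill_diagonal(Q, -Q.sum(axis=1))        # probability conservation

      # Stationary distribution: left null vector of Q (eigenvector of Q^T for eigenvalue ~0).
      w, v = np.linalg.eig(Q.T)
      pi = np.real(v[:, np.argmin(np.abs(w))])
      pi = np.abs(pi)/np.abs(pi).sum()

      p_n = pi[:N + 1] + pi[N + 1:]              # marginal over protein number
      peaks = [n for n in range(1, N) if p_n[n] > 1e-6 and p_n[n] > p_n[n - 1] and p_n[n] > p_n[n + 1]]
      print("P(n=0) =", round(float(p_n[0]), 3), " interior peaks of P(n) at n =", peaks)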

  18. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

Determining accurate distances to nearby or distant galaxies is a conceptually simple yet practically complicated task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it because of its morphology, its non-uniform reddening, and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies and thereby help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae stars found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique to images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a

  19. pyQms enables universal and accurate quantification of mass spectrometry data.

    PubMed

    Leufken, Johannes; Niehues, Anna; Sarin, L Peter; Wessel, Florian; Hippler, Michael; Leidel, Sebastian A; Fufezan, Christian

    2017-10-01

Quantitative mass spectrometry (MS) is a key technique in many research areas (1), including proteomics, metabolomics, glycomics, and lipidomics. Because all of the corresponding molecules can be described by chemical formulas, universal quantification tools are highly desirable. Here, we present pyQms, an open-source software package for accurate quantification of all types of molecules measurable by MS. pyQms uses isotope pattern matching, which offers an accurate quality assessment of all quantifications and the ability to directly incorporate mass spectrometer accuracy. Due to its universal design, pyQms is applicable to every research field, labeling strategy, and acquisition technique. This gives researchers great flexibility to design experiments employing innovative and hitherto unexplored labeling strategies. Importantly, pyQms performs very well in accurately quantifying partially labeled proteomes at large scale and high throughput, the most challenging task for a quantification algorithm. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
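
    A minimal sketch of the isotope-pattern-matching idea described above: a theoretical (nominal-mass, aggregated) isotope distribution is built by convolving elemental isotope abundances and then scored against measured intensities with a normalized dot product. This is a generic illustration under simplifying assumptions, not the pyQms API or its scoring scheme; the "measured" intensities are hypothetical.

      import numpy as np

      # Approximate natural isotope abundances per element (index = number of extra neutrons).
      ISOTOPES = {
          "C": [0.9893, 0.0107],
          "H": [0.999885, 0.000115],
          "N": [0.99636, 0.00364],
          "O": [0.99757, 0.00038, 0.00205],
          "S": [0.9499, 0.0075, 0.0425, 0.0, 0.0001],
      }

      def isotope_pattern(formula, n_peaks=6):
          """Aggregated nominal-mass isotope pattern for a formula like {'C': 6, 'H': 12, 'O': 6}."""
          pattern = np.array([1.0])
          for element, count in formula.items():
              for _ in range(count):
                  pattern = np.convolve(pattern, ISOTOPES[element])
          pattern = pattern[:n_peaks]
          return pattern/pattern.sum()

      def match_score(theory, measured):
          """Cosine similarity between theoretical and measured peak intensities."""
          measured = np.asarray(measured, dtype=float)
          n = min(len(theory), len(measured))
          t, m = theory[:n], measured[:n]
          return float(np.dot(t, m)/(np.linalg.norm(t)*np.linalg.norm(m)))

      glucose = {"C": 6, "H": 12, "O": 6}
      theory = isotope_pattern(glucose)
      measured = [0.92, 0.065, 0.014, 0.001]          # hypothetical measured intensities
      print("theoretical pattern:", np.round(theory, 4))
      print("match score:", round(match_score(theory, measured), 4))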

  20. Some strategies to address the challenges of collecting observational data in a busy clinical environment.

    PubMed

    Jackson, Debra; McDonald, Glenda; Luck, Lauretta; Waine, Melissa; Wilkes, Lesley

    2016-01-01

    Studies drawing on observational methods can provide vital data to enhance healthcare. However, collecting observational data in clinical settings is replete with challenges, particularly where multiple data-collecting observers are used. Observers collecting data require shared understanding and training to ensure data quality, and particularly, to confirm accurate and consistent identification, discrimination and recording of data. The aim of this paper is to describe strategies for preparing and supporting multiple researchers tasked with collecting observational data in a busy, and often unpredictable, hospital environment. We hope our insights might assist future researchers undertaking research in similar settings.

  1. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization that should lead to high-accuracy horizon sensing minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and finally the performance of the sensor is evaluated under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  2. Towards a Density Functional Theory Exchange-Correlation Functional able to describe localization/delocalization

    NASA Astrophysics Data System (ADS)

    Mattsson, Ann E.; Wills, John M.

    2013-03-01

The inability to computationally describe the physics governing the properties of actinides and their alloys is the poster child for the failure of existing Density Functional Theory exchange-correlation functionals. The intricate competition between localization and delocalization of the electrons present in these materials exposes the limitations of functionals designed to properly describe only one or the other situation. We will discuss the manifestation of this competition in real materials and proposals for how to construct a functional able to accurately describe the properties of these materials. In addition, we will discuss both the importance of using the Dirac equation to describe the relativistic effects in these materials and the connection to the physics of transition metal oxides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  3. Accurate determination of the geoid undulation N

    NASA Astrophysics Data System (ADS)

    Lambrou, E.; Pantazis, G.; Balodimos, D. D.

    2003-04-01

    This work, which is related to the activities of the CERGOP Study Group Geodynamics of the Balkan Peninsula, presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components xi, eta of the deflection of the vertical using modern geodetic instruments (digital total station and GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Λ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates in a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed through a digital total station with electronic registering of angles and time. The required accuracy of the values of the coordinates is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable and can be set up in the field very quickly. Combined with a streamlined data reduction procedure and the use of up-to-date astrometric data, the values of the components xi, eta of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations and where other methods are not capable of giving accurate and reliable results.

  4. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    ERIC Educational Resources Information Center

    Deliyski, Dimitar D.; Hillman, Robert E.; Mehta, Daryush D.

    2015-01-01

    Purpose: The authors discuss the rationale behind the term "laryngeal high-speed videoendoscopy" to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method: Commentary on the advantages of using accurate and consistent terminology in the field of voice research is…

  5. Describing wildland surface fuel loading for fire management: A review of approaches, methods and systems

    Treesearch

    Robert E. Keane

    2013-01-01

    Wildland fuelbeds are exceptionally complex, consisting of diverse particles of many sizes, types and shapes with abundances and properties that are highly variable in time and space. This complexity makes it difficult to accurately describe, classify, sample and map fuels for wildland fire research and management. As a result, many fire behaviour and effects software...

  6. Accurate assessment and identification of naturally occurring cellular cobalamins.

    PubMed

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V; Moreira, Edward S; Brasch, Nicola E; Jacobsen, Donald W

    2008-01-01

    Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo beta-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Experiments were designed to: 1) assess beta-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable beta-axial ligands. The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e. "cold trapping"), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in the extracts prepared without excess aquacobalamin is not detected in extracts prepared with cold trapping. This demonstrates that beta-ligand exchange occurs with non-covalently bound beta-ligands. The exception to this observation is cyanocobalamin with a non-exchangeable CN- group. It is now possible to obtain accurate profiles of cellular cobalamins.

  7. Accurate assessment and identification of naturally occurring cellular cobalamins

    PubMed Central

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V.; Moreira, Edward S.; Brasch, Nicola E.; Jacobsen, Donald W.

    2009-01-01

    Background Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo β-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Methods Experiments were designed to: 1) assess β-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) to establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable β-axial ligands. Results The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., “cold trapping”), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. Conclusions This demonstrates that β-ligand exchange occurs with non-covalently bound β-ligands. The exception to this observation is cyanocobalamin with a non-covalent but non-exchangeable CN− group. It is now possible to obtain accurate profiles of cellular cobalamins. PMID:18973458

  8. Accurate high-speed liquid handling of very small biological samples.

    PubMed

    Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F

    1993-08-01

    Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in manually as well as automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in sequencing projects with high throughput). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with a viscosity between 0.5 and 500 mPa·s. The obtainable drop sizes range from 5 picoliters to a few nanoliters with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be excellently suitable for the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.

  9. Accurate simulation of backscattering spectra in the presence of sharp resonances

    NASA Astrophysics Data System (ADS)

    Barradas, N. P.; Alves, E.; Jeynes, C.; Tosaki, M.

    2006-06-01

    In elastic backscattering spectrometry, the shape of the observed spectrum due to resonances in the nuclear scattering cross-section is influenced by many factors. If the energy spread of the beam before interaction is larger than the resonance width, then a simple convolution with the energy spread on exit and with the detection system resolution will lead to a calculated spectrum with a resonance much sharper than the observed signal. Also, the yield from a thin layer will not be calculated accurately. We have developed an algorithm for the accurate simulation of backscattering spectra in the presence of sharp resonances. Albeit approximate, the algorithm leads to dramatic improvements in the quality and accuracy of the simulations. It is simple to implement and leads to only small increases of the calculation time, being thus suitable for routine data analysis. We show different experimental examples, including samples with roughness and porosity.

  10. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
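
    A minimal sketch of the underlying idea, under assumed values for the interferometer path difference and a single global wavelength offset (the real calibration resolves pixel-dependent errors): the unbalanced Michelson imprints a channelled spectrum proportional to 1 + cos(2π·OPD/λ), so the measured spectrum can be compared against this prediction to estimate wavelength errors.

```python
import numpy as np

OPD_UM = 30.0  # assumed optical path difference of the unbalanced Michelson, micrometers

def predicted_fringes(wavelengths_nm):
    """Channelled spectrum of an unbalanced Michelson: 0.5 * (1 + cos(2*pi*OPD/lambda))."""
    lam_um = np.asarray(wavelengths_nm) / 1000.0
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * OPD_UM / lam_um))

def wavelength_correction(factory_wavelengths_nm, measured, search_nm=1.0, step_nm=0.01):
    """Find the global wavelength offset (nm) that best aligns the measured
    spectrum with the predicted fringe pattern (simple 1-D grid search)."""
    best_offset, best_err = 0.0, np.inf
    meas = (measured - measured.min()) / (measured.max() - measured.min())
    for offset in np.arange(-search_nm, search_nm + step_nm, step_nm):
        model = predicted_fringes(factory_wavelengths_nm + offset)
        err = np.sum((meas - model) ** 2)
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset

if __name__ == "__main__":
    # Synthetic example: the "true" calibration is 0.3 nm off the factory one.
    pixels_nm = np.linspace(500.0, 700.0, 2048)
    measured = predicted_fringes(pixels_nm + 0.3)
    print("estimated offset:", round(wavelength_correction(pixels_nm, measured), 3), "nm")
```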

  11. Accurate mass measurements and their appropriate use for reliable analyte identification.

    PubMed

    Godfrey, A Ruth; Brenton, A Gareth

    2012-09-01

    Accurate mass instrumentation is becoming increasingly available to non-expert users. These data can be misused, particularly for analyte identification. Current best practice in assigning potential elemental formulae for reliable analyte identification has been described with modern informatic approaches to analyte elucidation, including chemometric characterisation, data processing and searching using facilities such as the Chemical Abstracts Service (CAS) Registry and Chemspider.
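
    As a toy illustration of the formula-assignment step only: the monoisotopic masses below are standard values, but the brute-force CxHyNzOw search, its bounds, and the tolerance are generic assumptions rather than the authors' workflow, and real best practice adds further chemical plausibility filters on top of a mass match.

```python
from itertools import product

# Monoisotopic masses (u) of common elements in small organic analytes.
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def candidate_formulae(target_mass, tol_ppm=5.0, max_counts=(40, 80, 10, 20)):
    """Brute-force search for CxHyNzOw formulae whose monoisotopic mass lies
    within tol_ppm of the measured neutral mass."""
    tol = target_mass * tol_ppm * 1e-6
    hits = []
    for c, h, n, o in product(*(range(m + 1) for m in max_counts)):
        mass = c * MASS["C"] + h * MASS["H"] + n * MASS["N"] + o * MASS["O"]
        if abs(mass - target_mass) <= tol:
            hits.append((f"C{c}H{h}N{n}O{o}", mass))
    return sorted(hits, key=lambda x: abs(x[1] - target_mass))

if __name__ == "__main__":
    # Hypothetical measured neutral monoisotopic mass (caffeine, C8H10N4O2, is ~194.0804 u).
    for formula, mass in candidate_formulae(194.0804, tol_ppm=5.0)[:5]:
        print(formula, round(mass, 4))
```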

  12. Can All Cosmological Observations Be Accurately Interpreted with a Unique Geometry?

    NASA Astrophysics Data System (ADS)

    Fleury, Pierre; Dupuy, Hélène; Uzan, Jean-Philippe

    2013-08-01

    The recent analysis of the Planck results reveals a tension between the best fits for (Ωm0, H0) derived from the cosmic microwave background or baryonic acoustic oscillations on the one hand, and the Hubble diagram on the other hand. These observations probe the Universe on very different scales since they involve light beams of very different angular sizes; hence, the tension between them may indicate that they should not be interpreted the same way. More precisely, this Letter questions the accuracy of using only the (perturbed) Friedmann-Lemaître geometry to interpret all the cosmological observations, regardless of their angular or spatial resolution. We show that using an inhomogeneous “Swiss-cheese” model to interpret the Hubble diagram allows us to reconcile the inferred value of Ωm0 with the Planck results. Such an approach does not require us to invoke new physics nor to violate the Copernican principle.

  13. Accurate determination of segmented X-ray detector geometry

    PubMed Central

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton

    2015-01-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117
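
    The refinement step can be sketched as a small least-squares problem per module. The 2-D rigid-body model (in-plane rotation plus translation), the synthetic peak lists, and the noise level below are illustrative assumptions, not the published software; the real procedure also refines the detector distance.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, predicted_xy, observed_xy):
    """Residuals between observed Bragg-peak positions and predicted spot
    positions after applying an in-plane rotation and translation to a module."""
    theta, dx, dy = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    transformed = predicted_xy @ R.T + np.array([dx, dy])
    return (transformed - observed_xy).ravel()

def refine_module(predicted_xy, observed_xy):
    """Refine rotation (rad) and translation (pixels) of one detector module."""
    fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(predicted_xy, observed_xy))
    return fit.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    predicted = rng.uniform(0, 200, size=(500, 2))          # indexed spot predictions
    true_theta, true_shift = 0.004, np.array([1.8, -0.7])   # synthetic misalignment
    c, s = np.cos(true_theta), np.sin(true_theta)
    observed = predicted @ np.array([[c, -s], [s, c]]).T + true_shift
    observed += rng.normal(scale=0.2, size=observed.shape)  # peak-finding noise
    print("refined [theta, dx, dy]:", np.round(refine_module(predicted, observed), 4))
```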

  14. Accurate determination of segmented X-ray detector geometry

    DOE PAGES

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; ...

    2015-10-22

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical formore » many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. Furthermore, we show that the refined detector geometry greatly improves the results of experiments.« less

  15. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kearny, C.H.

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy ofmore » ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify

  16. Preoperative planning and intraoperative technique for accurate realignment of the Dwyer calcaneal osteotomy.

    PubMed

    Lamm, Bradley M; Gesheff, Martin G; Salton, Heather L; Dupuis, Travis W; Zeni, Ferras

    2012-01-01

    The Dwyer calcaneal osteotomy is an effective procedure for the correction of calcaneal varus deformity. However, no intraoperative method has been described to determine the amount of bone resection. We describe a simple intraoperative method for assuring accurate bone resection and measure the realignment effects of the Dwyer calcaneal osteotomy. We also review radiographic outcomes associated with 20 Dwyer calcaneal osteotomies (in 17 patients) using the intraoperative realignment technique described in this report. Preoperative and postoperative radiographs at a mean of 2.5 (range 1.5 to 5) years taken after Dwyer osteotomy were measured and compared, which revealed a mean reduction in calcaneal varus of 18° (range 2° to 36°) (p < .001), a mean decrease in the calcaneal inclination angle of 5° (range -40° to 7°) (p < .05), a mean decrease in medial calcaneal translation of 10 (range 0 to 18) mm (p < .001) relative to the tibia, and a mean dorsal translation of 2 (range 0 to 7) mm (p = .002). In an effort to structurally realign the calcaneus to a more rectus alignment, by means of Dwyer osteotomy, we recommend the use of the intraoperative bone wedge resection technique described in this report. Our experience with the patients described in this report demonstrates the usefulness of the intraoperative method that we describe in order to accurately restore the axial tibial and calcaneal relationship. Copyright © 2012 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  17. Observation of Children's Teeth as a Diagnostic Aid

    PubMed Central

    Gibson, Wm. M.; Conchie, John M.

    1964-01-01

    Current interest in tetracycline staining of teeth and other enamel defects led to this review. In the handicapped child, structural defects seen in the dental enamel may provide a most accurate etiological clue. The method of determining the time of insult is described. Comments are made on seven states in which enamel dysplasia may be frequently observed. A simple means of identifying tetracycline pigment incorporated in dental enamel is outlined. Bilirubin staining of teeth is also shown and warnings are given about the indelible nature of these pigments. PMID:14118684

  18. PIC simulations of a three component plasma described by Kappa distribution functions as observed in Saturn's magnetosphere

    NASA Astrophysics Data System (ADS)

    Barbosa, Marcos; Alves, Maria Virginia; Simões Junior, Fernando

    2016-04-01

    In plasmas out of thermodynamic equilibrium the particle velocity distribution can be described by the so-called Kappa distribution. These velocity distribution functions are a generalization of the Maxwellian distribution. Since 1960, Kappa velocity distributions have been observed in several regions of interplanetary space and astrophysical plasmas. Using the KEMPO1 particle simulation code, modified to introduce Kappa distribution functions as initial conditions for particle velocities, the normal modes of propagation were analyzed in a plasma containing two species of electrons with different temperatures and densities and ions as a third species. This type of plasma is usually found in magnetospheres such as Saturn's. Numerical solutions for the dispersion relation for such a plasma predict the presence of an electron-acoustic mode, besides the Langmuir and ion-acoustic modes. In the presence of an ambient magnetic field, the perpendicular propagation (Bernstein mode) also changes, as compared to a Maxwellian plasma, due to the Kappa distribution function. Here results for simulations with and without an external magnetic field are presented. The parameters for the initial conditions in the simulations were obtained from the Cassini spacecraft data. Simulation results are compared with numerical solutions of the dispersion relation obtained in the literature and they are in good agreement.
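
    One common way to load such initial velocities, sketched below, uses the fact that a 1-D kappa distribution f(v) ∝ [1 + v²/(κθ²)]^(−κ) is a rescaled Student-t distribution with 2κ−1 degrees of freedom. The parametrization and the chosen κ, θ, and density values are illustrative and are not taken from the KEMPO1 code or the Cassini data.

```python
import numpy as np

def sample_kappa_velocities(n, kappa, theta, rng=None):
    """Draw n velocities from a 1-D kappa distribution
    f(v) ~ [1 + v**2 / (kappa * theta**2)]**(-kappa),
    using its equivalence to a rescaled Student-t with 2*kappa - 1 dof."""
    rng = np.random.default_rng() if rng is None else rng
    dof = 2.0 * kappa - 1.0
    t = rng.standard_t(dof, size=n)
    return theta * np.sqrt(kappa / dof) * t

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Hypothetical two-electron-component plasma: cold/dense and hot/tenuous.
    v_cold = sample_kappa_velocities(80_000, kappa=4.0, theta=1.0, rng=rng)
    v_hot = sample_kappa_velocities(20_000, kappa=2.5, theta=6.0, rng=rng)
    v = np.concatenate([v_cold, v_hot])
    print("std of loaded velocities:", round(float(v.std()), 3))
    # The suprathermal tail (here: |v| > 5 * theta_cold) is what distinguishes
    # a kappa plasma from a Maxwellian with the same core temperature.
    print("fraction beyond 5*theta_cold:", round(float(np.mean(np.abs(v) > 5.0)), 4))
```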

  19. Judgements about the relation between force and trajectory variables in verbally described ballistic projectile motion.

    PubMed

    White, Peter A

    2013-01-01

    How accurate are explicit judgements about familiar forms of object motion, and how are they made? Participants judged the relations between force exerted in kicking a soccer ball and variables that define the trajectory of the ball: launch angle, maximum height attained, and maximum distance reached. Judgements tended to conform to a simple heuristic that judged force tends to increase as maximum height and maximum distance increase, with launch angle not being influential. Support was also found for the converse prediction, that judged maximum height and distance tend to increase as the amount of force described in the kick increases. The observed judgemental tendencies did not resemble the objective relations, in which force is a function of interactions between the trajectory variables. This adds to a body of research indicating that practical knowledge based on experiences of actions on objects is not available to the processes that generate judgements in higher cognition and that such judgements are generated by simple rules that do not capture the objective interactions between the physical variables.

  20. Accurate Measurements of the Local Deuterium Abundance from HST Spectra

    NASA Technical Reports Server (NTRS)

    Linsky, Jeffrey L.

    1996-01-01

    An accurate measurement of the primordial value of D/H would provide a critical test of nucleosynthesis models for the early universe and the baryon density. I briefly summarize the ongoing HST observations of the interstellar H and D Lyman-alpha absorption for lines of sight to nearby stars and comment on recent reports of extragalactic D/H measurements.

  1. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  2. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
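
    The reconstruction-then-upwind structure can be illustrated for linear advection with a generic monotonized-central (MC) limited slope. This is a standard MUSCL-type sketch, not the paper's exact median-function constraint, steepening technique, or Euler-equation solvers.

```python
import numpy as np

def minmod3(a, b, c):
    """Zero if the arguments disagree in sign, else the smallest magnitude."""
    s = np.sign(a)
    agree = (np.sign(b) == s) & (np.sign(c) == s)
    mag = np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c)))
    return np.where(agree, s * mag, 0.0)

def advect(u, cfl, steps):
    """Second-order TVD flux-limiter update of u_t + a u_x = 0 on a periodic grid,
    with cfl = a*dt/dx for advection speed a > 0."""
    for _ in range(steps):
        um, up = np.roll(u, 1), np.roll(u, -1)
        # Reconstruction: MC-limited slope in each cell.
        slope = minmod3(0.5 * (up - um), 2.0 * (u - um), 2.0 * (up - u))
        # Upwind step (a > 0): the interface uses the left state, with a
        # Lax-Wendroff-type (1 - cfl) time correction for second-order accuracy.
        u_face = u + 0.5 * (1.0 - cfl) * slope
        u = u - cfl * (u_face - np.roll(u_face, 1))
    return u

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # square pulse
    u = advect(u0.copy(), cfl=0.5, steps=400)        # 400 * 0.5 * dx = one full period
    # A TVD reconstruction keeps the solution within [0, 1]: no new extrema.
    print("min, max after one period:", round(float(u.min()), 6), round(float(u.max()), 6))
```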

  3. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
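
    For orientation, the nonequilibrium work route rests on the Jarzynski relation ΔF = −kT·ln⟨exp(−W/kT)⟩; a minimal estimator with a numerically stable log-sum-exp and synthetic work values is sketched below. The QM nonequilibrium work method discussed in the chapter builds considerably more machinery on top of this relation.

```python
import numpy as np

def jarzynski_free_energy(work_kcal, temperature_K=300.0):
    """Estimate a free-energy difference (kcal/mol) from nonequilibrium work
    values via the Jarzynski relation, using log-sum-exp for stability."""
    kT = 0.0019872041 * temperature_K     # gas constant in kcal/(mol K)
    w = np.asarray(work_kcal) / kT
    # delta_F = -kT * ln( mean( exp(-w) ) ), evaluated without overflow
    m = np.min(w)
    log_mean_exp = -m + np.log(np.mean(np.exp(-(w - m))))
    return -kT * log_mean_exp

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic Gaussian work distribution: mean 5 kcal/mol, std 1 kcal/mol.
    work = rng.normal(5.0, 1.0, size=5000)
    # Analytic Gaussian result: <W> - var(W)/(2 kT) ~ 4.16 kcal/mol;
    # finite-sample estimates are biased slightly high.
    print("Jarzynski estimate:", round(jarzynski_free_energy(work), 3), "kcal/mol")
```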

  4. The remarkable ability of turbulence model equations to describe transition

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper demonstrates how well the k-omega turbulence model describes the nonlinear growth of flow instabilities from laminar flow into the turbulent flow regime. Viscous modifications are proposed for the k-omega model that yield close agreement with measurements and with Direct Numerical Simulation results for channel and pipe flow. These modifications permit prediction of subtle sublayer details such as maximum dissipation at the surface, k ∝ y^2 as y → 0, and the sharp peak value of k near the surface. With two transition-specific closure coefficients, the model equations accurately predict transition for an incompressible flat-plate boundary layer. The analysis also shows why the k-epsilon model is so difficult to use for predicting transition.

  5. A novel approach to describing and detecting performance anti-patterns

    NASA Astrophysics Data System (ADS)

    Sheng, Jinfang; Wang, Yihan; Hu, Peipei; Wang, Bin

    2017-08-01

    An anti-pattern, as an extension of the pattern concept, describes a widely used poor solution that can negatively affect application systems. To address the shortcomings of existing anti-pattern descriptions, an anti-pattern description method based on first-order predicates is proposed. This method synthesizes anti-pattern forms and symptoms, which makes the description more accurate while providing good scalability and versatility. To improve the accuracy of anti-pattern detection, a Bayesian classification method is applied to validate detection results, which reduces both false negatives and false positives. Finally, the proposed approach is applied to a small e-commerce system, and its feasibility and effectiveness are further demonstrated through experiments.

  6. Sediment sorting along tidal sand waves: A comparison between field observations and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Van Oyen, Tomas; Blondeaux, Paolo; Van den Eynde, Dries

    2013-07-01

    A site-by-site comparison between field observations and theoretical predictions of sediment sorting patterns along tidal sand waves is performed for ten locations in the North Sea. At each site, the observed grain size distribution along the bottom topography and the geometry of the bed forms are described in detail, and the procedure used to obtain the model parameters is summarized. The model appears to accurately describe the wavelength of the observed sand waves for the majority of the locations, while still providing a reliable estimate for the other sites. In addition, it is found that for seven out of the ten locations, the qualitative sorting process provided by the model agrees with the observed grain size distribution. A discussion of the site-by-site comparison is provided which, taking into account uncertainties in the field data, indicates that the model captures the major part of the key processes controlling the phenomenon.

  7. Observation procedure, observer gender, and behavior valence as determinants of sampling error in a behavior assessment analogue

    PubMed Central

    Farkas, Gary M.; Tharp, Roland G.

    1980-01-01

    Several factors thought to influence the representativeness of behavioral assessment data were examined in an analogue study using a multifactorial design. Systematic and unsystematic methods of observing group behavior were investigated using 18 male and 18 female observers. Additionally, valence properties of the observed behaviors were inspected. Observers' assessments of a videotape were compared to a criterion code that defined the population of behaviors. Results indicated that systematic observation procedures were more accurate than unsystematic procedures, though this factor interacted with gender of observer and valence of behavior. Additionally, males tended to sample more representatively than females. A third finding indicated that the negatively valenced behavior was overestimated, whereas the neutral and positively valenced behaviors were accurately assessed. PMID:16795631

  8. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, Charles V.; Killian, E. Wayne; Grafwallner, Ervin G.; Kynaston, Ronnie L.; Johnson, Larry O.; Randolph, Peter D.

    1996-01-01

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector.

  9. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, C.V.; Killian, E.W.; Grafwallner, E.G.; Kynaston, R.L.; Johnson, L.O.; Randolph, P.D.

    1996-09-03

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector. 7 figs.

  10. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

    Army Research Laboratory technical report ARL-TR-6761, by Stephen C. Tratz, Computational and Information Sciences, January 2014. Approved for public release.

  11. Are Registration of Disease Codes for Adult Anaphylaxis Accurate in the Emergency Department?

    PubMed Central

    Choi, Byungho; Lee, Hyeji

    2018-01-01

    Purpose There has been active research on anaphylaxis, but many study subjects are limited to patients registered with anaphylaxis codes. However, anaphylaxis codes tend to be underused. The aim of this study was to investigate the accuracy of anaphylaxis code registration and the clinical characteristics of accurate and inaccurate anaphylaxis registration in anaphylactic patients. Methods This retrospective study evaluated the medical records of adult patients who visited the university hospital emergency department between 2012 and 2016. The study subjects were divided into the groups with accurate and inaccurate anaphylaxis codes registered under anaphylaxis and other allergy-related codes and symptom-related codes, respectively. Results Among 211,486 patients, 618 (0.29%) had anaphylaxis. Of these, 161 and 457 were assigned to the accurate and inaccurate coding groups, respectively. The average age, transportation to the emergency department, past anaphylaxis history, cancer history, and the cause of anaphylaxis differed between the 2 groups. Cutaneous symptoms manifested more frequently in the inaccurate coding group, while cardiovascular and neurologic symptoms were more frequently observed in the accurate group. Severe symptoms and non-alert consciousness were more common in the accurate group. Oxygen supply, intubation, and epinephrine were more commonly used as treatments for anaphylaxis in the accurate group. Anaphylactic patients with cardiovascular symptoms, severe symptoms, and epinephrine use were more likely to be accurately registered with anaphylaxis disease codes. Conclusions In case of anaphylaxis, more patients were registered inaccurately under other allergy-related codes and symptom-related codes rather than accurately under anaphylaxis disease codes. Cardiovascular symptoms, severe symptoms, and epinephrine treatment were factors associated with accurate registration with anaphylaxis disease codes in patients with anaphylaxis. PMID:29411554

  12. Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.

    2012-01-01

    The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that low signal-to-noise ratios of the oscilloscopes used, in combination with the high dynamic range of the current spikes, make the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
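
    A sketch of the Monitor Capacitor bookkeeping with synthetic waveforms (the capacitances, drive voltage, and loss angle are assumed values, and none of the authors' hardware or compression circuitry is represented): the charge Q = C_mon·V_mon plotted against the applied voltage traces a Lissajous loop whose enclosed area is the energy dissipated per cycle, so average power is that area times the driving frequency.

```python
import numpy as np

def lissajous_power(v_applied, v_monitor, c_monitor, frequency_hz):
    """Actuator power from the area of the charge-voltage (Lissajous) loop,
    computed with the shoelace formula over ONE full period of samples."""
    q = c_monitor * np.asarray(v_monitor)      # charge that flowed through the actuator
    v = np.asarray(v_applied)
    area = 0.5 * np.abs(np.dot(v, np.roll(q, -1)) - np.dot(q, np.roll(v, -1)))
    return area * frequency_hz                 # (J per cycle) * (cycles per s) = W

if __name__ == "__main__":
    # Synthetic single period: a 20 pF actuator behind a 47 nF monitor capacitor,
    # 8 kV peak drive at 5 kHz, with a small loss angle so the loop encloses area.
    f = 5e3
    t = np.linspace(0.0, 1.0 / f, 4000, endpoint=False)
    v_app = 8e3 * np.sin(2 * np.pi * f * t)
    v_mon = (20e-12 / 47e-9) * 8e3 * np.sin(2 * np.pi * f * t - 0.05)
    print("estimated power:", round(lissajous_power(v_app, v_mon, 47e-9, f), 3), "W")
```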

  13. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    A new time accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M ≤ 10^-10 to supersonic speeds.

  14. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  15. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot and field scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The approach of our study was that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  16. Accurate ab initio Quartic Force Fields of Cyclic and Bent HC2N Isomers

    NASA Technical Reports Server (NTRS)

    Inostroza, Natalia; Huang, Xinchuan; Lee, Timothy J.

    2012-01-01

    Highly correlated ab initio quartic force fields (QFFs) are used to calculate the equilibrium structures and predict the spectroscopic parameters of three HC2N isomers. Specifically, the ground state quasilinear triplet and the lowest cyclic and bent singlet isomers are included in the present study. Extensive treatment of correlation effects was included using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, denoted CCSD(T). Dunning's correlation-consistent basis sets cc-pVXZ, X=3,4,5, were used, and a three-point formula for extrapolation to the one-particle basis set limit was used. Core-correlation and scalar relativistic corrections were also included to yield highly accurate QFFs. The QFFs were used together with second-order perturbation theory (with proper treatment of Fermi resonances) and variational methods to solve the nuclear Schrödinger equation. The quasilinear nature of the triplet isomer is problematic, and it is concluded that a QFF is not adequate to describe properly all of the fundamental vibrational frequencies and spectroscopic constants (though some constants not dependent on the bending motion are well reproduced by perturbation theory). On the other hand, this procedure (a QFF together with either perturbation theory or variational methods) leads to highly accurate fundamental vibrational frequencies and spectroscopic constants for the cyclic and bent singlet isomers of HC2N. All three isomers possess significant dipole moments, 3.05D, 3.06D, and 1.71D, for the quasilinear triplet, the cyclic singlet, and the bent singlet isomers, respectively. It is concluded that the spectroscopic constants determined for the cyclic and bent singlet isomers are the most accurate available, and it is hoped that these will be useful in the interpretation of high-resolution astronomical observations or laboratory experiments.

  17. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yield a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between two wavelength signals caused by the chromatic dispersion of the fiber in conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this difference in delay and then show that the accuracy of the delay measurements can be obtained below 0.1 ns by transmitting 156 Mb/s time reference signals of 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. The sub-nanosecond delay measurement using the simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.
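
    The size of the effect being measured can be estimated from the fibre's chromatic dispersion with Δτ ≈ D·L·Δλ for small wavelength spacing; the dispersion coefficient below is an assumed textbook value for standard single-mode fibre near 1.55 µm, and the widely spaced 1.31/1.55 µm case would require integrating the dispersion curve instead.

```python
# Delay difference accumulated between two closely spaced wavelengths over a fibre span:
#   delta_tau ~ D * L * delta_lambda
D_PS_PER_NM_KM = 17.0   # assumed dispersion of standard SMF near 1.55 um, ps/(nm km)

def delay_difference_ps(length_km, wavelength_spacing_nm, d=D_PS_PER_NM_KM):
    """First-order chromatic-dispersion delay difference in picoseconds."""
    return d * length_km * wavelength_spacing_nm

if __name__ == "__main__":
    # 1 nm spacing in the 1.55 um band over a 100 km link:
    print(delay_difference_ps(100.0, 1.0), "ps")   # 1700 ps, i.e. about 1.7 ns
```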

  18. Stable and Spectrally Accurate Schemes for the Navier-Stokes Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, Jun; Liu, Jie

    2011-01-01

    In this paper, we present an accurate, efficient and stable numerical method for the incompressible Navier-Stokes equations (NSEs). The method is based on (1) an equivalent pressure Poisson equation formulation of the NSE with proper pressure boundary conditions, which facilitates the design of high-order and stable numerical methods, and (2) the Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T), which is very stable, efficient, and of arbitrary order in time. Numerical tests with known exact solutions in three dimensions show that the new method is spectrally accurate in time, and a numerical order of convergence 9more » was observed. Two-dimensional computational results of flow past a cylinder and flow in a bifurcated tube are also reported.« less

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  3. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  4. Comparisons of GLM and LMA Observations

    NASA Astrophysics Data System (ADS)

    Thomas, R. J.; Krehbiel, P. R.; Rison, W.; Stanley, M. A.; Attanasio, A.

    2017-12-01

    Observations from 3-dimensional VHF lightning mapping arrays (LMAs) provide a valuable basis for evaluating the spatial accuracy and detection efficiencies of observations from the recently launched, optical-based Geostationary Lightning Mapper (GLM). In this presentation, we describe results of comparing the LMA and GLM observations. First, the observations are compared spatially and temporally at the individual event (pixel) level for sets of individual discharges. For LMA networks in Florida, Colorado, and Oklahoma, the GLM observations are well correlated time-wise with LMA observations but are systematically offset by one to two pixels (~10 to 15 or 20 km) in a southwesterly direction from the actual lightning activity. The graphical comparisons show a similar location uncertainty depending on the altitude at which the scattered light is emitted from the parent cloud, due to being observed at slant ranges. Detection efficiencies (DEs) can be accurately determined graphically for intervals where individual flashes in a storm are resolved time-wise, and DEs and false alarm rates can be automated using flash sorting algorithms for overall and/or larger storms. This can be done as a function of flash size and duration, and generally shows high detection rates for larger flashes. Preliminary results during the May 1, 2017 ER-2 overflight of Colorado storms indicate decreased detection efficiency if the storm is obscured by an overlying cloud layer.
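
    The altitude-dependent part of the offset is a parallax effect: light leaving a cloud top at height h and viewed at a slant appears displaced along the ground by roughly h·tan(viewing zenith angle). The numbers below are assumed round values, not measurements from the study, but they reproduce the reported one-to-two-pixel scale.

```python
import math

def parallax_offset_km(cloud_top_km, viewing_zenith_deg):
    """First-order ground displacement of an elevated light source seen at a slant."""
    return cloud_top_km * math.tan(math.radians(viewing_zenith_deg))

if __name__ == "__main__":
    # A 10-15 km cloud top viewed from geostationary orbit at mid-latitudes
    # (viewing zenith of roughly 50 degrees) shifts by roughly 12-18 km.
    for h in (10.0, 15.0):
        print(h, "km cloud top ->", round(parallax_offset_km(h, 50.0), 1), "km offset")
```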

  5. Patient-specific distal radius locking plate for fixation and accurate 3D positioning in corrective osteotomy.

    PubMed

    Dobbe, J G G; Vroemen, J C; Strackee, S D; Streekstra, G J

    2014-11-01

    Preoperative three-dimensional planning methods have been described extensively. However, transferring the virtual plan to the patient is often challenging. In this report, we describe the management of a severely malunited distal radius fracture using a patient-specific plate for accurate spatial positioning and fixation. Twenty months postoperatively the patient shows almost painless reconstruction and a nearly normal range of motion.

  6. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  7. The Rényi divergence enables accurate and precise cluster analysis for localisation microscopy.

    PubMed

    Staszowska, Adela D; Fox-Roberts, Patrick; Hirvonen, Liisa M; Peddie, Christopher J; Collinson, Lucy M; Jones, Gareth E; Cox, Susan

    2018-06-01

    Clustering analysis is a key technique for quantitatively characterising structures in localisation microscopy images. To build up accurate information about biological structures, it is critical that the quantification is both accurate (close to the ground truth) and precise (has small scatter and is reproducible). Here we describe how the Rényi divergence can be used for cluster radius measurements in localisation microscopy data. We demonstrate that the Rényi divergence can operate with high levels of background and provides results which are more accurate than Ripley's functions, Voronoi tessellation or DBSCAN. Data supporting this research will be made accessible via a web link. Software codes developed for this work can be accessed via http://coxphysics.com/Renyi_divergence_software.zip. Implemented in C++. Correspondence and requests for materials can also be addressed to the corresponding author: adela.staszowska@gmail.com or susan.cox@kcl.ac.uk. Supplementary data are available at Bioinformatics online.
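
    The underlying quantity is the Rényi divergence of order α between two distributions, D_α(P‖Q) = (α−1)⁻¹·ln Σᵢ pᵢ^α qᵢ^(1−α). The cluster-radius analysis applies it to spatial point patterns at varying scales; the binned sketch below only illustrates the core computation and is not the authors' C++ implementation.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha (alpha > 0, alpha != 1) between two
    discrete distributions given as histograms over the same bins."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0                                   # bins with p_i = 0 contribute nothing
    return np.log(np.sum(p[mask] ** alpha * q[mask] ** (1.0 - alpha))) / (alpha - 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # 1-D toy data: a clustered localisation pattern vs. a uniform background.
    # The +1 pseudo-count keeps every bin positive so the divergence stays finite.
    clustered = np.histogram(rng.normal(0.0, 0.05, 2000), bins=50, range=(-1, 1))[0] + 1
    uniform = np.histogram(rng.uniform(-1.0, 1.0, 2000), bins=50, range=(-1, 1))[0] + 1
    for a in (0.5, 2.0, 5.0):
        print("alpha =", a, " D =", round(renyi_divergence(clustered, uniform, a), 3))
```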

  8. Accurate monoenergetic electron parameters of laser wakefield in a bubble model

    NASA Astrophysics Data System (ADS)

    Raheli, A.; Rahmatallahpur, S. H.

    2012-11-01

    A reliable analytical expression for the potential of plasma waves with phase velocities near the speed of light is derived. The presented spheroid cavity model is more consistent than the previous spherical and ellipsoidal model and it explains the mono-energetic electron trajectory more accurately, especially at the relativistic region. As a result, the quasi-mono-energetic electrons output beam interacting with the laser plasma can be more appropriately described with this model.

  9. Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of Junctionless double gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using the Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of Ids-Vgs curves between the long channel and short-channel devices. It is observed that the Green's function approach of solving 2-D Poisson's equation in both oxide and silicon region can accurately predict channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long & short channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratio. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results. It is observed that the model matches quite well with TCAD device simulations.

  10. Vibration Control Using a State Observer that Considers Disturbances of a Golf Swing Robot

    NASA Astrophysics Data System (ADS)

    Hoshino, Yohei; Kobayashi, Yukinori; Yamada, Gen

    In this paper, optimal control of a golf swing robot that is used to evaluate the performance of golf clubs is described. The robot has two joints, a rigid link and a flexible link that is a golf club. A mathematical model of the golf club is derived by Hamilton’s principle in consideration of bending and torsional stiffness and in consideration of eccentricity of the center of gravity of the club head on the shaft axis. A linear quadratic regulator (LQR) that considers the vibration of the club shaft is used to stop the robot during the follow-through. Since the robot moves fast and has strong non-linearity, an ordinary state observer for a linear system cannot accurately estimate the states of the system. A state observer that considers disturbances accurately estimates the state variables that cannot be measured. The results of numerical simulation are compared with experimental results obtained by using a swing robot.
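
    The LQR part can be sketched with a generic linearized single-joint model; the plant matrices, weights, and numerical values below are illustrative placeholders rather than the robot's identified dynamics, and the disturbance-accommodating observer described in the paper is omitted.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the Riccati equation and return K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

if __name__ == "__main__":
    # Toy linearized single-joint model: state = [angle, angular rate].
    I_joint, damping = 0.5, 0.05          # assumed inertia (kg m^2) and viscous damping
    A = np.array([[0.0, 1.0], [0.0, -damping / I_joint]])
    B = np.array([[0.0], [1.0 / I_joint]])
    Q = np.diag([100.0, 1.0])             # penalize angle error more than rate
    R = np.array([[0.01]])                # control-effort weight
    K = lqr_gain(A, B, Q, R)
    closed_loop_poles = np.linalg.eigvals(A - B @ K)
    print("K =", np.round(K, 3))
    print("closed-loop poles:", np.round(closed_loop_poles, 3))
```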

  11. Describing the evolution of mobile technology usage for Latino patients and comparing findings to national mHealth estimates

    PubMed Central

    Ford, Kelsey; Terp, Sophie; Abramson, Tiffany; Ruiz, Ryan; Camilon, Marissa; Coyne, Christopher J; Lam, Chun Nok; Menchine, Michael; Burner, Elizabeth

    2016-01-01

    Objectives Describe the change in mobile technology used by an urban Latino population between 2011 and 2014, and compare findings with national estimates. Materials and Methods Patients were surveyed on medical history and mobile technology use. We analyzed specific areas of mobile health capacity stratified by chronic disease, age, language preference, and educational attainment. Results Of 2144 Latino patients, the percentage that owned a cell phone and texted was in line with Pew estimates, but app usage was not. Patients with chronic disease had reduced access to mobile devices (P < .001) and lower use of mobile phone functionalities. Discussion Prior research suggests that Latinos can access mHealth; however, we observed lower rates among Latino patients actively seeking health care. Conclusion Published national estimates do not accurately reflect the mobile technology use of Latino patients served by our public safety-net facility. The difference is greater for older, less educated patients with chronic disease. PMID:26995564

  12. Cardiac vagal flexibility and accurate personality impressions: Examining a physiological correlate of the good judge.

    PubMed

    Human, Lauren J; Mendes, Wendy Berry

    2018-02-23

    Research has long sought to identify which individuals are best at accurately perceiving others' personalities or are good judges, yet consistent predictors of this ability have been difficult to find. In the current studies, we revisit this question by examining a novel physiological correlate of social sensitivity, cardiac vagal flexibility, which reflects dynamic modulation of cardiac vagal control. We examined whether greater cardiac vagal flexibility was associated with forming more accurate personality impressions, defined as viewing targets more in line with their distinctive self-reported profile of traits, in two studies, including a thin-slice video perceptions study (N = 109) and a dyadic interaction study (N = 175). Across studies, we found that individuals higher in vagal flexibility formed significantly more accurate first impressions of others' more observable personality traits (e.g., extraversion, creativity, warmth). These associations held while including a range of relevant covariates, including cardiac vagal tone, sympathetic activation, and gender. In sum, social sensitivity as indexed by cardiac vagal flexibility is linked to forming more accurate impressions of others' observable traits, shedding light on a characteristic that may help to identify the elusive good judge and providing insight into its neurobiological underpinnings. © 2018 Wiley Periodicals, Inc.

  13. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis from which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous, conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand-design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, which are rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with the integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high-resolution images of nearby galaxies.

  14. Third-order accurate conservative method on unstructured meshes for gasdynamic simulations

    NASA Astrophysics Data System (ADS)

    Shirobokov, D. A.

    2017-04-01

    A third-order accurate finite-volume method on unstructured meshes is proposed for solving viscous gasdynamic problems. The method is described as applied to the advection equation. The accuracy of the method is verified by computing the evolution of a vortex on meshes of various degrees of detail with variously shaped cells. Additionally, unsteady flows around a cylinder and a symmetric airfoil are computed. The numerical results are presented in the form of plots and tables.

  15. Space Station Freedom - Optimized to support microgravity research and earth observations

    NASA Technical Reports Server (NTRS)

    Bilardo, Vincent J., Jr.; Herman, Daniel J.

    1990-01-01

    The Space Station Freedom Program is reviewed, with particular attention given to the Space Station configuration, program element descriptions, and utilization accommodations. Since plans call for the assembly of the initial SSF configuration over a 3-year time span, it is NASA's intention to perform useful research on it during the assembly process. The research will include microgravity experiments and observational sciences. The specific attributes supporting this research are described, such as maintenance of a very low microgravity level and continuous orientation of the vehicle to maintain a stable, accurate local-vertical/local-horizontal attitude.

  16. Suitability of parametric models to describe the hydraulic properties of an unsaturated coarse sand and gravel

    USGS Publications Warehouse

    Mace, Andy; Rudolph, David L.; Kachanoski, R. Gary

    1998-01-01

    The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous-cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, under seven unsaturated and one saturated steady-state flow condition. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem (m = 1 - 1/n) criterion is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ≈ 0, rather than model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
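
    For readers unfamiliar with the models named in this record, a minimal sketch of the van Genuchten (1980) retention curve under the Mualem restriction m = 1 - 1/n, and of the Mualem (1976)-based K(θ) prediction derived from it, is given below. The parameter values are illustrative placeholders, not the fitted values from this study.

```python
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) retention curve with the Mualem restriction m = 1 - 1/n.
    h is the (positive) suction head."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def vg_mualem_K(theta, theta_r, theta_s, n, K_s):
    """Unsaturated hydraulic conductivity predicted from the retention parameters
    via Mualem (1976), as in van Genuchten (1980)."""
    m = 1.0 - 1.0 / n
    Se = (theta - theta_r) / (theta_s - theta_r)
    return K_s * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Example with illustrative (not fitted) parameters for a coarse sand:
h = np.logspace(-1, 2, 5)                        # suction heads [cm]
theta = vg_theta(h, 0.02, 0.35, 0.05, 3.0)
K = vg_mualem_K(theta, 0.02, 0.35, 3.0, 1e-2)    # K_s in cm/s, illustrative
```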

  17. Describing dengue epidemics: Insights from simple mechanistic models

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2012-09-01

    We present a set of nested models to be applied to dengue fever epidemiology. We perform a qualitative study in order to show how much complexity we really need to add to epidemiological models to be able to describe the fluctuations observed in empirical dengue hemorrhagic fever incidence data, offering a promising perspective on the inference of parameter values from dengue case notifications.

  18. Examining ERP correlates of recognition memory: Evidence of accurate source recognition without recollection

    PubMed Central

    Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.

    2012-01-01

    Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event-related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early-onsetting FN400 effect, and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high confidence item recognition response; accurate source judgments to items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808

  19. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even with a low-resolution facial image size (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.

  20. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time-accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux-difference splitting. A high-order-accurate (up to third-order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that uses identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and straightforward procedure, within the framework of the finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.
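
    The inviscid discretization referenced here is Roe's (1981) flux-difference splitting. In its standard form (quoted for reference, independent of this paper's particular implementation), the numerical flux at a cell interface is

```latex
\mathbf{F}_{i+1/2} \;=\; \tfrac{1}{2}\bigl[\mathbf{F}(\mathbf{U}_L)+\mathbf{F}(\mathbf{U}_R)\bigr]
  \;-\; \tfrac{1}{2}\,\bigl|\tilde{\mathbf{A}}(\mathbf{U}_L,\mathbf{U}_R)\bigr|\,(\mathbf{U}_R-\mathbf{U}_L),
```

    where the Roe-averaged Jacobian satisfies \tilde{\mathbf{A}}(\mathbf{U}_R-\mathbf{U}_L)=\mathbf{F}(\mathbf{U}_R)-\mathbf{F}(\mathbf{U}_L), and |\tilde{\mathbf{A}}| = R|\Lambda|R^{-1} is assembled from its eigen-decomposition.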

  1. An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2018-01-01

    In this study, we propose a regional ionospheric model (RIM) based on both GPS-only and combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in the zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bilinear function parameters for each 15-minute time interval. The Kriging interpolation method is used to estimate the VTEC values on a 1° × 1° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained through the proposed RIMs are computed and compared with those obtained through the International GNSS Service global ionospheric maps (IGS-GIM). The results show that the RIMs speed up the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
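
    As a rough illustration of the least-squares step described above (the coefficient layout and variable names are assumptions, and the subsequent Kriging interpolation is omitted), a bilinear VTEC model can be fitted to one 15-minute batch of pierce-point observations and evaluated on a regular grid:

```python
import numpy as np

def fit_bilinear_vtec(dlat, dlon, vtec_obs):
    """Least-squares fit of VTEC(dlat, dlon) = a0 + a1*dlat + a2*dlon
    over one 15-minute batch of pierce-point observations."""
    A = np.column_stack([np.ones_like(dlat), dlat, dlon])
    coeffs, *_ = np.linalg.lstsq(A, vtec_obs, rcond=None)
    return coeffs                                   # a0, a1, a2

def eval_vtec_grid(coeffs, lat_grid, lon_grid, lat0, lon0):
    """Evaluate the fitted bilinear model on a regular 1 deg x 1 deg grid
    centered on the reference point (lat0, lon0)."""
    a0, a1, a2 = coeffs
    dlat = lat_grid[:, None] - lat0
    dlon = lon_grid[None, :] - lon0
    return a0 + a1 * dlat + a2 * dlon
```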

  2. A safe and accurate method to perform esthetic mandibular contouring surgery for Far Eastern Asians.

    PubMed

    Hsieh, A M-C; Huon, L-K; Jiang, H-R; Liu, S Y-C

    2017-05-01

    A tapered mandibular contour is popular with Far Eastern Asians. This study describes a safe and accurate method of using preoperative virtual surgical planning (VSP) and an intraoperative ostectomy guide to maximize the esthetic outcomes of mandibular symmetry and tapering while mitigating injury to the inferior alveolar nerve (IAN). Twelve subjects with chief complaints of a wide and square lower face underwent this protocol from January to June 2015. VSP was used to confirm symmetry and preserve the IAN while maximizing the surgeon's ability to taper the lower face via mandibular inferior border ostectomy. The accuracy of this method was confirmed by superimposition of the perioperative computed tomography scans in all subjects. No subjects complained of prolonged paresthesia after 3 months. A safe and accurate protocol for achieving an esthetic lower face in indicated Far Eastern individuals is described. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  3. FragBag, an accurate representation of protein structure, retrieves structural neighbors from the entire PDB quickly and accurately.

    PubMed

    Budowski-Tal, Inbal; Nov, Yuval; Kolodny, Rachel

    2010-02-23

    Fast identification of protein structures that are similar to a specified query structure in the entire Protein Data Bank (PDB) is fundamental in structure and function prediction. We present FragBag, an ultrafast and accurate method for comparing protein structures. We describe a protein structure by the collection of its overlapping short contiguous backbone segments, and discretize this set using a library of fragments. Then, we succinctly represent the protein as a "bag-of-fragments", a vector that counts the number of occurrences of each fragment, and measure the similarity between two structures by the similarity between their vectors. Our representation has two additional benefits: (i) it can be used to construct an inverted index, for implementing a fast structural search engine of the entire PDB, and (ii) one can specify a structure as a collection of substructures, without combining them into a single structure; this is valuable for structure prediction, when there are reliable predictions only of parts of the protein. We use receiver operating characteristic curve analysis to quantify the success of FragBag in identifying neighbor candidate sets in a dataset of over 2,900 structures. The gold standard is the set of neighbors found by six state-of-the-art structural aligners. Our best FragBag library finds more accurate candidate sets than three other filter methods: the SGM, PRIDE, and a method by Zotenko et al. More interestingly, FragBag performs on a par with the computationally expensive, yet highly trusted structural aligners STRUCTAL and CE.
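
    A minimal sketch of the bag-of-fragments idea is given below: assign each overlapping backbone segment to its nearest library fragment, accumulate the counts, and compare two proteins by the similarity of their count vectors. The nearest-fragment assignment and cosine similarity used here are illustrative choices, not necessarily the exact fitting and similarity measures used by FragBag.

```python
import numpy as np

def fragbag_vector(segments, library):
    """Count, for each library fragment, how many of the protein's overlapping
    backbone segments are closest to it (segments and library entries are
    flattened coordinate arrays of equal length)."""
    counts = np.zeros(len(library))
    for seg in segments:
        dists = [np.linalg.norm(seg - frag) for frag in library]
        counts[int(np.argmin(dists))] += 1
    return counts

def cosine_similarity(u, v):
    """Similarity between two bag-of-fragments vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```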

  4. Zemax simulations describing collective effects in transition and diffraction radiation.

    PubMed

    Bisesto, F G; Castellano, M; Chiadroni, E; Cianchi, A

    2018-02-19

    Transition and diffraction radiation from charged particles is commonly used for diagnostic purposes in accelerator facilities, as well as for THz sources in spectroscopy applications. Therefore, an accurate analysis of the emission process and the transport optics is crucial to properly characterize the source and precisely retrieve beam parameters. In this regard, we have developed a new algorithm, based on Zemax, to simulate both transition and diffraction radiation as generated by relativistic electron bunches, thereby considering collective effects. In particular, unlike previous works, we take into account the electron beam's physical size and transverse momentum, reproducing effects visible in the produced radiation that are not observable in a single-electron analysis. The simulation results have been compared with two experiments, showing excellent agreement.

  5. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…

  6. Babylonian observations

    NASA Astrophysics Data System (ADS)

    Brown, D.

    Very few cuneiform records survive from Mesopotamia of datable astronomical observations made prior to the mid-eighth century BC. Those that do survive record occasional eclipses, and, in one isolated case, the dates of the heliacal rising and setting of Venus over a few years sometime in the first half of the second millennium BC. After the mid-eighth century BC the situation changes dramatically. Incomplete records of daily observations of astronomical and meteorological events are preserved from c. 747 BC until the Christian Period. These records are without accompanying ominous interpretation, although it is highly probable that they were compiled by diviners for astrological purposes. They include numerous observations of use to historical astronomers, such as the times of eclipses and occultations, and the dates of comet appearances and meteor showers. The question arises as to why such records do not survive from earlier times; celestial divination was employed as far back as the third millennium BC. It is surely not without importance that the earliest known accurate astronomical predictions accompany the later records, and that the mid-eighth century BC ushered in a period of centralised Assyrian control of Mesopotamia and the concomitant employment by the Assyrian ruler of large numbers of professional celestial diviners. The programme of daily observations evidently began when a high premium was first set on the accurate astronomical prediction of ominous events. It is in this light that we must approach this valuable source material for historical astronomy.

  7. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information brings negative effects, especially when the information is delayed. With accurate information, travelers all prefer the route in the best reported condition, while delayed information reflects past rather than current traffic conditions; travelers therefore make wrong routing decisions, which reduces capacity, increases oscillations, and drives the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, both routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
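
    The boundedly rational choice rule stated in this record reduces to a few lines; the sketch below is an illustration with hypothetical variable names, not the authors' simulation code.

```python
import random

def choose_route(info_route_a, info_route_b, br_threshold):
    """Boundedly rational two-route choice: indifferent below the threshold BR.
    A lower 'info' value means a better reported condition (e.g. shorter travel time)."""
    if abs(info_route_a - info_route_b) < br_threshold:
        return random.choice(["A", "B"])        # routes indistinguishable: pick at random
    return "A" if info_route_a < info_route_b else "B"
```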

  8. Cell-accurate optical mapping across the entire developing heart.

    PubMed

    Weber, Michael; Scherf, Nico; Meyer, Alexander M; Panáková, Daniela; Kohl, Peter; Huisken, Jan

    2017-12-29

    Organogenesis depends on orchestrated interactions between individual cells and morphogenetically relevant cues at the tissue level. This is true for the heart, whose function critically relies on well-ordered communication between neighboring cells, which is established and fine-tuned during embryonic development. For an integrated understanding of the development of structure and function, we need to move from isolated snapshot observations of either microscopic or macroscopic parameters to simultaneous and, ideally continuous, cell-to-organ scale imaging. We introduce cell-accurate three-dimensional Ca2+-mapping of all cells in the entire electro-mechanically uncoupled heart during the looping stage of live embryonic zebrafish, using high-speed light sheet microscopy and tailored image processing and analysis. We show how myocardial region-specific heterogeneity in cell function emerges during early development and how structural patterning goes hand-in-hand with functional maturation of the entire heart. Our method opens the way to systematic, scale-bridging, in vivo studies of vertebrate organogenesis by cell-accurate structure-function mapping across entire organs.

  9. Cell-accurate optical mapping across the entire developing heart

    PubMed Central

    Meyer, Alexander M; Panáková, Daniela; Kohl, Peter

    2017-01-01

    Organogenesis depends on orchestrated interactions between individual cells and morphogenetically relevant cues at the tissue level. This is true for the heart, whose function critically relies on well-ordered communication between neighboring cells, which is established and fine-tuned during embryonic development. For an integrated understanding of the development of structure and function, we need to move from isolated snap-shot observations of either microscopic or macroscopic parameters to simultaneous and, ideally continuous, cell-to-organ scale imaging. We introduce cell-accurate three-dimensional Ca2+-mapping of all cells in the entire electro-mechanically uncoupled heart during the looping stage of live embryonic zebrafish, using high-speed light sheet microscopy and tailored image processing and analysis. We show how myocardial region-specific heterogeneity in cell function emerges during early development and how structural patterning goes hand-in-hand with functional maturation of the entire heart. Our method opens the way to systematic, scale-bridging, in vivo studies of vertebrate organogenesis by cell-accurate structure-function mapping across entire organs. PMID:29286002

  10. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104

  11. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by $O(n!)$, exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  12. Accurate chemical master equation solution using multi-finite buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by $O(n!)$, exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  13. Limb-Enhancer Genie: An accessible resource of accurate enhancer predictions in the developing limb

    DOE PAGES

    Monti, Remo; Barozzi, Iros; Osterwalder, Marco; ...

    2017-08-21

    Epigenomic mapping of enhancer-associated chromatin modifications facilitates the genome-wide discovery of tissue-specific enhancers in vivo. However, reliance on single chromatin marks leads to high rates of false-positive predictions. More sophisticated, integrative methods have been described, but commonly suffer from limited accessibility to the resulting predictions and reduced biological interpretability. Here we present the Limb-Enhancer Genie (LEG), a collection of highly accurate, genome-wide predictions of enhancers in the developing limb, available through a user-friendly online interface. We predict limb enhancers using a combination of > 50 published limb-specific datasets and clusters of evolutionarily conserved transcription factor binding sites, taking advantage of the patterns observed at previously in vivo validated elements. By combining different statistical models, our approach outperforms current state-of-the-art methods and provides interpretable measures of feature importance. Our results indicate that including a previously unappreciated score that quantifies tissue-specific nuclease accessibility significantly improves prediction performance. We demonstrate the utility of our approach through in vivo validation of newly predicted elements. Moreover, we describe general features that can guide the type of datasets to include when predicting tissue-specific enhancers genome-wide, while providing an accessible resource to the general biological community and facilitating the functional interpretation of genetic studies of limb malformations.

  14. Describing Myxococcus xanthus Aggregation Using Ostwald Ripening Equations for Thin Liquid Films

    PubMed Central

    Bahar, Fatmagül; Pratt-Szeliga, Philip C.; Angus, Stuart; Guo, Jiaye; Welch, Roy D.

    2014-01-01

    When starved, a swarm of millions of Myxococcus xanthus cells coordinate their movement from outward swarming to inward coalescence. The cells then execute a synchronous program of multicellular development, arranging themselves into dome-shaped aggregates. Over the course of development, about half of the initial aggregates disappear, while others persist and mature into fruiting bodies. This work seeks to develop a quantitative model of aggregation that accurately simulates which aggregates will disappear and which will persist. We analyzed time-lapse movies of M. xanthus development, modeled aggregation using the equations that describe Ostwald ripening of droplets in thin liquid films, and predicted the disappearance and persistence of aggregates with an average accuracy of 85%. We then experimentally validated a prediction that is fundamental to this model by tracking individual fluorescent cells as they moved between aggregates and demonstrating that cell movement towards and away from aggregates correlates with aggregate disappearance. Describing development through this model may limit the number and type of molecular genetic signals needed to complete M. xanthus development, and it provides numerous additional testable predictions. PMID:25231319

  15. Development and Application of Learning Materials to Help Students Understand Ten Statements Describing the Nature of Scientific Observation

    ERIC Educational Resources Information Center

    Kim, Sangsoo; Park, Jongwon

    2018-01-01

    Observing scientific events or objects is a complex process that occurs through the interaction between the observer's knowledge or expectations, the surrounding context, physiological features of the human senses, scientific inquiry processes, and the use of observational instruments. Scientific observation has various features specific to this…

  16. Accurate Cell Division in Bacteria: How Does a Bacterium Know Where its Middle Is?

    NASA Astrophysics Data System (ADS)

    Howard, Martin; Rutenberg, Andrew

    2004-03-01

    I will discuss the physical principles underlying the acquisition of accurate positional information in bacteria. A good application of these ideas is to the rod-shaped bacterium E. coli, which divides precisely at its cellular midplane. This positioning is controlled by the Min system of proteins. These proteins coherently oscillate from end to end of the bacterium. I will present a reaction-diffusion model that describes the diffusion of the Min proteins and their binding/unbinding from the cell membrane. The system possesses an instability that spontaneously generates the Min oscillations, which control accurate placement of the midcell division site. I will then discuss the role of fluctuations in protein dynamics, and investigate whether fluctuations set optimal protein concentration levels. Finally I will examine cell division in a different bacterium, B. subtilis, where different physical principles are used to regulate accurate cell division. See: Howard, Rutenberg, de Vet: Dynamic compartmentalization of bacteria: accurate division in E. coli. Phys. Rev. Lett. 87 278102 (2001). Howard, Rutenberg: Pattern formation inside bacteria: fluctuations due to the low copy number of proteins. Phys. Rev. Lett. 90 128102 (2003). Howard: A mechanism for polar protein localization in bacteria. J. Mol. Biol. 335 655-663 (2004).

  17. Yield Estimation for Semipalatinsk Underground Nuclear Explosions Using Seismic Surface-wave Observations at Near-regional Distances

    NASA Astrophysics Data System (ADS)

    Adushkin, V. V.

    A statistical procedure is described for estimating the yields of underground nuclear tests at the former Soviet Semipalatinsk test site using the peak amplitudes of short-period surface waves observed at near-regional distances (Δ < 150 km) from these explosions. This methodology is then applied to data recorded from a large sample of the Semipalatinsk explosions, including the Soviet JVE explosion of September 14, 1988, and it is demonstrated that it provides seismic estimates of explosion yield which are typically within 20% of the yields determined for these same explosions using more accurate, non-seismic techniques based on near-source observations.

  18. Data Quality Assessment of In Situ and Altimeter Observations Through Two-Way Intercomparison Methods

    NASA Astrophysics Data System (ADS)

    Guinehut, Stephanie; Valladeau, Guillaume; Legeais, Jean-Francois; Rio, Marie-Helene; Ablain, Michael; Larnicol, Gilles

    2013-09-01

    This proceeding presents an overview of the two-way inter-comparison activities performed at CLS for both space and in situ observation agencies, and why this activity is a required step to obtain accurate and homogeneous data sets that can then be used together for climate studies or in assimilation/validation tools. We first describe the work performed in the frame of the SALP program to assess the stability of altimeter missions through SSH comparisons with tide gauges (GLOSS/CLIVAR network). Then, we show how the SSH comparison between the Argo array and altimeter time series allows the detection of drifts or jumps in altimeter data (SALP program) but also for some Argo floats (Ifremer/Coriolis center). Lastly, we describe how the combined use of altimeter and wind observations helps detect drogue loss of surface drifting buoys (GDP network) and allows the computation of a correction term for wind slippage.

  19. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about an explosive growth in the number of pharmaceuticals currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly based on the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by the WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy rate of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  20. Accurate LC Peak Boundary Detection for 16O/18O Labeled LC-MS Data

    PubMed Central

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang (SJ); Zhang, Jianqiu (Michelle)

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements. PMID:24115998

  1. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "Einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity function.
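
    The Einstein timescale discussed in this proposal is the Einstein radius in the lens plane divided by the relative transverse velocity. A minimal sketch using the standard point-lens expression for that radius, with illustrative (not proposal-specific) masses, distances, and velocity, is:

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
KPC = 3.086e19       # m
M_SUN = 1.989e30     # kg

def einstein_radius(m_lens, d_lens, d_source):
    """Einstein radius in the lens plane for a point lens:
    R_E = sqrt(4 G M D_l (D_s - D_l) / (c^2 D_s))."""
    return np.sqrt(4 * G * m_lens * d_lens * (d_source - d_lens) / (C**2 * d_source))

def einstein_timescale(m_lens, d_lens, d_source, v_rel):
    """Einstein-radius crossing time: lensing radius over relative transverse velocity."""
    return einstein_radius(m_lens, d_lens, d_source) / v_rel

# Illustrative numbers for M31 self-lensing (roughly 55 days for these inputs):
t_e = einstein_timescale(0.5 * M_SUN, 770 * KPC, 780 * KPC, 200e3)
print(t_e / 86400, "days")
```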

  2. Describing the evolution of mobile technology usage for Latino patients and comparing findings to national mHealth estimates.

    PubMed

    Arora, Sanjay; Ford, Kelsey; Terp, Sophie; Abramson, Tiffany; Ruiz, Ryan; Camilon, Marissa; Coyne, Christopher J; Lam, Chun Nok; Menchine, Michael; Burner, Elizabeth

    2016-09-01

    Describe the change in mobile technology used by an urban Latino population between 2011 and 2014, and compare findings with national estimates. Patients were surveyed on medical history and mobile technology use. We analyzed specific areas of mobile health capacity stratified by chronic disease, age, language preference, and educational attainment. Of 2144 Latino patients, the percentages that owned a cell phone and texted were in line with Pew estimates, but app usage was not. Patients with chronic disease had reduced access to mobile devices (P < .001) and lower use of mobile phone functionalities. Prior research suggests that Latinos can access mHealth; however, we observed lower rates among Latino patients actively seeking health care. Published national estimates do not accurately reflect the mobile technology use of Latino patients served by our public safety-net facility. The difference is greater for older, less educated patients with chronic disease. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Calibration uncertainty for Advanced LIGO's first and second observing runs

    NASA Astrophysics Data System (ADS)

    Cahillane, Craig; Betzwieser, Joe; Brown, Duncan A.; Goetz, Evan; Hall, Evan D.; Izumi, Kiwamu; Kandhasamy, Shivaraj; Karki, Sudarshan; Kissel, Jeff S.; Mendell, Greg; Savage, Richard L.; Tuyenbayev, Darkhan; Urban, Alex; Viets, Aaron; Wade, Madeline; Weinstein, Alan J.

    2017-11-01

    Calibration of the Advanced LIGO detectors is the quantification of the detectors' response to gravitational waves. Gravitational waves incident on the detectors cause phase shifts in the interferometer laser light which are read out as intensity fluctuations at the detector output. Understanding this detector response to gravitational waves is crucial to producing accurate and precise gravitational wave strain data. Estimates of binary black hole and neutron star parameters and tests of general relativity require well-calibrated data, as miscalibrations will lead to biased results. We describe the method of producing calibration uncertainty estimates for both LIGO detectors in the first and second observing runs.

  4. On canonical cylinder sections for accurate determination of contact angle in microgravity

    NASA Technical Reports Server (NTRS)

    Concus, Paul; Finn, Robert; Zabihi, Farhad

    1992-01-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  5. Observations of Near-Surface Current Shear Help Describe Oceanic Oil and Plastic Transport

    NASA Astrophysics Data System (ADS)

    Laxague, Nathan J. M.; Özgökmen, Tamay M.; Haus, Brian K.; Novelli, Guillaume; Shcherbina, Andrey; Sutherland, Peter; Guigand, Cédric M.; Lund, Björn; Mehta, Sanchit; Alday, Matias; Molemaker, Jeroen

    2018-01-01

    Plastics and spilled oil pose a critical threat to marine life and human health. As a result of wind forcing and wave motions, theoretical and laboratory studies predict very strong velocity variation with depth over the upper few centimeters of the water column, an observational blind spot in the real ocean. Here we present the first-ever ocean measurements of the current vector profile defined to within 1 cm of the free surface. In our illustrative example, the current magnitude averaged over the upper 1 cm of the ocean is shown to be nearly four times the average over the upper 10 m, even for mild forcing. Our findings indicate that this shear will rapidly separate pieces of marine debris which vary in size or buoyancy, making consideration of these dynamics essential to an improved understanding of the pathways along which marine plastics and oil are transported.

  6. Transition between Two Regimes Describing Internal Fluctuation of DNA in a Nanochannel

    PubMed Central

    Su, Tianxiang; Das, Somes K.; Xiao, Ming; Purohit, Prashant K.

    2011-01-01

    We measure the thermal fluctuation of the internal segments of a piece of DNA confined in a nanochannel about 50–100 nm wide. This local thermodynamic property is key to accurate measurement of distances in genomic analysis. For DNA in 100 nm channels, we observe a critical length scale of about 10 μm for the mean extension of internal segments, below which de Gennes' theory describes the fluctuations with no fitting parameters, and above which the fluctuation data falls into Odijk's deflection theory regime. By analyzing the probability distributions of the extensions of the internal segments, we infer that folded structures of length 150–250 nm, separated by about 10 μm, exist in the confined DNA during the transition between the two regimes. For 50 nm channels we find that the fluctuation is significantly reduced since the Odijk regime appears earlier. This is critical for genomic analysis. We further propose a more detailed theory based on small fluctuations and incorporating the effects of confinement to explicitly calculate the statistical properties of the internal fluctuations. Our theory is applicable to polymers with heterogeneous mechanical properties confined in non-uniform channels. We show that existing theories for the end-to-end extension/fluctuation of polymers can be used to study the internal fluctuations only when the contour length of the polymer is many times larger than its persistence length. Finally, our results suggest that introducing nicks in the DNA will not change its fluctuation behavior when the nick density is below 1 nick per kbp DNA. PMID:21423606

  7. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
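
    As an illustration of variance-based targeting in the simplest setting (a plain Kalman update with a single scalar observation, not the LETKF configuration used in the study), one may pick the state component with the largest prior variance and assimilate an observation of it; all names below are hypothetical.

```python
import numpy as np

def targeted_kalman_update(x_prior, P_prior, truth, obs_var):
    """Observe the single state component with the largest prior variance, then
    perform a standard Kalman update with that scalar observation."""
    i = int(np.argmax(np.diag(P_prior)))           # targeted observation location
    H = np.zeros((1, x_prior.size)); H[0, i] = 1.0
    y = truth[i] + np.random.normal(0.0, np.sqrt(obs_var))
    S = H @ P_prior @ H.T + obs_var                # innovation variance (1x1)
    K = P_prior @ H.T / S                          # Kalman gain (n x 1)
    x_post = x_prior + (K * (y - H @ x_prior)).ravel()
    P_post = (np.eye(x_prior.size) - K @ H) @ P_prior
    return x_post, P_post
```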

  8. Using Combined Marine Spatial Planning Tools and Observing System Experiments to define Gaps in the Emerging European Ocean Observing System.

    NASA Astrophysics Data System (ADS)

    Nolan, G.; Pinardi, N.; Vukicevic, T.; Le Traon, P. Y.; Fernandez, V.

    2016-02-01

    Ocean observations are critical to providing accurate ocean forecasts that support operational decision making in European open and coastal seas. Observations are available in many forms: from fixed platforms (e.g., moored buoys and tide gauges), underway measurements from Ferrybox systems, high-frequency (HF) radars, and, more recently, underwater gliders and profiling floats. Observing system simulation experiments have been conducted to examine the relative contribution of each type of platform to an improvement in our ability to accurately forecast the future state of the ocean, with HF radar and gliders showing particular promise in improving model skill. There is considerable demand for ecosystem products and services from today's ocean observing system, yet biogeochemical observations are still relatively sparse, particularly in coastal and shelf seas. There is a need to widen the techniques used to assess the fitness for purpose of, and gaps in, the ocean observing system. As well as observing system simulation experiments that quantify the effect of observations on overall model skill, we present a gap analysis based on (1) examining where high model skill is required, based on a marine spatial planning analysis of European seas (i.e., where does activity take place that requires more accurate forecasts?), and (2) assessing gaps based on the capacity of the observing system to answer key societal challenges, e.g., site suitability for aquaculture and ocean energy, oil spill response, and contextual oceanographic products for fisheries and ecosystems. This broad-based analysis will inform the development of the proposed European Ocean Observing System as a contribution to the Global Ocean Observing System (GOOS).

  9. Nonlinear analysis of a rotor-bearing system using describing functions

    NASA Astrophysics Data System (ADS)

    Maraini, Daniel; Nataraj, C.

    2018-04-01

    This paper presents a technique for modelling the nonlinear behavior of a rotor-bearing system with Hertzian contact, clearance, and rotating unbalance. The rotor-bearing system is separated into linear and nonlinear components, and the nonlinear bearing force is replaced with an equivalent describing function gain. The describing function captures the relationship between the amplitude of the fundamental input to the nonlinearity and the fundamental output. The frequency response is constructed for various values of the clearance parameter, and the results show the presence of a jump resonance in bearings with both clearance and preload. Nonlinear hardening type behavior is observed in the case with clearance and softening behavior is observed for the case with preload. Numerical integration is also carried out on the nonlinear equations of motion showing strong agreement with the approximate solution. This work could easily be extended to include additional nonlinearities that arise from defects, providing a powerful diagnostic tool.
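
    A describing function for a static nonlinearity can be obtained numerically by driving it with a sinusoid and retaining the fundamental Fourier component of the output. The sketch below uses a dead-zone (clearance-like) nonlinearity with illustrative parameters; it is not the paper's Hertzian-contact bearing model.

```python
import numpy as np

def describing_function_gain(nonlinearity, amplitude, n_samples=2048):
    """Fundamental-harmonic (describing function) gain of a static nonlinearity
    for a sinusoidal input x(t) = A sin(wt)."""
    t = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    x = amplitude * np.sin(t)
    y = nonlinearity(x)
    # Fundamental Fourier coefficients of the output
    b1 = (2.0 / n_samples) * np.sum(y * np.sin(t))   # in-phase component
    a1 = (2.0 / n_samples) * np.sum(y * np.cos(t))   # quadrature component
    return complex(b1, a1) / amplitude

def dead_zone(x, clearance=0.1, stiffness=1.0):
    """Clearance-like dead-zone: no restoring force inside the gap, linear outside."""
    return stiffness * np.sign(x) * np.maximum(np.abs(x) - clearance, 0.0)

# Equivalent gain as a function of input amplitude (illustrative values):
gains = [describing_function_gain(dead_zone, A) for A in (0.2, 0.5, 1.0, 2.0)]
```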

  10. Remote battlefield observer technology (REBOT)

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Uhlmann, Jeffrey K.; Julier, Simon J.; Kuo, Eddy

    1999-07-01

    Battlefield situation awareness is the most fundamental prerequisite for effective command and control. Information about the state of the battlefield must be both timely and accurate. Imagery data is of particular importance because it can be directly used to monitor the deployment of enemy forces in a given area of interest, the traversability of the terrain in that area, as well as many other variables that are critical for tactical and force level planning. In this paper we describe prototype REmote Battlefield Observer Technology (REBOT) that can be deployed at specified locations and subsequently tasked to transmit high resolution panoramic imagery of its surrounding area. Although first generation REBOTs will be stationary platforms, the next generation will be autonomous ground vehicles capable of transporting themselves to specified locations. We argue that REBOT fills a critical gap in present situation awareness technologies. We expect to provide results of REBOT tests to be conducted at the 1999 Marines Advanced Warfighting Demonstration.

  11. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.
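
    For orientation, the incremental second-order projection framework analyzed by this class of methods can be summarized as follows (a generic form; the paper's contribution concerns the numerical boundary conditions and the pressure-update formula, which vary between variants):

```latex
\frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
  +\bigl[(\mathbf{u}\cdot\nabla)\mathbf{u}\bigr]^{n+1/2}
  = -\nabla p^{\,n-1/2}
  + \frac{\nu}{2}\,\nabla^{2}\bigl(\mathbf{u}^{*}+\mathbf{u}^{n}\bigr),
\qquad
\nabla^{2}\phi^{\,n+1} = \frac{\nabla\cdot\mathbf{u}^{*}}{\Delta t},
\qquad
\mathbf{u}^{n+1} = \mathbf{u}^{*}-\Delta t\,\nabla\phi^{\,n+1},
\qquad
p^{\,n+1/2} = p^{\,n-1/2}+\phi^{\,n+1}.
```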

  12. Kinetic isotope effects and how to describe them

    PubMed Central

    Karandashev, Konstantin; Xu, Zhen-Hao; Meuwly, Markus; Vaníček, Jiří; Richardson, Jeremy O.

    2017-01-01

    We review several methods for computing kinetic isotope effects in chemical reactions including semiclassical and quantum instanton theory. These methods describe both the quantization of vibrational modes as well as tunneling and are applied to the ⋅H + H2 and ⋅H + CH4 reactions. The absolute rate constants computed with the semiclassical instanton method both using on-the-fly electronic structure calculations and fitted potential-energy surfaces are also compared directly with exact quantum dynamics results. The error inherent in the instanton approximation is found to be relatively small and similar in magnitude to that introduced by using fitted surfaces. The kinetic isotope effect computed by the quantum instanton is even more accurate, and although it is computationally more expensive, the efficiency can be improved by path-integral acceleration techniques. We also test a simple approach for designing potential-energy surfaces for the example of proton transfer in malonaldehyde. The tunneling splittings are computed, and although they are found to deviate from experimental results, the ratio of the splitting to that of an isotopically substituted form is in much better agreement. We discuss the strengths and limitations of the potential-energy surface and based on our findings suggest ways in which it can be improved. PMID:29282447

  13. Rating a Teacher Observation Tool: Five Ways to Ensure Classroom Observations are Focused and Rigorous

    ERIC Educational Resources Information Center

    New Teacher Project, 2011

    2011-01-01

    This "Rating a Teacher Observation Tool" identifies five simple questions and provides an easy-to-use scorecard to help policymakers decide whether an observation framework is likely to produce fair and accurate results. The five questions are: (1) Do the criteria and tools cover the classroom performance areas most connected to student outcomes?…

  14. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) are fully incorporated and universally expressed. In addition, we have developed specific test patterns for the model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  15. Improving the Operations of the Earth Observing One Mission via Automated Mission Planning

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Tran, Daniel; Rabideau, Gregg; Schaffer, Steve; Mandl, Daniel; Frye, Stuart

    2010-01-01

    We describe the modeling and reasoning about operations constraints in an automated mission planning system for an earth observing satellite - EO-1. We first discuss the large number of elements that can be naturally represented in an expressive planning and scheduling framework. We then describe a number of constraints that challenge the current state of the art in automated planning systems and discuss how we modeled these constraints as well as discuss tradeoffs in representation versus efficiency. Finally we describe the challenges in efficiently generating operations plans for this mission. These discussions involve lessons learned from an operations model that has been in use since Fall 2004 (called R4) as well as a newer more accurate operations model operational since June 2009 (called R5). We present analysis of the R5 software documenting a significant (greater than 50%) increase in the number of weekly observations scheduled by the EO-1 mission. We also show that the R5 mission planning system produces schedules within 15% of an upper bound on optimal schedules. This operational enhancement has created value of millions of dollars US over the projected remaining lifetime of the EO-1 mission.

  16. Astrometric Observation of MACHO Gravitational Microlensing

    NASA Technical Reports Server (NTRS)

    Boden, A. F.; Shao, M.; Van Buren, D.

    1997-01-01

    This paper discusses the prospects for astrometric observation of MACHO gravitational microlensing events. We derive the expected astrometric observables for a simple microlensing event assuming a dark MACHO, and demonstrate that accurate astrometry can determine the lens mass, distance, and proper motion in a very general fashion.

  17. Accurate EPR radiosensitivity calibration using small sample masses

    NASA Astrophysics Data System (ADS)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  18. Adiabatically describing rare earths using microscopic deformations

    NASA Astrophysics Data System (ADS)

    Nobre, Gustavo; Dupuis, Marc; Herman, Michal; Brown, David

    2017-09-01

    Recent works showed that reactions on well-deformed nuclei in the rare-earth region are very well described by an adiabatic method. This assumes a spherical optical potential (OP) accounting for non-rotational degrees of freedom while the deformed configuration is described by couplings to states of the g.s. rotational band. This method has, apart from the global OP, only the deformation parameters as inputs, with no additional fitted variables. For this reason, it has only been applied to nuclei with well-measured deformations. With the new computational capabilities, microscopic large-scale calculations of deformation parameters within the HFB method based on the D1S Gogny force are available in the literature. We propose to use such microscopic deformations in our adiabatic method, allowing us to reproduce the cross-section agreement observed in stable nuclei, and to reliably extend this description to nuclei far from stability, describing the whole rare-earth region. Since all cross sections, such as capture and charge exchange, strongly depend on the correct calculation of absorption from the incident channel (from direct reaction mechanisms), this approach significantly improves the accuracy of cross sections and transitions relevant to astrophysical studies. The work at BNL was sponsored by the Office of Nuclear Physics, Office of Science of the US Department of Energy, under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC.

  19. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis when more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing methods (ABC). Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages.
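
    A toy calculation with invented numbers (not taken from the article) shows why the distinction matters: aggregate costing assigns every treatment the same average cost, while activity-based costing traces nursing time, supplies, and a share of overhead to each treatment type.

      # Hypothetical figures, for illustration only.
      nurse_rate, overhead_total = 40.0, 30_000.0           # $/hour and total overhead
      treatments = {
          "routine_visit": {"nurse_hours": 0.5, "supplies": 10.0, "volume": 900},
          "complex_wound": {"nurse_hours": 2.0, "supplies": 80.0, "volume": 100},
      }

      direct = {n: t["nurse_hours"] * nurse_rate + t["supplies"] for n, t in treatments.items()}
      total_volume = sum(t["volume"] for t in treatments.values())
      total_cost = overhead_total + sum(direct[n] * t["volume"] for n, t in treatments.items())

      aggregate = total_cost / total_volume                 # one figure for every treatment
      overhead_each = overhead_total / total_volume
      for n in treatments:
          abc = direct[n] + overhead_each                   # activity-based cost per treatment
          print(f"{n}: aggregate ${aggregate:.2f} vs ABC ${abc:.2f}")

    With these numbers the single aggregate figure of about $73 overstates the routine visit (ABC ≈ $60) and badly understates the complex wound care (ABC ≈ $190), which is exactly the kind of distortion that can sink a capitation bid.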

  20. Kalman filter data assimilation: Targeting observations and parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
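
    The core targeting idea, placing the next observation where the ensemble is most uncertain, can be illustrated in a few lines; the 40-variable state, ensemble size, observation error, and the simple stochastic ensemble update below are assumptions for illustration and are not the LETKF configuration used in the paper.

      import numpy as np

      # Toy sketch of observation targeting: observe the state component with the largest
      # ensemble spread, then apply a scalar stochastic ensemble update to that component.
      rng = np.random.default_rng(0)
      n_state, n_ens, obs_err = 40, 20, 0.5
      truth = rng.normal(size=n_state)
      ensemble = truth + rng.normal(scale=2.0, size=(n_ens, n_state))

      target = int(np.argmax(ensemble.var(axis=0, ddof=1)))      # most uncertain component
      y = truth[target] + rng.normal(scale=obs_err)               # the targeted observation

      prior_var = ensemble[:, target].var(ddof=1)
      gain = prior_var / (prior_var + obs_err**2)                 # scalar Kalman gain
      perturbed = y + rng.normal(scale=obs_err, size=n_ens)       # perturbed-observation EnKF
      ensemble[:, target] += gain * (perturbed - ensemble[:, target])

      print("targeted component:", target)
      print("posterior spread there:", round(float(ensemble[:, target].std(ddof=1)), 3))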

  1. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    PubMed Central

    Moore, Michael J; Dhingra, Amit; Soltis, Pamela S; Shaw, Regina; Farmerie, William G; Folta, Kevin M; Soltis, Douglas E

    2006-01-01

    Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy observed in the GS 20 plastid

  2. A time-accurate finite volume method valid at all flow velocities

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1993-01-01

    A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. A comparison of three generally accepted time-advancing schemes, i.e., the Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA) schemes, is made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and it exhibits an undesirable strong dependence on the time-step size. The degraded numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method that incorporates the ITA is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on the Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil

  3. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle’s accurate position, azimuth and attitude rapidly is significant for vehicle-based weapons’ combat effectiveness. In this paper, a new approach to acquiring a vehicle’s accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle’s accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm’s iterative initial value; consequently, it does not place high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper described the system’s working procedure, presented the theoretical derivation of the algorithm, and then verified its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″, respectively, in less than 3 min. PMID:26492249

  4. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-10-20

    Getting a land vehicle's accurate position, azimuth and attitude rapidly is significant for vehicle-based weapons' combat effectiveness. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not place high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper described the system's working procedure, presented the theoretical derivation of the algorithm, and then verified its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″, respectively, in less than 3 min.

  5. Predictions for Swift Follow-up Observations of Advanced LIGO/Virgo Gravitational Wave Sources

    NASA Astrophysics Data System (ADS)

    Racusin, Judith; Evans, Phil; Connaughton, Valerie

    2015-04-01

    The likely detection of gravitational waves associated with the inspiral of neutron star binaries by the upcoming advanced LIGO/Virgo observatories will be complemented by searches for electromagnetic counterparts over large areas of the sky by Swift and other observatories. As short gamma-ray bursts (GRB) are the most likely electromagnetic counterpart candidates to these sources, we can make predictions based upon the last decade of GRB observations by Swift and Fermi. Swift is uniquely capable of accurately localizing new transients rapidly over large areas of the sky in single and tiled pointings, enabling ground-based follow-up. We describe simulations of the detectability of short GRB afterglows by Swift given existing and hypothetical tiling schemes with realistic observing conditions and delays, which guide the optimal observing strategy and improvements provided by coincident detection with observatories such as Fermi-GBM.

  6. Is Cancer Information Exchanged on Social Media Scientifically Accurate?

    PubMed

    Gage-Bouchard, Elizabeth A; LaValley, Susan; Warunek, Molli; Beaupin, Lynda Kwon; Mollica, Michelle

    2017-07-19

    Cancer patients and their caregivers are increasingly using social media as a platform to share cancer experiences, connect with support, and exchange cancer-related information. Yet, little is known about the nature and scientific accuracy of cancer-related information exchanged on social media. We conducted a content analysis of 12 months of data from 18 publicly available Facebook Pages hosted by parents of children with acute lymphoblastic leukemia (N = 15,852 posts) and extracted all exchanges of medically-oriented cancer information. We systematically coded for themes in the nature of cancer-related information exchanged on personal Facebook Pages and two oncology experts independently evaluated the scientific accuracy of each post. Of the 15,852 total posts, 171 posts contained medically-oriented cancer information. The most frequent type of cancer information exchanged was information related to treatment protocols and health services use (35%) followed by information related to side effects and late effects (26%), medication (16%), medical caregiving strategies (13%), alternative and complementary therapies (8%), and other (2%). Overall, 67% of all cancer information exchanged was deemed medically/scientifically accurate, 19% was not medically/scientifically accurate, and 14% described unproven treatment modalities. These findings highlight the potential utility of social media as a cancer-related resource, but also indicate that providers should focus on recommending reliable, evidence-based sources to patients and caregivers.

  7. Applications of high spectral resolution FTIR observations demonstrated by the radiometrically accurate ground-based AERI and the scanning HIS aircraft instruments

    NASA Astrophysics Data System (ADS)

    Revercomb, Henry E.; Knuteson, Robert O.; Best, Fred A.; Tobin, David C.; Smith, William L.; Feltz, Wayne F.; Petersen, Ralph A.; Antonelli, Paolo; Olson, Erik R.; LaPorte, Daniel D.; Ellington, Scott D.; Werner, Mark W.; Dedecker, Ralph G.; Garcia, Raymond K.; Ciganovich, Nick N.; Howell, H. Benjamin; Vinson, Kenneth; Ackerman, Steven A.

    2003-06-01

    Development in the mid-1980s of the High-resolution Interferometer Sounder (HIS) for the high altitude NASA ER2 aircraft demonstrated the capability for advanced atmospheric temperature and water vapor sounding and set the stage for new satellite instruments that are now becoming a reality [AIRS (2002), CrIS (2006), IASI (2006), GIFTS (2005/6)]. Follow-on developments at the University of Wisconsin-Madison that employ interferometry for a wide range of Earth observations include the ground-based Atmospheric Emitted Radiance Interferometer (AERI) and the Scanning HIS aircraft instrument (S-HIS). The AERI was developed for the US DOE Atmospheric Radiation Measurement (ARM) Program, primarily to provide highly accurate radiance spectra for improving radiative transfer models. The continuously operating AERI soon demonstrated valuable new capabilities for sensing the rapidly changing state of the boundary layer and properties of the surface and clouds. The S-HIS is a smaller version of the original HIS that uses cross-track scanning to enhance spatial coverage. S-HIS and its close cousin, the NPOESS Airborne Sounder Testbed (NAST) operated by NASA Langley, are being used for satellite instrument validation and for atmospheric research. The calibration and noise performance of these and future satellite instruments is key to optimizing their remote sensing products. Recently developed techniques for improving effective radiometric performance by removing noise in post-processing are a primary subject of this paper.

  8. Accurate Modelling of Surface Currents and Internal Tides in a Semi-enclosed Coastal Sea

    NASA Astrophysics Data System (ADS)

    Allen, S. E.; Soontiens, N. K.; Dunn, M. B. H.; Liu, J.; Olson, E.; Halverson, M. J.; Pawlowicz, R.

    2016-02-01

    The Strait of Georgia is a deep (400 m), strongly stratified, semi-enclosed coastal sea on the west coast of North America. We have configured a baroclinic model of the Strait of Georgia and surrounding coastal waters using the NEMO ocean community model. We run daily nowcasts and forecasts and publish our sea-surface results (including storm surge warnings) to the web (salishsea.eos.ubc.ca/storm-surge). Tides in the Strait of Georgia are mixed and large. The baroclinic model and previous barotropic models accurately represent tidal sea-level variations and depth mean currents. The baroclinic model reproduces accurately the diurnal but not the semi-diurnal baroclinic tidal currents. In the Southern Strait of Georgia, strong internal tidal currents at the semi-diurnal frequency are observed. Strong semi-diurnal tides are also produced in the model, but are almost 180 degrees out of phase with the observations. In the model, in the surface, the barotropic and baroclinic tides reinforce, whereas the observations show that at the surface the baroclinic tides oppose the barotropic. As such the surface currents are very poorly modelled. Here we will present evidence of the internal tidal field from observations. We will discuss the generation regions of the tides, the necessary modifications to the model required to correct the phase, the resulting baroclinic tides and the improvements in the surface currents.

  9. Supervision of Student Teachers: Objective Observation.

    ERIC Educational Resources Information Center

    Neide, Joan

    1996-01-01

    By keeping accurate records, student teacher supervisors can present concrete evidence about physical education student teachers' classroom performance. The article describes various ways to collect objective data, including running records, at-task records, verbal flow records, class traffic records, interaction analysis records, and global scan…

  10. How accurately do drivers evaluate their own driving behavior? An on-road observational study.

    PubMed

    Amado, Sonia; Arıkan, Elvan; Kaça, Gülin; Koyuncu, Mehmet; Turkan, B Nilay

    2014-02-01

    Self-assessment of driving skills became a noteworthy research subject in traffic psychology, since by knowing one's strengths and weaknesses, drivers can take efficient compensatory action to moderate risk and to ensure safety in hazardous environments. The current study aims to investigate drivers' self-conception of their own driving skills and behavior in relation to expert evaluations of their actual driving, by using a naturalistic and systematic observation method during an actual on-road driving session and by assessing the different aspects of driving via comprehensive scales sensitive to different specific aspects of driving. Male participants aged 19-63 years (N=158) attended an on-road driving session lasting approximately 80 min (45 km). During the driving session, drivers' errors and violations were recorded by an expert observer. At the end of the driving session, observers completed the driver evaluation questionnaire, while drivers completed the driving self-evaluation questionnaire and Driver Behavior Questionnaire (DBQ). Low to moderate correlations between driver and observer evaluations of driving skills and behavior, mainly on errors and violations of speed and traffic lights, were found. Furthermore, the robust finding that drivers evaluate their driving performance as better than the expert did was replicated. Over-positive appraisal was higher among drivers with higher error/violation scores and among those evaluated by the expert as "unsafe". We suggest that the traffic environment might be regulated by increasing feedback indicators of errors and violations, which in turn might increase insight into driving performance. Improving self-awareness by training and feedback sessions might play a key role in reducing the probability of risk in drivers' activity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Anchoring the Population II Distance Scale: Accurate Ages for Globular Clusters

    NASA Technical Reports Server (NTRS)

    Chaboyer, Brian C.; Carney, Bruce W.; Latham, David W.; Dunca, Douglas; Grand, Terry; Layden, Andy; Sarajedini, Ataollah; McWilliam, Andrew; Shao, Michael

    2004-01-01

    The metal-poor stars in the halo of the Milky Way galaxy were among the first objects formed in our Galaxy. These Population II stars are the oldest objects in the universe whose ages can be accurately determined. Age determinations for these stars allow us to set a firm lower limit to the age of the universe and to probe the early formation history of the Milky Way. The age of the universe determined from studies of Population II stars may be compared to the expansion age of the universe and used to constrain cosmological models. The largest uncertainty in estimates for the ages of stars in our halo is due to the uncertainty in the distance scale to Population II objects. We propose to obtain accurate parallaxes to a number of Population II objects (globular clusters and field stars in the halo) resulting in a significant improvement in the Population II distance scale and greatly reducing the uncertainty in the estimated ages of the oldest stars in our galaxy. At the present time, the oldest stars are estimated to be 12.8 Gyr old, with an uncertainty of approx. 15%. The SIM observations obtained by this key project, combined with the supporting theoretical research and ground based observations outlined in this proposal, will reduce the uncertainty in the age estimates to 5%.

  12. Gas Dynamics and Kinetics in the Cometary Coma: Theory and Observations

    NASA Technical Reports Server (NTRS)

    Combi, Michael R.; Harris, Walter M.; Smyth, William H.

    2005-01-01

    Our ability to describe the physical state of the expanding coma affects fundamental areas of cometary study both directly and indirectly. In order to convert measured abundances of gas species in the coma to gas production rates, models for the distribution and kinematics of gas species in the coma are required. Conversely, many different types of observations, together with laboratory data and theory, are still required to determine coma model attributes and parameters. Accurate relative and absolute gas production rates and their variations with time and from comet to comet are crucial to our basic understanding of the composition and structure of cometary nuclei and their place in the solar system. We review the gas dynamics and kinetics of cometary comae from both theoretical and observational perspectives, which are important for understanding the wide variety of physical conditions that are encountered.

  13. The Scintillation Prediction Observations Research Task (SPORT) Mission

    NASA Astrophysics Data System (ADS)

    Spann, J. F.; Swenson, C.; Durão, O.; Loures, L.; Heelis, R. A.; Bishop, R. L.; Le, G.; Abdu, M. A.; Habash Krause, L.; De Nardin, C. M.; Fonseca, E.

    2015-12-01

    Structure in the charged particle number density in the equatorial ionosphere can have a profound impact on the fidelity of HF, VHF and UHF radio signals that are used for ground-to-ground and space-to-ground communication and navigation. The degree to which such systems can be compromised depends in large part on the spatial distribution of the structured regions in the ionosphere and the background plasma density in which they are embedded. In order to address these challenges it is necessary to accurately distinguish the background ionospheric conditions that favor the generation of irregularities from those that do not. Additionally we must relate the evolution of those conditions to the subsequent evolution of the irregular plasma regions themselves. The background ionospheric conditions are conveniently described by latitudinal profiles of the plasma density at nearly constant altitude, which describe the effects of ExB drifts and neutral winds, while the appearance and growth of plasma structure requires committed observations from the ground from at least one fixed longitude. This talk will present an international collaborative CubeSat mission called SPORT that stands for Scintillation Prediction Observations Research Task. This mission will advance our understanding of the nature and evolution of ionospheric structures around sunset to improve predictions of disturbances that affect radio propagation and telecommunication signals. The science goals will be accomplished by a unique combination of satellite observations from a nearly circular middle inclination orbit and the extensive operation of ground based observations from South America near the magnetic equator. This approach promises Explorer class science at a CubeSat price.

  14. The Scintillation Prediction Observations Research Task (SPORT) Mission

    NASA Astrophysics Data System (ADS)

    Spann, James; Le, Guan; Swenson, Charles; Denardini, Clezio Marcos; Bishop, Rebecca L.; Abdu, Mangalathayil A.; Cupertino Durao, Otavio S.; Heelis, Roderick; Loures, Luis; Krause, Linda; Fonseca, Eloi

    2016-07-01

    Structure in the charged particle number density in the equatorial ionosphere can have a profound impact on the fidelity of HF, VHF and UHF radio signals that are used for ground-to-ground and space-to-ground communication and navigation. The degree to which such systems can be compromised depends in large part on the spatial distribution of the structured regions in the ionosphere and the background plasma density in which they are embedded. In order to address these challenges it is necessary to accurately distinguish the background ionospheric conditions that favor the generation of irregularities from those that do not. Additionally we must relate the evolution of those conditions to the subsequent evolution of the irregular plasma regions themselves. The background ionospheric conditions are conveniently described by latitudinal profiles of the plasma density at nearly constant altitude, which describe the effects of ExB drifts and neutral winds, while the appearance and growth of plasma structure requires committed observations from the ground from at least one fixed longitude. This talk will present an international collaborative CubeSat mission called SPORT that stands for the Scintillation Prediction Observations Research Task. This mission will advance our understanding of the nature and evolution of ionospheric structures around sunset to improve predictions of disturbances that affect radio propagation and telecommunication signals. The science goals will be accomplished by a unique combination of satellite observations from a nearly circular middle inclination orbit and the extensive operation of ground based observations from South America near the magnetic equator. This approach promises Explorer class science at a CubeSat price.

  15. The Scintillation Prediction Observations Research Task (SPORT) Mission

    NASA Astrophysics Data System (ADS)

    Spann, James; Swenson, Charles; Durão, Otavio; Loures, Luis; Heelis, Rod; Bishop, Rebecca; Le, Guan; Abdu, Mangalathayil; Krause, Linda; Nardin, Clezio; Fonseca, Eloi

    2016-04-01

    Structure in the charged particle number density in the equatorial ionosphere can have a profound impact on the fidelity of HF, VHF and UHF radio signals that are used for ground-to-ground and space-to-ground communication and navigation. The degree to which such systems can be compromised depends in large part on the spatial distribution of the structured regions in the ionosphere and the background plasma density in which they are embedded. In order to address these challenges it is necessary to accurately distinguish the background ionospheric conditions that favor the generation of irregularities from those that do not. Additionally we must relate the evolution of those conditions to the subsequent evolution of the irregular plasma regions themselves. The background ionospheric conditions are conveniently described by latitudinal profiles of the plasma density at nearly constant altitude, which describe the effects of ExB drifts and neutral winds, while the appearance and growth of plasma structure requires committed observations from the ground from at least one fixed longitude. This talk will present an international collaborative CubeSat mission called SPORT that stands for the Scintillation Prediction Observations Research Task. This mission will advance our understanding of the nature and evolution of ionospheric structures around sunset to improve predictions of disturbances that affect radio propagation and telecommunication signals. The science goals will be accomplished by a unique combination of satellite observations from a nearly circular middle inclination orbit and the extensive operation of ground based observations from South America near the magnetic equator. This approach promises Explorer class science at a CubeSat price.

  16. Simple and accurate wavemeter implemented with a polarization interferometer.

    PubMed

    Dimmick, T E

    1997-12-20

    A simple and accurate wavemeter for measuring the wavelength of monochromatic light is described. The device uses the wavelength-dependent phase lag between principal polarization states of a length of birefringent material (retarder) as the basis for the measurement of the optical wavelength. The retarder is sandwiched between a polarizer and a polarizing beam splitter and is oriented such that its principal axes are 45 deg to the axis of the polarizer and the principal axes of the beam splitter. As a result of the disparity in propagation velocities between the principal polarization states of the retarder, the ratio of the optical power exiting the two ports of the polarizing beam splitter is wavelength dependent. If the input wavelength is known to be within a specified range, the measurement of the power ratio uniquely determines the input wavelength. The device offers the advantage of trading wavelength coverage for increased resolution simply through the choice of the retarder length. Implementations of the device employing both bulk-optic components and fiber-optic components are described, and the results of a laboratory test of a fiber-optic prototype are presented. The prototype had a wavelength accuracy of +/-0.03 nm.
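
    The measurement principle can be sketched with assumed numbers: a birefringent retarder (birefringence dn, length L) between a 45-degree polarizer and a polarizing beam splitter sends powers proportional to cos²(δ/2) and sin²(δ/2) to the two ports, where δ = 2πΔnL/λ. The quartz-like retarder parameters, search range, and grid inversion below are illustrative assumptions, not the prototype described in the paper.

      import numpy as np

      # Invert the measured port-power ratio for wavelength, given that the wavelength is
      # known to lie in a range narrow enough that the ratio is single-valued there.
      dn, L = 0.009, 0.55e-3                       # assumed quartz-like retarder, 0.55 mm

      def port_ratio(lam):
          delta = 2.0 * np.pi * dn * L / lam       # phase lag between principal states
          return np.tan(delta / 2.0) ** 2          # power in port 2 / power in port 1

      def wavelength_from_ratio(r, lam_lo, lam_hi, n=200001):
          grid = np.linspace(lam_lo, lam_hi, n)    # assumed a-priori wavelength range
          return grid[np.argmin(np.abs(port_ratio(grid) - r))]

      true_lam = 1550.00e-9
      measured = port_ratio(true_lam)              # what the two detectors would report
      print(f"recovered: {wavelength_from_ratio(measured, 1540e-9, 1560e-9) * 1e9:.3f} nm")

    A longer retarder steepens the ratio-versus-wavelength curve (better resolution) but shrinks the unambiguous range, which is the coverage-for-resolution trade-off noted in the abstract.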

  17. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR avoids the problems of long revisit times and low-resolution images. Because it can acquire abundant information flexibly, accurately, and quickly, airborne repeat-pass InSAR is important for deformation monitoring of shallow ground. To obtain precise ground elevation information and the interferometric coherence needed for deformation monitoring from the master and slave images, accurate co-registration must be ensured. Because of the side-looking geometry, repeated observation paths, and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates in the master and slave images to correspond to ground resolution cells of different sizes. The mismatch is most evident in the long-slant-range parts of the master and slave images. To resolve the differing pixel sizes and obtain accurate co-registration results, a new method based on the Range-Doppler (RD) imaging model is proposed. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiments. The experimental results show that the proposed method leads to superior co-registration accuracy.

  18. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  19. Integrating observational and modelling systems for the management of the Great Barrier Reef

    NASA Astrophysics Data System (ADS)

    Baird, M. E.; Jones, E. M.; Margvelashvili, N.; Mongin, M.; Rizwi, F.; Robson, B.; Schroeder, T.; Skerratt, J.; Steven, A. D.; Wild-Allen, K.

    2016-02-01

    Observational and modelling systems provide two sources of knowledge that must be combined to provide a more complete view than either observations or models alone can provide. Here we describe the eReefs coupled hydrodynamic, sediment and biogeochemical model that has been developed for the Great Barrier Reef, and the multiple observations that are used to constrain the model. Two contrasting examples of model-observation integration are highlighted. First we explore the carbon chemistry of the waters above the reef, for which observations are accurate, but expensive and therefore sparse, while model behaviour is highly skilful. For carbon chemistry, observations are used to constrain model parameterisation and quantify model error, with the model output itself providing the most usable knowledge for management purposes. In contrast, ocean colour provides inaccurate, but cheap and spatially and temporally extensive observations. Thus these observations are best combined with the model in a data-assimilating framework, where a custom-designed optical model has been developed for the purposes of incorporating ocean colour observations. The future management of Great Barrier Reef water quality will be based on an integration of observing and modelling systems, providing the most robust information available.

  20. Using GEO Optical Observations to Infer Orbit Populations

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Africano, John

    2002-01-01

    NASA's Orbital Debris measurements program has a goal to characterize the small debris environment in the geosynchronous Earth-orbit (GEO) region using optical telescopes ("small" refers to objects too small to catalog and track with current systems). Traditionally, observations of GEO and near-GEO objects involve following the object with the telescope long enough to obtain an orbit. When observing very dim objects with small field-of-view telescopes, though, the observations are generally too short to obtain accurate orbital elements. However, it is possible to use such observations to statistically characterize the small object environment. A telescope pointed at a particular spot could potentially see objects in a number of different orbits. Inevitably, when looking at one region for certain types of orbits, there are objects in other types of orbits that cannot be seen. Observation campaigns are designed with these limitations in mind and are set up to span a number of regions of the sky, making it possible to sample all potential orbits under consideration. Each orbit is not seen with the same probability, however, so there are observation biases intrinsic to any observation campaign. Fortunately, it is possible to remove such biases and reconstruct a meaningful estimate of the statistical orbit populations of small objects in GEO. This information, in turn, can be used to investigate the nature of debris sources and to characterize the risk to GEO spacecraft. This paper describes these statistical tools and presents estimates of small object GEO populations.

  1. Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft

    NASA Astrophysics Data System (ADS)

    Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei

    2018-05-01

    Facing the complex aerodynamic environment of the Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial state uncertainties and aerodynamic parameter uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for external disturbances and uncertainties attenuation. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and integral control scheme (5.8 km).
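
    The disturbance-observer idea itself can be shown on a toy system; the scalar plant, gains, and proportional "nominal controller" below are assumptions for illustration and have nothing to do with the paper's entry guidance.

      import numpy as np

      # Scalar plant x_{k+1} = a*x_k + b*(u_k + d) with unknown constant disturbance d.
      # The observer integrates the prediction error to estimate d, and the estimate is
      # fed forward so the effective input seen by the plant matches the commanded one.
      a, b, d_true = 0.95, 0.10, 2.0
      k_obs, k_p = 0.5, 4.0                        # observer and feedback gains, assumed
      x, x_pred, d_hat = 1.0, 1.0, 0.0

      for _ in range(200):
          u_nom = -k_p * x                         # nominal feedback toward x = 0
          u = u_nom - d_hat                        # feedforward compensation of the disturbance
          x = a * x + b * (u + d_true)             # true plant
          x_pred = a * x_pred + b * (u + d_hat)    # model prediction using current estimate
          d_hat += k_obs * (x - x_pred) / b        # correct the estimate from the mismatch
          x_pred = x                               # restart prediction from the measured state

      print(f"estimated disturbance: {d_hat:.3f} (true {d_true})")
      print(f"final state: {x:.4f}")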

  2. Can the three pore model correctly describe peritoneal transport of protein?

    PubMed

    Waniewski, Jacek; Poleszczuk, Jan; Antosiewicz, Stefan; Baczyński, Daniel; Gałach, Magda; Pietribiasi, Mauro; Wańkowicz, Zofia

    2014-01-01

    The three pore model (3PM) includes large pores for the description of protein leak to the peritoneal cavity during peritoneal dialysis. However, the reliability of this description has not yet been fully tested against clinical data. Peritoneal transport parameters were estimated using the 3PM, an extended 3-pore model (with estimation of the fraction of large pores, ext3PM), the ext3PM with modified sizes of pores and proteins (mext3PM), and a simplified two-pore model (2PM, small and ultrasmall pores) for 32 patients on peritoneal dialysis investigated using the sequential peritoneal equilibration test (consecutive peritoneal equilibration test [PET]: glucose 2.27%, 4 h, and miniPET: glucose 3.86%, 1 h). Urea, creatinine, glucose, sodium, phosphate, albumin, and IgM concentrations were measured in dialysis fluid and plasma. Ext3PM and mext3PM, with a large-pore fraction of about 0.14, provided a good description of fluid and small solute kinetics, but their predictions for albumin transport were less accurate. The two-pore model precisely described the data on fluid and small solute transport. The 3-pore models could not describe the diffusive-convective transport of albumin as precisely as the transport of fluid, small solutes, and IgM. The 2PM (not applicable for proteins) was an efficient tool for modeling fluid and small solute transport.

  3. Temperature dependent effective potential method for accurate free energy calculations of solids

    NASA Astrophysics Data System (ADS)

    Hellman, Olle; Steneteg, Peter; Abrikosov, I. A.; Simak, S. I.

    2013-03-01

    We have developed a thorough and accurate method of determining anharmonic free energies, the temperature dependent effective potential technique (TDEP). It is based on ab initio molecular dynamics followed by a mapping onto a model Hamiltonian that describes the lattice dynamics. The formalism and the numerical aspects of the technique are described in detail. A number of practical examples are given, and results are presented, which confirm the usefulness of TDEP within ab initio and classical molecular dynamics frameworks. In particular, we examine from first principles the behavior of force constants upon the dynamical stabilization of the body centered phase of Zr, and show that they become more localized. We also calculate the phase diagram for 4He modeled with the Aziz potential and obtain results which are in favorable agreement with both experiment and established techniques.
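
    The heart of such a mapping can be illustrated in a few lines: given displacements and forces sampled from molecular dynamics, the effective harmonic force constants are the least-squares solution of f ≈ -Φu. The four-atom chain, synthetic anharmonic force law, and random sampling below are assumptions made only to keep the example self-contained; the actual technique works with full ab initio trajectories and symmetry-constrained force-constant tensors.

      import numpy as np

      # Toy least-squares fit of effective force constants from displacement/force data.
      rng = np.random.default_rng(1)
      n_atoms, n_samples = 4, 500

      # Synthetic reference: nearest-neighbour springs plus a weak on-site cubic term.
      Phi_ref = 2.0 * np.eye(n_atoms) - np.eye(n_atoms, k=1) - np.eye(n_atoms, k=-1)
      U = rng.normal(scale=0.1, size=(n_samples, n_atoms))       # displacements
      F = -U @ Phi_ref.T - 0.3 * U**3                            # forces with anharmonicity

      # Minimise |F + U Phi^T|^2 over the force-constant matrix Phi.
      Phi_fit = -np.linalg.lstsq(U, F, rcond=None)[0].T

      print(np.round(Phi_fit, 3))   # close to Phi_ref; the anharmonicity folds into the fit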

  4. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR. PMID:24605060
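
    A hedged sketch of the general grid-ID idea (not the paper's exact algorithms): each user reports only the ID of the grid cell containing them, and the anonymiser grows a block of cells around the requester's cell until at least K reported users are covered. The grid size, K, and the square-block growth rule are assumptions for illustration.

      from collections import Counter

      GRID = 16     # the map is divided into GRID x GRID cells
      K = 5

      def cell_id(cx, cy):
          return cy * GRID + cx

      def cloak(requester_cell, reported_cells, k=K):
          """Smallest centred block of cells containing at least k reported users."""
          counts = Counter(reported_cells)
          cx, cy = requester_cell % GRID, requester_cell // GRID
          for radius in range(GRID):
              block = {cell_id(x, y)
                       for x in range(max(0, cx - radius), min(GRID, cx + radius + 1))
                       for y in range(max(0, cy - radius), min(GRID, cy + radius + 1))}
              if sum(counts[c] for c in block) >= k:
                  return block
          return set(range(GRID * GRID))            # fall back to the whole map

      # Usage: users report grid cells only; exact coordinates never leave their devices.
      reports = [cell_id(3, 4), cell_id(3, 4), cell_id(4, 4),
                 cell_id(2, 5), cell_id(7, 9), cell_id(3, 5)]
      asr = cloak(cell_id(3, 4), reports)
      print(f"cloaked region covers {len(asr)} cells")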

  5. Kinetic determinations of accurate relative oxidation potentials of amines with reactive radical cations.

    PubMed

    Gould, Ian R; Wosinska, Zofia M; Farid, Samir

    2006-01-01

    Accurate oxidation potentials for organic compounds are critical for the evaluation of thermodynamic and kinetic properties of their radical cations. Except when using a specialized apparatus, electrochemical oxidation of molecules with reactive radical cations is usually an irreversible process, providing peak potentials, E(p), rather than thermodynamically meaningful oxidation potentials, E(ox). In a previous study on amines with radical cations that underwent rapid decarboxylation, we estimated E(ox) by correcting the E(p) from cyclic voltammetry with rate constants for decarboxylation obtained using laser flash photolysis. Here we use redox equilibration experiments to determine accurate relative oxidation potentials for the same amines. We also describe an extension of these experiments to show how relative oxidation potentials can be obtained in the absence of equilibrium, from a complete kinetic analysis of the reversible redox kinetics. The results provide support for the previous cyclic voltammetry/laser flash photolysis method for determining oxidation potentials.
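
    The thermodynamic link underlying such redox equilibration measurements is standard Nernst thermodynamics (not specific to this paper): for a one-electron transfer equilibrium between amines A and B, the measured equilibrium constant fixes the difference of their oxidation potentials,

      A^{\bullet +} + B \;\rightleftharpoons\; A + B^{\bullet +},
      \qquad
      K_{\mathrm{eq}} = \exp\!\left[\frac{F\left(E_{\mathrm{ox}}(A) - E_{\mathrm{ox}}(B)\right)}{RT}\right]
      \;\;\Longrightarrow\;\;
      \Delta E_{\mathrm{ox}} = \frac{RT}{F}\,\ln K_{\mathrm{eq}} .

    At 298 K, RT/F ≈ 25.7 mV, so each factor of 10 in K_eq corresponds to roughly 59 mV in ΔE_ox, which is why equilibration against a donor of known potential yields accurate relative values.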

  6. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... In the Matter of Accurate NDE & Docket: 150-00017, General Inspection, LLC Broussard, Louisiana... an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28, 2011, the NRC and Accurate NDE...

  7. An accurate metric for the spacetime around rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parametrized metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  8. How accurately can other people infer your thoughts-And does culture matter?

    PubMed

    Valanides, Constantinos; Sheppard, Elizabeth; Mitchell, Peter

    2017-01-01

    This research investigated how accurately people infer what others are thinking after observing a brief sample of their behaviour and whether culture/similarity is a relevant factor. Target participants (14 British and 14 Mediterraneans) were cued to think about either positive or negative events they had experienced. Subsequently, perceiver participants (16 British and 16 Mediterraneans) watched videos of the targets thinking about these things. Perceivers (both groups) were significantly accurate in judging when targets had been cued to think of something positive versus something negative, indicating notable inferential ability. Additionally, Mediterranean perceivers were better than British perceivers in making such inferences, irrespective of nationality of the targets, something that was statistically accounted for by corresponding group differences in levels of independently measured collectivism. The results point to the need for further research to investigate the possibility that being reared in a collectivist culture fosters ability in interpreting others' behaviour.

  9. How accurately can other people infer your thoughts—And does culture matter?

    PubMed Central

    Valanides, Constantinos; Sheppard, Elizabeth; Mitchell, Peter

    2017-01-01

    This research investigated how accurately people infer what others are thinking after observing a brief sample of their behaviour and whether culture/similarity is a relevant factor. Target participants (14 British and 14 Mediterraneans) were cued to think about either positive or negative events they had experienced. Subsequently, perceiver participants (16 British and 16 Mediterraneans) watched videos of the targets thinking about these things. Perceivers (both groups) were significantly accurate in judging when targets had been cued to think of something positive versus something negative, indicating notable inferential ability. Additionally, Mediterranean perceivers were better than British perceivers in making such inferences, irrespective of nationality of the targets, something that was statistically accounted for by corresponding group differences in levels of independently measured collectivism. The results point to the need for further research to investigate the possibility that being reared in a collectivist culture fosters ability in interpreting others’ behaviour. PMID:29112972

  10. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  11. Learning accurate and concise naïve Bayes classifiers from attribute value taxonomies and data

    PubMed Central

    Kang, D.-K.; Silvescu, A.; Honavar, V.

    2009-01-01

    In many application domains, there is a need for learning algorithms that can effectively exploit attribute value taxonomies (AVT)—hierarchical groupings of attribute values—to learn compact, comprehensible and accurate classifiers from data—including data that are partially specified. This paper describes AVT-NBL, a natural generalization of the naïve Bayes learner (NBL), for learning classifiers from AVT and data. Our experimental results show that AVT-NBL is able to generate classifiers that are substantially more compact and more accurate than those produced by NBL on a broad range of data sets with different percentages of partially specified values. We also show that AVT-NBL is more efficient in its use of training data: AVT-NBL produces classifiers that outperform those produced by NBL using substantially fewer training examples. PMID:20351793
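
    A hedged sketch of the general idea of exploiting a value taxonomy (this is not the AVT-NBL algorithm itself): abstract each attribute value to a coarser taxonomy node before counting, so that a naive Bayes model pools evidence and needs fewer parameters. The toy taxonomy, single-attribute data, and Laplace smoothing are all assumptions.

      from collections import Counter, defaultdict

      taxonomy = {"oak": "tree", "pine": "tree", "rose": "shrub", "holly": "shrub"}

      def abstract(value):
          return taxonomy.get(value, value)       # unknown values stay at the leaf level

      def train(rows):                            # rows: list of (attribute_value, label)
          counts, labels = defaultdict(Counter), Counter()
          for value, label in rows:
              counts[label][abstract(value)] += 1
              labels[label] += 1
          return counts, labels

      def predict(value, counts, labels):
          vocab = {v for c in counts.values() for v in c}
          def score(label):
              prior = labels[label] / sum(labels.values())
              lik = (counts[label][abstract(value)] + 1) / (labels[label] + len(vocab))
              return prior * lik                  # naive Bayes with Laplace smoothing
          return max(labels, key=score)

      rows = [("oak", "forest"), ("pine", "forest"), ("rose", "garden"),
              ("holly", "garden"), ("pine", "forest")]
      model = train(rows)
      print(predict("oak", *model))               # pooled "tree" counts -> "forest"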

  12. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    ERIC Educational Resources Information Center

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
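
    An illustrative period search on synthetic data (this is not the Excel-based method of the paper): fold the fragmented light curve at trial periods and keep the period that minimises the scatter within phase bins. The 6.3-hour sinusoidal light curve, noise level, and three observing windows below are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      true_period = 6.3                                        # hours
      t = np.concatenate([off + rng.uniform(0, 6, 50) for off in (0.0, 20.0, 45.0)])
      mag = 0.4 * np.sin(2 * np.pi * t / true_period) + 0.02 * rng.normal(size=t.size)

      def binned_scatter(period, n_bins=25):
          """Sum of brightness variances over phase bins after folding at 'period'."""
          phase = (t % period) / period
          bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
          return sum(mag[bins == b].var() for b in range(n_bins) if np.any(bins == b))

      trials = np.linspace(4.0, 9.0, 2000)
      best = trials[np.argmin([binned_scatter(p) for p in trials])]
      print(f"recovered light-curve period: {best:.2f} h (true value {true_period} h)")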

  13. Protostellar hydrodynamics: Constructing and testing a spatially and temporally second-order accurate method. 2: Cartesian coordinates

    NASA Technical Reports Server (NTRS)

    Myhill, Elizabeth A.; Boss, Alan P.

    1993-01-01

    In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementations of high-order accuracy, finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.

  14. Accurate sub-millimetre rest frequencies for HOCO+ and DOCO+ ions

    NASA Astrophysics Data System (ADS)

    Bizzocchi, L.; Lattanzi, V.; Laas, J.; Spezzano, S.; Giuliano, B. M.; Prudenzano, D.; Endres, C.; Sipilä, O.; Caselli, P.

    2017-06-01

    Context. HOCO+ is a polar molecule that represents a useful proxy for its parent molecule CO2, which is not directly observable in the cold interstellar medium. This cation has been detected towards several lines of sight, including massive star forming regions, protostars, and cold cores. Despite the obvious astrochemical relevance, protonated CO2 and its deuterated variant, DOCO+, still lack an accurate spectroscopic characterisation. Aims: The aim of this work is to extend the study of the ground-state pure rotational spectra of HOCO+ and DOCO+ well into the sub-millimetre region. Methods: Ground-state transitions have been recorded in the laboratory using a frequency-modulation absorption spectrometer equipped with a free-space glow-discharge cell. The ions were produced in a low-density, magnetically confined plasma generated in a suitable gas mixture. The ground-state spectra of HOCO+ and DOCO+ have been investigated in the 213-967 GHz frequency range; 94 new rotational transitions have been detected. Additionally, 46 line positions taken from the literature have been accurately remeasured. Results: The newly measured lines have significantly enlarged the available data sets for HOCO+ and DOCO+, thus enabling the determination of highly accurate rotational and centrifugal distortion parameters. Our analysis shows that all HOCO+ lines with Ka ≥ 3 are perturbed by a ro-vibrational interaction that couples the ground state with the v5 = 1 vibrationally excited state. This resonance has been explicitly treated in the analysis in order to obtain molecular constants with clear physical meaning. Conclusions: The improved sets of spectroscopic parameters provide enhanced lists of very accurate sub-millimetre rest frequencies of HOCO+ and DOCO+ for astrophysical applications. These new data challenge a recent tentative identification of DOCO+ towards a pre-stellar core. Supplementary tables are only available at the CDS via anonymous ftp to http

  15. Accurate protein structure modeling using sparse NMR data and homologous structure information.

    PubMed

    Thompson, James M; Sgourakis, Nikolaos G; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L; Szyperski, Thomas; Montelione, Gaetano T; Baker, David

    2012-06-19

    While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining (1)H(N), (13)C, and (15)N backbone and (13)Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than those of models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventionally determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without the need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments.

  16. Sulfur dioxide in the atmosphere of Venus: 1. Sounding rocket observations

    NASA Technical Reports Server (NTRS)

    Mcclintock, William E.; Barth, Charles A.; Kohnert, Richard A.

    1994-01-01

    In this paper we present ultraviolet reflectance spectra obtained during two sounding rocket observations of Venus made during September 1988 and March 1991. We describe the sensitivity of the derived reflectance to instrument calibration and show that significant artifacts can appear in that spectrum as a result of using separate instruments to observe both the planetary radiance and the solar irradiance. We show that sulfur dioxide is the primary spectral absorber in the 190 - 230 nm region and that the range of altitudes probed by these wavelengths is very sensitive to incidence and emission angles. In a following paper, Na et al. (1994) show that sulfur monoxide features are also present in these data. Accurate identification and measurement of additional species require observations in which both the planetary radiance and the solar irradiance are measured with the same instrument. The instrument used for these observations is uniquely suited for obtaining large phase angle coverage and for studying transient atmospheric events on Venus because it can observe targets within 18 deg of the Sun, while Earth-orbiting instruments are restricted to solar elongation angles greater than or equal to 45 deg.

  17. Media and Information Literacy (MIL) in journalistic learning: strategies for accurately engaging with information and reporting news

    NASA Astrophysics Data System (ADS)

    Inayatillah, F.

    2018-01-01

    In the era of digital technology, there is abundant information from various sources. This ease of access needs to be accompanied by the ability to engage with information wisely; thus, media and information literacy is required. Preliminary observations showed that students of Universitas Negeri Surabaya majoring in Indonesian Literature who take the journalism course lack media and information literacy (MIL) skills and therefore need to be equipped with MIL. The method used is descriptive qualitative, comprising data collection, data analysis, and presentation of the analysis. Observation and documentation techniques were used to obtain data on MIL's impact on journalistic learning. This study aims to describe the important role of MIL for journalism students and its impact on journalistic learning for students of Indonesian Literature, batch 2014. The results indicate that journalism is an essential subject for students because it affects how a person perceives news reports. Through the reinforcement of the course, students can avoid hoaxes. MIL-based journalistic learning makes students more skillful at absorbing, processing, and presenting information accurately. The subject influences how students engage with information so that they can report news credibly.

  18. Accurate LC peak boundary detection for ¹⁶O/¹⁸O labeled LC-MS data.

    PubMed

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang S J; Zhang, Jianqiu Michelle

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements.

  19. Accurate registration of temporal CT images for pulmonary nodules detection

    NASA Astrophysics Data System (ADS)

    Yan, Jichao; Jiang, Luan; Li, Qiang

    2017-02-01

    Interpretation of temporal CT images could help the radiologists to detect some subtle interval changes in the sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. Firstly, affine transformation was applied in the segmented lung region to obtain global coarse registration images. Secondly, B-splines based free-form deformation (FFD) was used to refine the coarse registration images. Thirdly, Demons algorithm was performed to align the feature points extracted from the registered images in the second step and the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. The preliminary results showed that approximately 96.7% cases could obtain accurate registration based on subjective observation. The subtraction images of the reference images and the rigid and non-rigid registered images could effectively remove the normal structures (i.e. blood vessels) and retain the abnormalities (i.e. pulmonary nodules). This would be useful for the screening of lung cancer in our future study.
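
    The three-stage pipeline described above (global affine alignment, B-spline free-form deformation, then Demons refinement) can be sketched with off-the-shelf tools. The snippet below is not the authors' implementation; it is a minimal illustration assuming the SimpleITK package, with hypothetical file names and illustrative parameters, and it omits the lung segmentation and feature-point steps.

```python
import SimpleITK as sitk

# Hypothetical inputs: baseline (reference) and follow-up chest CT volumes.
fixed = sitk.ReadImage("baseline_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("followup_ct.nii.gz", sitk.sitkFloat32)

# Stage 1: global coarse registration with an affine transform.
init = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(init, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
affine_tx = reg.Execute(fixed, moving)
moving_affine = sitk.Resample(moving, fixed, affine_tx, sitk.sitkLinear, 0.0)

# Stage 2: B-spline free-form deformation (FFD) refinement.
bspline_init = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
reg2 = sitk.ImageRegistrationMethod()
reg2.SetMetricAsMeanSquares()
reg2.SetOptimizerAsLBFGSB(numberOfIterations=100)
reg2.SetInitialTransform(bspline_init, inPlace=True)
reg2.SetInterpolator(sitk.sitkLinear)
ffd_tx = reg2.Execute(fixed, moving_affine)
moving_ffd = sitk.Resample(moving_affine, fixed, ffd_tx, sitk.sitkLinear, 0.0)

# Stage 3: Demons refinement of the residual non-rigid deformation.
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.0)
displacement = demons.Execute(fixed, moving_ffd)
final_tx = sitk.DisplacementFieldTransform(displacement)
registered = sitk.Resample(moving_ffd, fixed, final_tx, sitk.sitkLinear, 0.0)

# Subtraction image: normal structures should largely cancel, leaving changes.
subtraction = sitk.Subtract(fixed, registered)
```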

  20. Accurate perception of negative emotions predicts functional capacity in schizophrenia.

    PubMed

    Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J

    2014-04-30

    Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions was not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989

  2. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and at the same time improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast & accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.

  3. Bilateral weighted radiographs are required for accurate classification of acromioclavicular separation: an observational study of 59 cases.

    PubMed

    Ibrahim, E F; Forrest, N P; Forester, A

    2015-10-01

    Misinterpretation of the Rockwood classification system for acromioclavicular joint (ACJ) separations has resulted in a trend towards using unilateral radiographs for grading. Further, the use of weighted views to 'unmask' a grade III injury has fallen out of favour. Recent evidence suggests that many radiographic grade III injuries represent only a partial injury to the stabilising ligaments. This study aimed to determine (1) whether accurate classification is possible on unilateral radiographs and (2) the efficacy of weighted bilateral radiographs in unmasking higher-grade injuries. Complete bilateral non-weighted and weighted sets of radiographs for patients presenting with an acromioclavicular separation over a 10-year period were analysed retrospectively, and they were graded I-VI according to Rockwood's criteria. Comparison was made between grading based on (1) a single antero-posterior (AP) view of the injured side, (2) bilateral non-weighted views and (3) bilateral weighted views. Radiographic measurements for cases that changed grade after weighted views were statistically compared to see if this could have been predicted beforehand. Fifty-nine sets of radiographs on 59 patients (48 male, mean age of 33 years) were included. Compared with unilateral radiographs, non-weighted bilateral comparison films resulted in a grade change for 44 patients (74.5%). Twenty-eight of 56 patients initially graded as I, II or III were upgraded to grade V and two of three initial grade V patients were downgraded to grade III. The addition of a weighted view further upgraded 10 patients to grade V. No grade II injury was changed to grade III and no injury of any severity was downgraded by a weighted view. Grade III injuries upgraded on weighted views had a significantly greater baseline median percentage coracoclavicular distance increase than those that were not upgraded (80.7% vs. 55.4%, p=0.015). However, no cut-off point for this value could be identified to predict an

  4. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    NASA Astrophysics Data System (ADS)

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    2018-04-01

    We propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the qT spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum qT as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the qT spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.
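
    For orientation (a schematic, textbook b-space resummation form, not an equation quoted from this paper), the resummed qT spectrum is usually written as an inverse transform over impact parameter,

```latex
\frac{d\sigma}{dq_T^{2}} \;\propto\; \int_{0}^{\infty} db\, b\, J_{0}(b\, q_T)\, W(b,Q),
\qquad
W(b,Q) \;=\; \exp\!\left[-\int_{b_{0}^{2}/b^{2}}^{Q^{2}} \frac{d\mu^{2}}{\mu^{2}}
\left( A\big(\alpha_s(\mu)\big)\,\ln\frac{Q^{2}}{\mu^{2}} + B\big(\alpha_s(\mu)\big) \right)\right],
```

    with b0 = 2 e^{-γE}. The Landau pole arises because the scale μ ~ b0/b reaches ΛQCD at large b; as summarized in the abstract, choosing the virtuality scale in momentum space sidesteps that region, and the remaining Bessel-function integral can be tabulated once and reused for all observables and kinematics.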

  5. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the qT spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum qT as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the qT spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  6. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    DOE PAGES

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    2018-04-27

    Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the qT spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum qT as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the qT spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  7. A New CCI ECV Release (v2.0) to Accurately Measure the Sea Level Change from space (1993-2015)

    NASA Astrophysics Data System (ADS)

    Legeais, Jean-Francois; Benveniste, Jérôme

    2017-04-01

    Accurate monitoring of the sea level is required to better understand its variability and changes. Sea level is one of the Essential Climate Variables (ECV) selected in the frame of the ESA Climate Change Initiative (CCI) program. It aims at providing a long-term homogeneous and accurate sea level record. The needs and feedback of the climate research community have been collected so that the development of the products is adapted to the users. A first version of the sea level ECV product has been generated during phase I of the project (2011-2013). Within phase II (2014-2016), the 15 partner consortium has prepared the production of a new reprocessed homogeneous and accurate altimeter sea level record which is now available (see http://www.esa-sealevel-cci.org/products ). New level 2 altimeter standards developed and tested within the project as well as external contributions have been identified, processed and evaluated by comparison with a reference for different altimeter missions (TOPEX/Poseidon, Jason-1 & 2, ERS-1 & 2, Envisat, GFO, SARAL/AltiKa and CryoSat-2). The main evolutions are associated with the wet troposphere correction (based on the GPD+ algorithm including inter calibration with respect to external sensors) but also to the orbit solutions (POE-E and GFZ15), the ERA-Interim based atmospheric corrections and the FES2014 ocean tide model. A new pole tide solution is used and anomalies are referenced to the MSS DTU15. The presentation will focus on the main achievements of the ESA CCI Sea Level project and on the description of the new SL_cci ECV release covering 1993-2015. The major steps required to produce the reprocessed 23 year climate time series will be described. The impacts of the selected level 2 altimeter standards on the SL_cci ECV have been assessed on different spatial scales (global, regional, mesoscale) and temporal scales (long-term, inter-annual, periodic signals). A significant improvement is observed compared to the current v1

  8. Recursive analytical solution describing artificial satellite motion perturbed by an arbitrary number of zonal terms

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An analytical first order solution has been developed which describes the motion of an artificial satellite perturbed by an arbitrary number of zonal harmonics of the geopotential. A set of recursive relations for the solution, deduced from recursive relations of the geopotential, was derived. The method of solution is based on von Zeipel's technique applied to a canonical set of two-body elements in the extended phase space which incorporates the true anomaly as a canonical element. The elements are of Poincaré type, that is, they are regular for vanishing eccentricities and inclinations. Numerical results show that this solution is accurate to within a few meters after 500 revolutions.
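
    For context (standard celestial-mechanics background, not reproduced from the report), the zonal part of the geopotential and the Legendre recursion that makes a recursive formulation of the solution natural are

```latex
U(r,\phi) \;=\; \frac{\mu}{r}\left[\, 1 \;-\; \sum_{n \ge 2} J_{n} \left(\frac{R_{e}}{r}\right)^{\! n} P_{n}(\sin\phi) \right],
\qquad
n\,P_{n}(x) \;=\; (2n-1)\,x\,P_{n-1}(x) \;-\; (n-1)\,P_{n-2}(x),
```

    with P0(x) = 1 and P1(x) = x, where μ is the gravitational parameter, Re the equatorial radius, φ the latitude, and Jn the zonal harmonic coefficients.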

  9. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data of PEF tests performed by the patients in their own homes and places of work. Unfortunately there are high error rates in data produced and recorded by the patient, most of these are transcription errors and some patients falsify their records. The PEF measurement itself is not effort independent, the data produced depending on the way in which the patient performs the test. Patients are taught how to perform the test giving maximal effort to the expiration being measured. If the measurement is performed incorrectly then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. In investigating current methods of measuring PEF and other pulmonary quantities a greater understanding was obtained of the limitations of current methods of measurement, and quantities being measured.(ABSTRACT TRUNCATED AT 250 WORDS)

  10. Persistent Homology to describe Solid and Fluid Structures during Multiphase Flow

    NASA Astrophysics Data System (ADS)

    Herring, A. L.; Robins, V.; Liu, Z.; Armstrong, R. T.; Sheppard, A.

    2017-12-01

    The question of how to accurately and effectively characterize essential fluid and solid distributions and structures is a long-standing topic within the field of porous media and fluid transport. For multiphase flow applications, considerable research effort has been made to describe fluid distributions under a range of conditions; including quantification of saturation levels, fluid-fluid pressure differences and interfacial areas, and fluid connectivity. Recent research has effectively used topological metrics to describe pore space and fluid connectivity, with researchers demonstrating links between pore-scale nonwetting phase topology to fluid mobilization and displacement mechanisms, relative permeability, fluid flow regimes, and thermodynamic models of multiphase flow. While topology is clearly a powerful tool to describe fluid distribution, topological metrics by definition provide information only on the connectivity of a phase, not its geometry (shape or size). Physical flow characteristics, e.g. the permeability of a fluid phase within a porous medium, are dependent on the connectivity of the pore space or fluid phase as well as the size of connections. Persistent homology is a technique which provides a direct link between topology and geometry via measurement of topological features and their persistence from the signed Euclidean distance transform of a segmented digital image (Figure 1). We apply persistent homology analysis to measure the occurrence and size of pore-scale topological features in a variety of sandstones, for both the dry state and the nonwetting phase fluid during two-phase fluid flow (drainage and imbibition) experiments, visualized with 3D X-ray microtomography. The results provide key insights into the dominant topological features and length scales of a media which control relevant field-scale engineering properties such as fluid trapping, absolute permeability, and relative permeability.
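
    As a concrete, deliberately simplified illustration of the idea, the sketch below computes 0-dimensional persistence pairs of the signed Euclidean distance transform of a toy binary "pore space" image using a union-find sweep. This is not the analysis pipeline used in the work above; it only shows how birth/death values of connected features can be read off a distance transform, assuming NumPy and SciPy.

```python
import numpy as np
from scipy import ndimage

def zeroth_persistence(field):
    """0-dimensional persistence pairs (birth, death) for the sublevel-set
    filtration of a 2-D scalar field, via a union-find sweep (elder rule:
    when two components merge, the younger one dies)."""
    h, w = field.shape
    flat = field.ravel()
    order = np.argsort(flat)                 # pixels from lowest to highest value
    parent = np.full(h * w, -1, dtype=int)   # -1: pixel not yet in the filtration
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path compression
            i = parent[i]
        return i

    for idx in order:
        parent[idx] = idx
        r, c = divmod(idx, w)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < h and 0 <= cc < w):
                continue
            j = rr * w + cc
            if parent[j] == -1:
                continue                     # neighbour enters the filtration later
            a, b = find(idx), find(j)
            if a == b:
                continue
            if flat[a] > flat[b]:            # keep the older component as root
                a, b = b, a
            pairs.append((flat[b], flat[idx]))   # younger component: (birth, death)
            parent[b] = a
    return pairs                             # the global minimum never dies (essential class)

# Toy stand-in for a segmented pore-space image: pore centres are the deepest
# minima of the signed distance transform, and throats set the merge levels.
rng = np.random.default_rng(0)
pores = rng.random((64, 64)) > 0.55
signed_dist = ndimage.distance_transform_edt(~pores) - ndimage.distance_transform_edt(pores)
diagram = zeroth_persistence(signed_dist)
print(sorted(diagram, key=lambda p: p[0] - p[1])[:5])   # five most persistent features
```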

  11. Calculations of steady and transient channel flows with a time-accurate L-U factorization scheme

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1991-01-01

    Calculations of steady and unsteady, transonic, turbulent channel flows with a time accurate, lower-upper (L-U) factorization scheme are presented. The L-U factorization scheme is formally second-order accurate in time and space, and it is an extension of the steady state flow solver (RPLUS) used extensively to solve compressible flows. A time discretization method and the implementation of a consistent boundary condition specific to the L-U factorization scheme are also presented. The turbulence is described by the Baldwin-Lomax algebraic turbulence model. The present L-U scheme yields stable numerical results with the use of much smaller artificial dissipations than those used in the previous steady flow solver for steady and unsteady channel flows. The capability to solve time dependent flows is shown by solving very weakly excited and strongly excited, forced oscillatory, channel flows.

  12. RXTE Observation of Cygnus X-1. Report 2; Timing Analysis

    NASA Technical Reports Server (NTRS)

    Nowak, Michael A.; Vaughan, Brian A.; Wilms, Joern; Dove, James B.; Begelman, Mitchell C.

    1998-01-01

    We present timing analysis for a Rossi X-ray Timing Explorer (RXTE) observation of Cygnus X-1 in its hard/low state. This was the first RXTE observation of Cyg X-1 taken after it transited back to this state from its soft/high state. RXTE's large effective area, superior timing capabilities, and ability to obtain long, uninterrupted observations have allowed us to obtain measurements of the power spectral density (PSD), coherence function, and Fourier time lags to a decade lower in frequency and half a decade higher in frequency than typically was achieved with previous instruments. Notable aspects of our observations include a weak 0.005 Hz feature in the PSD coincident with a coherence recovery; a 'hardening' of the high-frequency PSD with increasing energy; a broad frequency range measurement of the coherence function, revealing rollovers from unity coherence at both low and high frequency; and an accurate determination of the Fourier time lags over two and a half decades in frequency. As has been noted in previous similar observations, the time delay is approximately proportional to f^(-0.7), and at a fixed Fourier frequency the time delay of the hard X-rays compared to the softest energy channel tends to increase logarithmically with energy. Curiously, the 0.01-0.2 Hz coherence between the highest and lowest energy bands is actually slightly greater than the coherence between the second highest and lowest energy bands. We carefully describe all of the analysis techniques used in this paper, and we make comparisons of the data to general theoretical expectations. In a companion paper, we make specific comparisons to a Compton corona model that we have successfully used to describe the energy spectral data from this observation.
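
    The Fourier time lags mentioned above are conventionally obtained from the phase of the segment-averaged cross spectrum between two energy bands. The sketch below is not the authors' pipeline; it is a minimal NumPy illustration on synthetic light curves in which the "hard" band is a delayed copy of the "soft" band.

```python
import numpy as np

def fourier_time_lags(soft, hard, dt, nseg=1024):
    """Average the cross spectrum over segments and convert its phase to a
    time lag, lag(f) = phase(f) / (2*pi*f). With this convention a positive
    lag means the hard band lags the soft band."""
    nchunks = len(soft) // nseg
    cross = np.zeros(nseg // 2, dtype=complex)
    for k in range(nchunks):
        s = np.fft.rfft(soft[k * nseg:(k + 1) * nseg])[1:nseg // 2 + 1]
        h = np.fft.rfft(hard[k * nseg:(k + 1) * nseg])[1:nseg // 2 + 1]
        cross += s * np.conj(h)
    cross /= nchunks
    freqs = np.fft.rfftfreq(nseg, d=dt)[1:nseg // 2 + 1]
    lags = np.angle(cross) / (2 * np.pi * freqs)
    return freqs, lags

# Synthetic example: the "hard" curve is the "soft" curve delayed by 0.05 s.
dt, delay = 1.0 / 128, 0.05
t = np.arange(2 ** 16) * dt
soft = np.random.poisson(100 + 20 * np.sin(2 * np.pi * 3.0 * t)).astype(float)
hard = np.roll(soft, int(delay / dt)) + np.random.poisson(5, size=t.size)
freqs, lags = fourier_time_lags(soft, hard.astype(float), dt)
print(lags[np.argmin(np.abs(freqs - 3.0))])   # should be close to 0.05 s at 3 Hz
```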

  13. Extracting accurate and precise topography from LROC narrow angle camera stereo observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. At this date about 2% of the lunar surface is imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our

  14. Towards First Principles-Based Prediction of Highly Accurate Electrochemical Pourbaix Diagrams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Zhenhua; Chan, Maria K. Y.; Zhao, Zhi-Jian

    2015-08-13

    Electrochemical potential/pH (Pourbaix) diagrams underpin many aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such diagrams, inherent errors in the description of transition metal (hydroxy)oxides, together with neglect of van der Waals interactions, have limited the reliability of such predictions for even the simplest pure metal bulk compounds, and corresponding predictions for more complex alloy or surface structures are even more challenging. In the present work, through synergistic use of a Hubbard U correction, a state-of-the-art dispersion correction, and a water-based bulk reference state for the calculations, these errors are systematically corrected. The approach describes the weak binding that occurs between hydroxyl-containing functional groups in certain compounds in Pourbaix diagrams, corrects for self-interaction errors in transition metal compounds, and reduces residual errors on oxygen atoms by preserving a consistent oxidation state between the reference state, water, and the relevant bulk phases. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, binary, and ternary oxides, where the corresponding thermodynamics of redox and (de)hydration are described with standard errors of 0.04 eV per (reaction) formula unit. The approach further preserves accurate descriptions of the overall thermodynamics of electrochemically-relevant bulk reactions, such as water formation, which is an essential condition for facilitating accurate analysis of reaction energies for electrochemical processes on surfaces. The overall generality and transferability of the scheme suggests that it may find useful application in the construction of a broad array of electrochemical phase diagrams, including

  15. Highly accurate surface maps from profilometer measurements

    NASA Astrophysics Data System (ADS)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single line trace profilometer which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak to valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small effect, choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  16. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    PubMed Central

    Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.

    2015-01-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results
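
    The modified "standard" model described above - target cells, a non-productive (transitioning) class of infected cells, productively infected cells, and free virus - can be written as four ordinary differential equations. The sketch below is a generic illustration of that structure with purely illustrative parameter values (not the values fitted in the paper), assuming SciPy's ODE integrator; it also does not capture the budding-versus-bursting distinction or the dose-dependence discussed in the abstract.

```python
import numpy as np
from scipy.integrate import solve_ivp

def extended_standard_model(t, y, beta, k, delta, p, c):
    """'Standard' viral dynamics model extended with an eclipse class E:
    target cells T are infected at rate beta*T*V, spend on average 1/k days
    in a non-productive state, then produce virus at rate p per cell."""
    T, E, I, V = y
    dT = -beta * T * V
    dE = beta * T * V - k * E
    dI = k * E - delta * I
    dV = p * I - c * V
    return [dT, dE, dI, dV]

# Illustrative parameters only (per day), not fitted values.
params = dict(beta=1e-7, k=1.0, delta=0.5, p=2000.0, c=20.0)
y0 = [1e6, 0.0, 0.0, 1e-3]          # target cells plus a small viral inoculum
sol = solve_ivp(extended_standard_model, (0, 30), y0,
                args=tuple(params.values()), dense_output=True, rtol=1e-8)
t = np.linspace(0, 30, 301)
viral_load = sol.sol(t)[3]

# Exponential growth rate during the ramp-up phase (t = 2-8 days here),
# analogous to the observed rate of SIV load increase used to constrain the model.
growth = np.gradient(np.log(viral_load[20:80]), t[20:80]).mean()
print(f"early log growth rate ≈ {growth:.2f} per day")
```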

  17. [Accurate 3D free-form registration between fan-beam CT and cone-beam CT].

    PubMed

    Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun

    2012-06-01

    Because the X-ray scatters, the CT numbers in cone-beam CT cannot exactly correspond to the electron densities. This, therefore, results in registration error when the intensity-based registration algorithm is used to register planning fan-beam CT and cone-beam CT. In order to reduce the registration error, we have developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is described as a minimization of energy functional. Through the calculus of variations and Gauss-Seidel finite difference method, we derived the iterative formula of the deformable registration. The algorithm was implemented by GPU through OpenCL framework, with which the registration time was greatly reduced. Our experimental results showed that the proposed gradient-based registration algorithm could register more accurately the clinical cone-beam CT and fan-beam CT images compared with the intensity-based algorithm. The GPU-accelerated algorithm meets the real-time requirement in the online adaptive radiotherapy.

  18. Accurate core position control in polymer optical waveguides using the Mosquito method for three-dimensional optical wiring

    NASA Astrophysics Data System (ADS)

    Date, Kumi; Ishigure, Takaaki

    2017-02-01

    Polymer optical waveguides with graded-index (GI) circular cores are fabricated using the Mosquito method, in which the positions of parallel cores are accurately controlled. Such an accurate arrangement is of great importance for a high optical coupling efficiency with other optical components such as fiber ribbons. In the Mosquito method that we developed, a viscous liquid core monomer is dispensed via a syringe needle into another liquid monomer that forms the cladding. Hence, the core positions are likely to shift during or after the dispensing process due to several factors. We investigate the factors specifically affecting the core height. When the core and cladding monomers are selected appropriately, the effect of gravity is negligible, so the core height remains uniform, resulting in accurate core heights. The height variation is kept within ±2 micrometers across the 12 cores. Meanwhile, a larger shift in the core height is observed when the needle tip is positioned farther from the substrate surface. One possible reason for this needle-tip height dependence is asymmetric volume contraction during monomer curing. We find a linear relationship between the original needle-tip height and the observed core height. This relationship is implemented in the needle-scan program to stabilize the core height in different layers. Finally, the core heights are accurately controlled even when the cores are aligned at various heights. These results indicate that the Mosquito method enables the fabrication of waveguides in which the cores are three-dimensionally aligned with high positional accuracy.

  19. Accurate potential drop sheet resistance measurements of laser-doped areas in semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinrich, Martin, E-mail: mh.seris@gmail.com; NUS Graduate School for Integrative Science and Engineering, National University of Singapore, Singapore 117456; Kluska, Sven

    2014-10-07

    It is investigated how potential drop sheet resistance measurements of areas formed by laser-assisted doping in crystalline Si wafers are affected by typically occurring experimental factors like sample size, inhomogeneities, surface roughness, or coatings. Measurements are obtained with a collinear four point probe setup and a modified transfer length measurement setup to measure sheet resistances of laser-doped lines. Inhomogeneities in doping depth are observed from scanning electron microscope images and electron beam induced current measurements. It is observed that influences from sample size, inhomogeneities, surface roughness, and coatings can be neglected if certain preconditions are met. Guidelines are given on how to obtain accurate potential drop sheet resistance measurements on laser-doped regions.
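
    For reference (textbook relations, not results from this paper): on a thin sheet that is large compared with the probe spacing, the collinear four point probe gives the sheet resistance directly from the measured voltage and current, while the transfer length measurement (TLM) variant extracts the sheet resistance of a doped line of width W from the slope of total resistance versus contact separation d,

```latex
R_{\mathrm{s}} \;=\; \frac{\pi}{\ln 2}\,\frac{V}{I} \;\approx\; 4.53\,\frac{V}{I},
\qquad\qquad
R_{\mathrm{TLM}}(d) \;=\; \frac{R_{\mathrm{s}}}{W}\,d \;+\; 2R_{\mathrm{c}},
```

    where Rc is the contact resistance. Finite sample size, edges, and inhomogeneity enter through multiplicative geometric correction factors to the first expression, which is why the preconditions mentioned in the abstract matter.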

  20. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    NASA Astrophysics Data System (ADS)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate, analytical, approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  1. SPEX: a highly accurate spectropolarimeter for atmospheric aerosol characterization

    NASA Astrophysics Data System (ADS)

    Rietjens, J. H. H.; Smit, J. M.; di Noia, A.; Hasekamp, O. P.; van Harten, G.; Snik, F.; Keller, C. U.

    2017-11-01

    Global characterization of atmospheric aerosol in terms of the microphysical properties of the particles is essential for understanding the role of aerosols in Earth's climate [1]. For more accurate predictions of future climate, the uncertainties of the net radiative forcing of aerosols in the Earth's atmosphere must be reduced [2]. Essential parameters that are needed as input in climate models are not only the aerosol optical thickness (AOT), but also particle-specific properties such as the aerosol mean size, the single scattering albedo (SSA) and the complex refractive index. The latter can be used to discriminate between absorbing and non-absorbing aerosol types, and between natural and anthropogenic aerosol. Classification of aerosol types is also very important for air-quality and health-related issues [3]. Remote sensing from an orbiting satellite platform is the only way to globally characterize atmospheric aerosol at a relevant timescale of 1 day [4]. One of the few methods that can be employed for measuring the microphysical properties of aerosols is to observe both the radiance and the degree of linear polarization of sunlight scattered in the Earth's atmosphere under different viewing directions [5][6][7]. The requirement on the absolute accuracy of the degree of linear polarization PL is very stringent: the absolute error in PL must be smaller than 0.001 + 0.005·PL in order to retrieve aerosol parameters with sufficient accuracy to advance climate modelling and to enable discrimination of aerosol types based on their refractive index for air-quality studies [6][7]. In this paper we present the SPEX instrument, a multi-angle spectropolarimeter that can comply with the polarimetric accuracy needed for characterizing aerosols in the Earth's atmosphere. We describe the implementation of spectral polarization modulation in a prototype instrument of SPEX and show results of ground-based measurements from which aerosol microphysical properties are retrieved.

  2. Isolation of Candida auris from 9 patients in Central America: Importance of accurate diagnosis and susceptibility testing.

    PubMed

    Araúz, Ana Belen; Caceres, Diego H; Santiago, Erika; Armstrong, Paige; Arosemena, Susan; Ramos, Carolina; Espinosa-Bode, Andres; Borace, Jovanna; Hayer, Lizbeth; Cedeño, Israel; Jackson, Brendan R; Sosa, Nestor; Berkow, Elizabeth L; Lockhart, Shawn R; Rodriguez-French, Amalia; Chiller, Tom

    2018-01-01

    Candida auris is an emerging multidrug-resistant (MDR) fungus associated with invasive infections and high mortality. This report describes 9 patients from whom C. auris was isolated at a hospital in Panama City, Panama, the first such cases in Central America, and highlights the challenges of accurate identification and methods for susceptibility testing. © 2017 Blackwell Verlag GmbH.

  3. Masses of the components of SB2 binaries observed with Gaia - IV. Accurate SB2 orbits for 14 binaries and masses of three binaries*

    NASA Astrophysics Data System (ADS)

    Kiefer, F.; Halbwachs, J.-L.; Lebreton, Y.; Soubiran, C.; Arenou, F.; Pourbaix, D.; Famaey, B.; Guillout, P.; Ibata, R.; Mazeh, T.

    2018-02-01

    The orbital motion of non-contact double-lined spectroscopic binaries (SB2s), with periods of a few tens of days to several years, holds unique, accurate information on individual stellar masses, which only long-term monitoring can unlock. The combination of radial velocity measurements from high-resolution spectrographs and astrometric measurements from high-precision interferometers allows the derivation of SB2 component masses down to the percent precision. Since 2010, we have observed a large sample of SB2s with the SOPHIE spectrograph at the Observatoire de Haute-Provence, aiming at the derivation of orbital elements with sufficient accuracy to obtain masses of components with relative errors as low as 1 per cent when the astrometric measurements of the Gaia satellite are taken into account. In this paper, we present the results from 6 yr of observations of 14 SB2 systems with periods ranging from 33 to 4185 days. Using the TODMOR algorithm, we computed radial velocities from the spectra and then derived the orbital elements of these binary systems. The minimum masses of the 28 stellar components are then obtained with an average sample accuracy of 1.0 ± 0.2 per cent. Combining the radial velocities with existing interferometric measurements, we derived the masses of the primary and secondary components of HIP 61100, HIP 95995 and HIP 101382 with relative errors for components (A,B) of, respectively, (2.0, 1.7) per cent, (3.7, 3.7) per cent and (0.2, 0.1) per cent. Using the CESAM2K stellar evolution code, we constrained the initial He abundance, age and metallicity for HIP 61100 and HIP 95995.

  4. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSIs) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain means increasing image quality and improving the performance of all subsequent tasks carried out by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In the context of large-scale industrial VHR image production, the selected denoising method should optimize accuracy and robustness, preserve relevant information and saliency, and be fast given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non-Local Bayes (NLB), which we propose to adapt and optimize for VHR RSIs. This method is well suited to mass production thanks to its trade-off between accuracy and computational complexity, the best among state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and can therefore be denoised with the same noise estimate. In this paper, we describe in detail the algorithm's operation and performance, and analyze its parameter sensitivity on various typical real areas observed in VHR RSIs.

  5. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' 10 W either side of average power in the negative feedback condition (fastest) than the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.

  6. Advective transport observations with MODPATH-OBS--documentation of the MODPATH observation process

    USGS Publications Warehouse

    Hanson, R.T.; Kauffman, L.K.; Hill, M.C.; Dickinson, J.E.; Mehl, S.W.

    2013-01-01

    The MODPATH-OBS computer program described in this report is designed to calculate simulated equivalents for observations related to advective groundwater transport that can be represented in a quantitative way by using simulated particle-tracking data. The simulated equivalents supported by MODPATH-OBS are (1) distance from a source location at a defined time, or proximity to an observed location; (2) time of travel from an initial location to defined locations, areas, or volumes of the simulated system; (3) concentrations used to simulate groundwater age; and (4) percentages of water derived from contributing source areas. Although particle tracking only simulates the advective component of conservative transport, effects of non-conservative processes such as retardation can be approximated through manipulation of the effective-porosity value used to calculate velocity based on the properties of selected conservative tracers. This program can also account for simple decay or production, but it cannot account for diffusion. Dispersion can be represented through direct simulation of subsurface heterogeneity and the use of many particles. MODPATH-OBS acts as a postprocessor to MODPATH, so that the sequence of model runs generally required is MODFLOW, MODPATH, and MODPATH-OBS. The versions of MODFLOW and MODPATH that support the version of MODPATH-OBS presented in this report are MODFLOW-2005 or MODFLOW-LGR, and MODPATH-LGR. MODFLOW-LGR is derived from MODFLOW-2005, MODPATH 5, and MODPATH 6 and supports local grid refinement. MODPATH-LGR is derived from MODPATH 5. It supports the forward and backward tracking of particles through locally refined grids and provides the output needed for MODPATH-OBS. For a single grid and no observations, MODPATH-LGR results are equivalent to MODPATH 5. MODPATH-LGR and MODPATH-OBS simulations can use nearly all of the capabilities of MODFLOW-2005 and MODFLOW-LGR; for example, simulations may be steady-state, transient, or a combination

  7. An algorithm to extract more accurate stream longitudinal profiles from unfilled DEMs

    NASA Astrophysics Data System (ADS)

    Byun, Jongmin; Seong, Yeong Bae

    2015-08-01

    Morphometric features observed from a stream longitudinal profile (SLP) reflect channel responses to lithological variation and changes in uplift or climate; therefore, they constitute essential indicators in studies of the dynamics between tectonics, climate, and surface processes. The widespread availability of digital elevation models (DEMs) and their processing enable semi-automatic extraction of SLPs as well as additional stream profile parameters, thus reducing the time spent extracting them and simultaneously allowing regional-scale studies of SLPs. However, careful consideration is required to extract SLPs directly from a DEM, because the DEM must be altered by a depression-filling process to ensure the continuity of flows across it. Such alteration inevitably introduces distortions to the SLP, such as stair steps, bias of elevation values, and inaccurate stream paths. This paper proposes a new algorithm, called the maximum depth tracing algorithm (MDTA), to extract more accurate SLPs from depression-unfilled DEMs. The MDTA supposes that depressions in DEMs are not necessarily artifacts to be removed, and that elevation values within them are useful for representing the real landscape more accurately. To ensure the continuity of flows even across the unfilled DEM, the MDTA first determines the outlet of each depression and then reverses the flow directions of the cells on the line of maximum depth within each depression, beginning from the outlet and proceeding toward the sink. It also calculates flow accumulation without disruption across the unfilled DEM. Comparative analysis with the profiles extracted by the hydrologic functions implemented in ArcGIS™ was performed to illustrate the benefits of the MDTA. It shows that the MDTA provides more accurate stream paths in depression areas, and consequently reduces distortions of the SLPs derived from those paths, such as exaggerated elevation values and negatively biased slopes that are commonly observed in the SLPs

  8. Observing Clonal Dynamics across Spatiotemporal Axes: A Prelude to Quantitative Fitness Models for Cancer.

    PubMed

    McPherson, Andrew W; Chan, Fong Chun; Shah, Sohrab P

    2018-02-01

    The ability to accurately model evolutionary dynamics in cancer would allow for prediction of progression and response to therapy. As a prelude to quantitative understanding of evolutionary dynamics, researchers must gather observations of in vivo tumor evolution. High-throughput genome sequencing now provides the means to profile the mutational content of evolving tumor clones from patient biopsies. Together with the development of models of tumor evolution, reconstructing evolutionary histories of individual tumors generates hypotheses about the dynamics of evolution that produced the observed clones. In this review, we provide a brief overview of the concepts involved in predicting evolutionary histories, and provide a workflow based on bulk and targeted-genome sequencing. We then describe the application of this workflow to time series data obtained for transformed and progressed follicular lymphomas (FL), and contrast the observed evolutionary dynamics between these two subtypes. We next describe results from a spatial sampling study of high-grade serous (HGS) ovarian cancer, propose mechanisms of disease spread based on the observed clonal mixtures, and provide examples of diversification through subclonal acquisition of driver mutations and convergent evolution. Finally, we state implications of the techniques discussed in this review as a necessary but insufficient step on the path to predictive modelling of disease dynamics. Copyright © 2018 Cold Spring Harbor Laboratory Press; all rights reserved.

  9. Observations of accreting pulsars

    NASA Technical Reports Server (NTRS)

    Prince, Thomas A.; Bildsten, Lars; Chakrabarty, Deepto; Wilson, Robert B.; Finger, Mark H.

    1994-01-01

    We discuss recent observations of accreting binary pulsars with the all-sky BATSE instrument on the Compton Gamma Ray Observatory. BATSE has detected and studied nearly half of the known accreting pulsar systems. Continuous timing studies over a two-year period have yielded accurate orbital parameters for 9 of these systems, as well as new insights into long-term accretion torque histories.

  10. Using data mining techniques to characterize participation in observational studies.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2016-12-01

    Data mining techniques are gaining in popularity among health researchers for an array of purposes, such as improving diagnostic accuracy, identifying high-risk patients and extracting concepts from unstructured data. In this paper, we describe how these techniques can be applied to another area in the health research domain: identifying characteristics of individuals who do and do not choose to participate in observational studies. In contrast to randomized studies where individuals have no control over their treatment assignment, participants in observational studies self-select into the treatment arm and therefore have the potential to differ in their characteristics from those who elect not to participate. These differences may explain part, or all, of the difference in the observed outcome, making it crucial to assess whether there is differential participation based on observed characteristics. As compared to traditional approaches to this assessment, data mining offers a more precise understanding of these differences. To describe and illustrate the application of data mining in this domain, we use data from a primary care-based medical home pilot programme and compare the performance of commonly used classification approaches - logistic regression, support vector machines, random forests and classification tree analysis (CTA) - in correctly classifying participants and non-participants. We find that CTA is substantially more accurate than the other models. Moreover, unlike the other models, CTA offers transparency in its computational approach, ease of interpretation via the decision rules produced and provides statistical results familiar to health researchers. Beyond their application to research, data mining techniques could help administrators to identify new candidates for participation who may most benefit from the intervention. © 2016 John Wiley & Sons, Ltd.
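
    As an illustration of the kind of head-to-head comparison described above, the sketch below scores the named model families on a synthetic participation dataset with cross-validation. The dataset is invented (scikit-learn's make_classification stands in for the medical home pilot data), and an ordinary decision tree is used only as a rough stand-in for classification tree analysis (CTA), which is a distinct optimal-discriminant method.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical cohort: rows are individuals, columns are observed characteristics,
        # y = 1 if the individual self-selected into the programme.
        X, y = make_classification(n_samples=2000, n_features=12, n_informative=5, random_state=0)

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "support vector machine": SVC(),
            "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
            "decision tree (CTA stand-in)": DecisionTreeClassifier(max_depth=4, random_state=0),
        }
        for name, model in models.items():
            acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
            print(f"{name:30s} mean CV accuracy = {acc:.3f}")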

  11. Highly accurate pulse-per-second timing distribution over optical fibre network using VCSEL side-mode injection

    NASA Astrophysics Data System (ADS)

    Wassin, Shukree; Isoe, George M.; Gamatham, Romeo R. G.; Leitch, Andrew W. R.; Gibbon, Tim B.

    2017-01-01

    Precise and accurate timing signals distributed between a centralized location and several end-users are widely used in both metro-access and speciality networks for Coordinated Universal Time (UTC), GPS satellite systems, banking, very long baseline interferometry, and science projects such as the SKA radio telescope. Such systems utilize time and frequency technology to ensure phase coherence among data signals distributed across an optical fibre network. For accurate timing, precise time intervals should be measured between successive pulses. In this paper we describe a novel, all-optical method for quantifying one-way propagation times and phase perturbations in the fibre length, using pulse-per-second (PPS) signals. The approach utilizes side-mode injection of a 1550 nm 10 Gbps vertical cavity surface emitting laser (VCSEL) at the remote end. A 125 μs one-way time of flight was accurately measured for 25 km of G.655 fibre. Since the approach is all-optical, it avoids measurement inaccuracies introduced by electro-optical conversion phase delays. Furthermore, the implementation uses cost-effective VCSEL technology and is suited to a flexible range of network architectures, supporting a number of end-users conducting measurements at the remote end.
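
    As a quick plausibility check on the one-way time of flight quoted above, the sketch below computes the propagation delay of a PPS pulse over a 25 km span; the fibre group index is an assumed round number, not a value taken from the paper.

        c = 299_792_458.0     # speed of light in vacuum, m/s
        n_group = 1.5         # assumed effective group index of the fibre
        length_m = 25_000.0   # fibre span, m

        t_us = length_m * n_group / c * 1e6
        print(f"one-way time of flight ~ {t_us:.0f} microseconds")   # ~125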

  12. Multiple-frequency continuous wave ultrasonic system for accurate distance measurement

    NASA Astrophysics Data System (ADS)

    Huang, C. F.; Young, M. S.; Li, Y. C.

    1999-02-01

    A highly accurate multiple-frequency continuous wave ultrasonic range-measuring system for use in air is described. The proposed system uses a method heretofore applied to radio frequency distance measurement but not to air-based ultrasonic systems. The method presented here is based upon the comparative phase shifts generated by three continuous ultrasonic waves of different but closely spaced frequencies. In the test embodiment to confirm concept feasibility, two low-cost 40 kHz ultrasonic transducers are set face to face and used to transmit and receive ultrasound. Individual frequencies are transmitted serially, each generating its own phase shift. For any given frequency, the transmitter/receiver distance modulates the phase shift between the transmitted and received signals. Comparison of the phase shifts allows a highly accurate evaluation of target distance. A single-chip microcomputer-based multiple-frequency continuous wave generator and phase detector was designed to record and compute the phase shift information and the resulting distance, which is then sent to either an LCD or a PC. The PC is necessary only for calibration of the system, which can be run independently after calibration. Experiments were conducted to test the performance of the whole system. Experimentally, ranging accuracy was found to be within ±0.05 mm, with a range of over 1.5 m. The main advantages of this ultrasonic range measurement system are high resolution, low cost, narrow bandwidth requirements, and ease of implementation.
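
    The phase-comparison principle can be sketched numerically: a single ~40 kHz tone gives sub-millimetre sensitivity but is ambiguous beyond one wavelength (~8.7 mm), while the phase difference between two closely spaced tones behaves like a much longer "synthetic" wavelength covering the full operating range. The frequencies, sound speed, and target distance below are illustrative assumptions, and only two of the three tones are used for brevity.

        import numpy as np

        c = 346.0                        # assumed speed of sound in air, m/s
        f1, f2 = 40.0e3, 40.1e3          # two closely spaced frequencies, Hz (assumed values)
        d_true = 1.2345                  # true transmitter-receiver distance, m

        def phase(f, d):
            """Phase shift between transmitted and received signal, wrapped to [0, 2*pi)."""
            return (2 * np.pi * f * d / c) % (2 * np.pi)

        p1, p2 = phase(f1, d_true), phase(f2, d_true)

        # Coarse, unambiguous estimate from the phase difference: the synthetic
        # wavelength c / (f2 - f1) ~ 3.46 m exceeds the 1.5 m operating range.
        d_coarse = ((p2 - p1) % (2 * np.pi)) / (2 * np.pi) * c / (f2 - f1)

        # Refine by resolving the whole-cycle count at f1 from the coarse estimate.
        lam1 = c / f1
        m = round(d_coarse / lam1 - p1 / (2 * np.pi))
        d_fine = (m + p1 / (2 * np.pi)) * lam1
        print(f"coarse: {d_coarse:.4f} m, refined: {d_fine:.6f} m")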

  13. Accurate and scalable social recommendation using mixed-membership stochastic block models.

    PubMed

    Godoy-Lorite, Antonia; Guimerà, Roger; Moore, Cristopher; Sales-Pardo, Marta

    2016-12-13

    With increasing amounts of information available, modeling and predicting user preferences-for books or articles, for example-are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users' ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user's and item's groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets.
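
    The prediction step of such a mixed-membership model can be sketched as follows: once the group memberships and the per-group-pair rating distributions have been inferred (in the paper, with an expectation-maximization algorithm), the predicted rating distribution for a user-item pair is a membership-weighted sum over group pairs. Every parameter value below is an invented placeholder.

        import numpy as np

        K, L, R = 3, 2, 5                     # user groups, item groups, rating levels 1..R
        rng = np.random.default_rng(0)

        theta_u = np.array([0.7, 0.2, 0.1])   # user u's mixed membership over user groups
        eta_i = np.array([0.4, 0.6])          # item i's mixed membership over item groups
        p = rng.dirichlet(np.ones(R), size=(K, L))   # p[k, l, :] = rating distribution

        # Predicted rating distribution: sum over group pairs, weighted by memberships.
        pred = np.einsum("k,l,klr->r", theta_u, eta_i, p)
        expected_rating = np.dot(np.arange(1, R + 1), pred)
        print("p(r):", np.round(pred, 3), " expected rating:", round(expected_rating, 2))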

  14. Accurate and scalable social recommendation using mixed-membership stochastic block models

    PubMed Central

    Godoy-Lorite, Antonia; Moore, Cristopher

    2016-01-01

    With increasing amounts of information available, modeling and predicting user preferences—for books or articles, for example—are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users’ ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user’s and item’s groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets. PMID:27911773

  15. Accurate Ray-tracing of Realistic Neutron Star Atmospheres for Constraining Their Parameters

    NASA Astrophysics Data System (ADS)

    Vincent, Frederic H.; Bejger, Michał; Różańska, Agata; Straub, Odele; Paumard, Thibaut; Fortin, Morgane; Madej, Jerzy; Majczyna, Agnieszka; Gourgoulhon, Eric; Haensel, Paweł; Zdunik, Leszek; Beldycki, Bartosz

    2018-03-01

    Thermal-dominated X-ray spectra of neutron stars in quiescent, transient X-ray binaries and neutron stars that undergo thermonuclear bursts are sensitive to mass and radius. The mass–radius relation of neutron stars depends on the equation of state (EoS) that governs their interior. Constraining this relation accurately is therefore of fundamental importance to understand the nature of dense matter. In this context, we introduce a pipeline to calculate realistic model spectra of rotating neutron stars with hydrogen and helium atmospheres. An arbitrarily fast-rotating neutron star with a given EoS generates the spacetime in which the atmosphere emits radiation. We use the LORENE/NROTSTAR code to compute the spacetime numerically and the ATM24 code to solve the radiative transfer equations self-consistently. Emerging specific intensity spectra are then ray-traced through the neutron star’s spacetime from the atmosphere to a distant observer with the GYOTO code. Here, we present and test our fully relativistic numerical pipeline. To discuss and illustrate the importance of realistic atmosphere models, we compare our model spectra to simpler models like the commonly used isotropic color-corrected blackbody emission. We highlight the importance of considering realistic model-atmosphere spectra together with relativistic ray-tracing to obtain accurate predictions. We also emphasize the crucial impact of the star’s rotation on the observables. Finally, we close a controversy that has been ongoing in the literature in recent years, regarding the validity of the ATM24 code.

  16. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of the meteors' velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between the computed orbits of meteoroids and the theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of the orbits of meteoroids therefore largely depends on the computation of the pre-atmospheric velocities. It is thus imperative to find out how to increase the precision of the velocity measurements. In this work, we perform an analysis of different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performances of these techniques, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of several camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one to solve the MPF, and the influence of the geometry of the trajectory on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations shows that, although the MPF is by far the best method for solving the trajectory and the velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy

  17. Dynamic sensing model for accurate detectability of environmental phenomena using event wireless sensor network

    NASA Astrophysics Data System (ADS)

    Missif, Lial Raja; Kadhum, Mohammad M.

    2017-09-01

    Wireless sensor networks (WSNs) have been widely used for monitoring, where sensors are deployed to operate independently to sense abnormal phenomena. Most of the proposed environmental monitoring systems are designed based on a predetermined sensing range which does not reflect the sensor reliability, event characteristics, and environmental conditions. Measuring the capability of a sensor node to accurately detect an event within a sensing field is of great importance for monitoring applications. This paper presents an efficient mechanism for event detection based on a probabilistic sensing model. Different models are presented theoretically in this paper to examine their adaptability and applicability to real environmental applications. The numerical results of the experimental evaluation show that the probabilistic sensing model provides accurate observation and detectability of an event, and that it can be utilized for different environment scenarios.
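
    One widely used family of probabilistic sensing models (an Elfes-type model; the specific formulations and parameter values examined in the paper may differ) can be sketched as follows: detection is certain inside an inner radius, impossible beyond an outer radius, and decays exponentially in the uncertain band between them.

        import numpy as np

        def detection_probability(d, r=10.0, re=4.0, lam=0.5, beta=1.0):
            """Elfes-type probability that a sensor detects an event at distance d (m).

            r is the nominal sensing range and re the width of the uncertain band:
            certain detection inside r - re, none beyond r + re, exponential decay between.
            """
            d = np.asarray(d, dtype=float)
            decay = np.exp(-lam * np.clip(d - (r - re), 0.0, None) ** beta)
            return np.where(d <= r - re, 1.0, np.where(d >= r + re, 0.0, decay))

        distances = np.array([4.0, 8.0, 10.0, 12.0, 14.5])
        print(np.round(detection_probability(distances), 3))   # from 1.0 down to 0.0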

  18. An Accurate Absorption-Based Net Primary Production Model for the Global Ocean

    NASA Astrophysics Data System (ADS)

    Silsbe, G.; Westberry, T. K.; Behrenfeld, M. J.; Halsey, K.; Milligan, A.

    2016-02-01

    As a vital living link in the global carbon cycle, understanding how net primary production (NPP) varies through space, time, and across climatic oscillations (e.g. ENSO) is a key objective in oceanographic research. The continual improvement of ocean-observing satellites and data analytics now presents greater opportunities for advanced understanding and characterization of the factors regulating NPP. In particular, the emergence of spectral inversion algorithms now permits accurate retrievals of the phytoplankton absorption coefficient (aΦ) from space. As NPP reflects the efficiency with which absorbed energy is converted into carbon biomass, aΦ measurements circumvent chlorophyll-based empirical approaches by permitting direct and accurate measurements of phytoplankton energy absorption. It has long been recognized, and perhaps underappreciated, that NPP and phytoplankton growth rates display muted variability when normalized to aΦ rather than chlorophyll. Here we present a novel absorption-based NPP model that parameterizes the underlying physiological mechanisms behind this muted variability, and apply this physiological model to the global ocean. Through a comparison against field data from the Hawaii and Bermuda Ocean Time Series, we demonstrate how this approach yields more accurate NPP measurements than other published NPP models. By normalizing NPP to satellite estimates of phytoplankton carbon biomass, this presentation also explores the seasonality of phytoplankton growth rates across several oceanic regions. Finally, we discuss how future advances in remote sensing (e.g. hyperspectral satellites, LIDAR, autonomous profilers) can be exploited to further improve absorption-based NPP models.

  19. Identifying and Describing Tutor Archetypes: The Pragmatist, the Architect, and the Surveyor

    ERIC Educational Resources Information Center

    Harootunian, Jeff A.; Quinn, Robert J.

    2008-01-01

    In this article, the authors identify and anecdotally describe three tutor archetypes: the pragmatist, the architect, and the surveyor. These descriptions, based on observations of remedial mathematics tutors at a land-grant university, shed light on a variety of philosophical beliefs regarding and pedagogical approaches to tutoring. An analysis…

  20. Systematically describing gross lesions in corals

    USGS Publications Warehouse

    Work, Thierry M.; Aeby, Greta S.

    2006-01-01

    Many coral diseases are characterized based on gross descriptions and, given the lack or difficulty of applying existing laboratory tools to understanding causes of coral diseases, most new diseases will continue to be described based on appearance in the field. Unfortunately, many existing descriptions of coral disease are ambiguous or open to subjective interpretation, making comparisons between oceans problematic. One reason for this is that the process of describing lesions is often confused with that of assigning causality for the lesion. However, causality is usually something not obtained in the field and requires additional laboratory tests. Because a concise and objective morphologic description provides the foundation for a case definition of any disease, there is a need for a consistent and standardized process to describe lesions of corals that focuses on morphology. We provide a framework to systematically describe and name diseases in corals involving 4 steps: (1) naming the disease, (2) describing the lesion, (3) formulating a morphologic diagnosis and (4) formulating an etiologic diagnosis. This process focuses field investigators on describing what they see and separates the process of describing a lesion from that of inferring causality, the latter being more appropriately done using laboratory techniques.

  1. Development of anatomically and dielectrically accurate breast phantoms for microwave imaging applications

    NASA Astrophysics Data System (ADS)

    O'Halloran, M.; Lohfeld, S.; Ruvio, G.; Browne, J.; Krewer, F.; Ribeiro, C. O.; Inacio Pita, V. C.; Conceicao, R. C.; Jones, E.; Glavin, M.

    2014-05-01

    Breast cancer is one of the most common cancers in women. In the United States alone, it accounts for 31% of new cancer cases, and is second only to lung cancer as the leading cause of deaths in American women. More than 184,000 new cases of breast cancer are diagnosed each year resulting in approximately 41,000 deaths. Early detection and intervention is one of the most significant factors in improving the survival rates and quality of life experienced by breast cancer sufferers, since this is the time when treatment is most effective. One of the most promising breast imaging modalities is microwave imaging. The physical basis of active microwave imaging is the dielectric contrast between normal and malignant breast tissue that exists at microwave frequencies. The dielectric contrast is mainly due to the increased water content present in the cancerous tissue. Microwave imaging is non-ionizing, does not require breast compression, is less invasive than X-ray mammography, and is potentially low cost. While several prototype microwave breast imaging systems are currently in various stages of development, the design and fabrication of anatomically and dielectrically representative breast phantoms to evaluate these systems is often problematic. While some existing phantoms are composed of dielectrically representative materials, they rarely accurately represent the shape and size of a typical breast. Conversely, several phantoms have been developed to accurately model the shape of the human breast, but have inappropriate dielectric properties. This study will briefly review existing phantoms before describing the development of a more accurate and practical breast phantom for the evaluation of microwave breast imaging systems.

  2. Accurate spectroscopic characterization of oxirane: A valuable route to its identification in Titan's atmosphere and the assignment of unidentified infrared bands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puzzarini, Cristina; Biczysko, Malgorzata; Bloino, Julien

    2014-04-20

    In an effort to provide an accurate spectroscopic characterization of oxirane, state-of-the-art computational methods and approaches have been employed to determine highly accurate fundamental vibrational frequencies and rotational parameters. Available experimental data were used to assess the reliability of our computations, and an accuracy on average of 10 cm⁻¹ for fundamental transitions as well as overtones and combination bands has been pointed out. Moving to rotational spectroscopy, relative discrepancies of 0.1%, 2%-3%, and 3%-4% were observed for rotational, quartic, and sextic centrifugal-distortion constants, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for identification of oxirane in Titan's atmosphere and the assignment of unidentified infrared bands. Since oxirane was already observed in the interstellar medium and some astronomical objects are characterized by very high D/H ratios, we also considered the accurate determination of the spectroscopic parameters for the mono-deuterated species, oxirane-d1. For the latter, an empirical scaling procedure allowed us to improve our computed data and to provide predictions for rotational transitions with a relative accuracy of about 0.02% (i.e., an uncertainty of about 40 MHz for a transition lying at 200 GHz).

  3. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.

    PubMed

    Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.

  4. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations

    PubMed Central

    Martínez-Romero, Marcos; O’Connor, Martin J.; Shankar, Ravi D.; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L.; Gevaert, Olivier; Graybeal, John; Musen, Mark A.

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository. PMID:29854196

  5. Accurate mass measurement: terminology and treatment of data.

    PubMed

    Brenton, A Gareth; Godfrey, A Ruth

    2010-11-01

    High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.
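
    For readers new to the field, the basic quantity behind much of this terminology is the relative mass error expressed in parts per million; the short sketch below computes it, with its mean and standard deviation, for a hypothetical replicate series (the ion and all values are invented, not taken from the article).

        import numpy as np

        theoretical_mz = 285.0764            # hypothetical theoretical m/z of an [M+H]+ ion
        measured_mz = np.array([285.0771, 285.0768, 285.0760, 285.0766])   # hypothetical replicates

        errors_ppm = (measured_mz - theoretical_mz) / theoretical_mz * 1e6
        print(f"mean error = {errors_ppm.mean():+.2f} ppm, sd = {errors_ppm.std(ddof=1):.2f} ppm")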

  6. Accurate integration over atomic regions bounded by zero-flux surfaces.

    PubMed

    Polestshuk, Pavel M

    2013-01-30

    An approach for integration over a region bounded by a zero-flux surface is described. This approach, based on a surface triangulation technique, is efficiently implemented in the newly developed program TWOE. The method is tested on several atomic properties, including the source function. TWOE results are compared with those produced by well-known existing programs. Absolute errors in computed atomic properties are shown to range typically from 10⁻⁶ to 10⁻⁵ au. The demonstrative examples show that the present implementation exhibits excellent convergence of atomic properties with increasing angular grid size and allows highly accurate data to be obtained even in the most difficult cases. It is believed that the developed program can serve as a foundation for implementing atomic partitioning of any desired molecular property with high accuracy. Copyright © 2012 Wiley Periodicals, Inc.

  7. Accurate, Streamlined Analysis of mRNA Translation by Sucrose Gradient Fractionation

    PubMed Central

    Aboulhouda, Soufiane; Di Santo, Rachael; Therizols, Gabriel; Weinberg, David

    2017-01-01

    The efficiency with which proteins are produced from mRNA molecules can vary widely across transcripts, cell types, and cellular states. Methods that accurately assay the translational efficiency of mRNAs are critical to gaining a mechanistic understanding of post-transcriptional gene regulation. One way to measure translational efficiency is to determine the number of ribosomes associated with an mRNA molecule, normalized to the length of the coding sequence. The primary method for this analysis of individual mRNAs is sucrose gradient fractionation, which physically separates mRNAs based on the number of bound ribosomes. Here, we describe a streamlined protocol for accurate analysis of mRNA association with ribosomes. Compared to previous protocols, our method incorporates internal controls and improved buffer conditions that together reduce artifacts caused by non-specific mRNA–ribosome interactions. Moreover, our direct-from-fraction qRT-PCR protocol eliminates the need for RNA purification from gradient fractions, which greatly reduces the amount of hands-on time required and facilitates parallel analysis of multiple conditions or gene targets. Additionally, no phenol waste is generated during the procedure. We initially developed the protocol to investigate the translationally repressed state of the HAC1 mRNA in S. cerevisiae, but we also detail adapted procedures for mammalian cell lines and tissues. PMID:29170751

  8. An accurate bacterial DNA quantification assay for HTS library preparation of human biological samples.

    PubMed

    Seashols-Williams, Sarah; Green, Raquel; Wohlfahrt, Denise; Brand, Angela; Tan-Torres, Antonio Limjuco; Nogales, Francy; Brooks, J Paul; Singh, Baneshwar

    2018-05-17

    Sequencing and classification of microbial taxa within forensically relevant biological fluids has the potential for applications in the forensic science and biomedical fields. The quantity of bacterial DNA from human samples is currently estimated based on quantity of total DNA isolated. This method can miscalculate bacterial DNA quantity due to the mixed nature of the sample, and consequently library preparation is often unreliable. We developed an assay that can accurately and specifically quantify bacterial DNA within a mixed sample for reliable 16S ribosomal DNA (16S rDNA) library preparation and high throughput sequencing (HTS). A qPCR method was optimized using universal 16S rDNA primers, and a commercially available bacterial community DNA standard was used to develop a precise standard curve. Following qPCR optimization, 16S rDNA libraries from saliva, vaginal and menstrual secretions, urine, and fecal matter were amplified and evaluated at various DNA concentrations; successful HTS data were generated with as low as 20 pg of bacterial DNA. Changes in bacterial DNA quantity did not impact observed relative abundances of major bacterial taxa, but relative abundance changes of minor taxa were observed. Accurate quantification of microbial DNA resulted in consistent, successful library preparations for HTS analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
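
    The quantification step can be sketched as an ordinary qPCR standard-curve calculation: fit the quantification cycle (Cq) against log10 of the input amount for the bacterial community standard, then read unknown samples off the fitted line. All Cq values, input amounts, and the sample result below are invented for illustration.

        import numpy as np

        std_pg = np.array([10_000.0, 1_000.0, 100.0, 10.0])   # standard input, pg
        std_cq = np.array([16.1, 19.5, 22.9, 26.3])           # observed Cq for the standards

        slope, intercept = np.polyfit(np.log10(std_pg), std_cq, 1)
        efficiency = 10 ** (-1.0 / slope) - 1.0               # amplification efficiency

        def quantify(cq):
            """Bacterial DNA quantity (pg) implied by a sample's Cq value."""
            return 10 ** ((cq - intercept) / slope)

        print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
        print(f"sample with Cq 24.0 -> {quantify(24.0):.1f} pg bacterial DNA")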

  9. Accurate seismic phase identification and arrival time picking of glacial icequakes

    NASA Astrophysics Data System (ADS)

    Jones, G. A.; Doyle, S. H.; Dow, C.; Kulessa, B.; Hubbard, A.

    2010-12-01

    A catastrophic lake drainage event was monitored continuously using an array of six 4.5 Hz, three-component geophones in the Russell Glacier catchment, Western Greenland. Many thousands of events and arrival time phases (e.g., P- or S-wave) were recorded, often with events occurring simultaneously but at different locations. In addition, different styles of seismic events were identified, from 'classical' tectonic earthquakes to tremors usually observed in volcanic regions. The presence of such a diverse and large dataset provides insight into the complex system of lake drainage. One of the most fundamental steps in seismology is the accurate identification of a seismic event and its associated arrival times. However, the collection of such a large and complex dataset makes the manual identification of a seismic event and picking of the arrival time phases time-consuming, with variable results. To overcome the issues of consistency and manpower, a number of different methods have been developed, including short-term and long-term averages, spectrograms, wavelets, polarisation analyses, higher order statistics and auto-regressive techniques. Here we propose an automated procedure which establishes the phase type and accurately determines the arrival times. The procedure combines a number of different automated methods to achieve this, and is applied to the recently acquired lake drainage data. Accurate identification of events and their arrival time phases are the first steps in gaining a greater understanding of the extent of the deformation and the mechanism of such drainage events. A good knowledge of the propagation pathway of lake drainage meltwater through a glacier will have significant consequences for interpretation of glacial and ice sheet dynamics.
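
    One of the simplest automated detectors combined in such procedures is the short-term-average/long-term-average (STA/LTA) trigger; the sketch below applies it to a synthetic trace with a single icequake-like onset. The window lengths, threshold, and synthetic signal are illustrative choices, not the values used in the study.

        import numpy as np

        fs = 500.0                                    # sampling rate, Hz (assumed)
        t = np.arange(0.0, 10.0, 1.0 / fs)
        rng = np.random.default_rng(1)
        trace = rng.normal(0.0, 1.0, t.size)          # background noise
        onset = int(6.0 * fs)                         # synthetic onset at t = 6 s
        trace[onset:] += 8.0 * np.exp(-(t[onset:] - 6.0) / 0.5) * np.sin(2 * np.pi * 20.0 * (t[onset:] - 6.0))

        def sta_lta(x, nsta, nlta):
            """STA/LTA ratio of the squared trace using trailing moving averages."""
            energy = x ** 2
            sta = np.convolve(energy, np.ones(nsta) / nsta, mode="full")[: energy.size]
            lta = np.convolve(energy, np.ones(nlta) / nlta, mode="full")[: energy.size]
            return sta / np.maximum(lta, np.finfo(float).tiny)

        nsta, nlta = int(0.1 * fs), int(2.0 * fs)     # 0.1 s and 2 s windows
        ratio = sta_lta(trace, nsta, nlta)
        ratio[:nlta] = 0.0                            # ignore samples before the LTA window fills
        trigger = int(np.argmax(ratio > 4.0))         # first sample above the threshold
        print(f"trigger at t = {t[trigger]:.2f} s (true onset at 6.00 s)")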

  10. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  11. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  12. Arctic Observing Experiment (AOX) Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rigor, Ignatius; Johnson, Jim; Motz, Emily

    Our ability to understand and predict weather and climate requires an accurate observing network. One of the pillars of this network is the observation of the fundamental meteorological parameters: temperature, air pressure, and wind. We plan to assess our ability to measure these parameters for the polar regions during the Arctic Observing Experiment (AOX, Figure 1) to support the International Arctic Buoy Programme (IABP), Arctic Observing Network (AON), International Program for Antarctic Buoys (IPAB), and Southern Ocean Observing System (SOOS). Accurate temperature measurements are also necessary to validate and improve satellite measurements of surface temperature across the Arctic. Support for research associated with the campaign is provided by the National Science Foundation, and by other US agencies contributing to the US Interagency Arctic Buoy Program. In addition to the support provided by the U.S Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s North Slope of Alaska (NSA) site at Barrow and the National Science Foundation (NSF), the U.S. IABP is supported by the U.S. Coast Guard (USCG), the National Aeronautics and Space Administration (NASA), the National Ice Center (NIC), the National Oceanic and Atmospheric Administration (NOAA), and the Office of Naval Research (ONR).

  13. Development of accurate potentials to explore the structure of water on 2D materials

    NASA Astrophysics Data System (ADS)

    Bejagam, Karteek; Singh, Samrendra; Deshmukh, Sanket; Deshmkuh Group Team; Samrendra Group Collaboration

    Water plays an important role in many biological and non-biological processes. Thus, the structure of water at various interfaces and under confinement has always been a topic of immense interest. 2-D materials have shown great potential in surface coating applications and nanofluidic devices. However, an exact atomic-level understanding of the wettability of a single layer of these 2-D materials is still lacking, mainly due to the lack of experimental techniques and computational methodologies, including accurate force-field potentials and algorithms to measure the contact angle of water. In the present study, we have developed a new algorithm to accurately measure the contact angle between water and 2-D materials. The algorithm is based on fitting the best sphere to the shape of the droplet. This novel spherical fitting method accounts for every individual molecule of the droplet, rather than those at the surface only. We employ this method of contact angle measurement to develop accurate non-bonded potentials between water and 2-D materials, including graphene and boron nitride (BN), that reproduce the experimentally observed contact angle of water on these 2-D materials. Different water models such as SPC, SPC/Fw, and TIP3P were used to study the structure of water at the interfaces.
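
    The spherical-fitting idea can be sketched as a linear least-squares problem: fit a sphere to the droplet coordinates, then convert the fitted centre height and radius into a contact angle with the substrate plane z = 0 via theta = arccos(-z_centre / R). The synthetic cap-surface points below stand in for MD coordinates, and the details (e.g. which molecules enter the fit and how) are simplifying assumptions rather than the authors' exact procedure.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic droplet envelope: points on a spherical cap of radius 20 A whose centre
        # sits 5 A below the substrate (true contact angle = arccos(5/20) ~ 75.5 deg).
        R_true, zc_true = 20.0, -5.0
        pts = rng.normal(size=(5000, 3))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        pts = pts * R_true + np.array([0.0, 0.0, zc_true])
        pts = pts[pts[:, 2] > 0.0]                    # keep the cap above the substrate
        pts += rng.normal(scale=0.2, size=pts.shape)  # small positional noise

        # Algebraic sphere fit: x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d.
        A = np.c_[2.0 * pts, np.ones(len(pts))]
        rhs = (pts ** 2).sum(axis=1)
        (a, b, c, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        R_fit = np.sqrt(d + a ** 2 + b ** 2 + c ** 2)

        theta = np.degrees(np.arccos(-c / R_fit))     # contact angle from centre height c
        print(f"fitted R = {R_fit:.1f} A, centre height = {c:.1f} A, contact angle = {theta:.1f} deg")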

  14. Simultaneous Position, Velocity, Attitude, Angular Rates, and Surface Parameter Estimation Using Astrometric and Photometric Observations

    DTIC Science & Technology

    2013-07-01

    Additionally, a physically consistent BRDF and radiation pressure model is utilized, thus enabling an accurate physical link between the observed photometric brightness and the attitudinal state. The halfway vector between the illumination source and the observer is Ĥ = (L̂ + V̂)/|L̂ + V̂|, with angles α and β measured from N̂, and is used in many analytic BRDF models. There are many

  15. The Scientific and Societal Need for Accurate Global Remote Sensing of Marine Suspended Sediments

    NASA Technical Reports Server (NTRS)

    Acker, James G.

    2006-01-01

    Population pressure, commercial development, and climate change are expected to cause continuing alteration of the vital oceanic coastal zone environment. These pressures will influence both the geology and biology of the littoral, nearshore, and continental shelf regions. A pressing need for global observation of coastal change processes is an accurate remotely-sensed data product for marine suspended sediments. The concentration, delivery, transport, and deposition of sediments is strongly relevant to coastal primary production, inland and coastal hydrology, coastal erosion, and loss of fragile wetland and island habitats. Sediment transport and deposition is also related to anthropogenic activities including agriculture, fisheries, aquaculture, harbor and port commerce, and military operations. Because accurate estimation of marine suspended sediment concentrations requires advanced ocean optical analysis, a focused collaborative program of algorithm development and assessment is recommended, following the successful experience of data refinement for remotely-sensed global ocean chlorophyll concentrations.

  16. Observing Animal Behavior at the Zoo: A Learning Laboratory

    ERIC Educational Resources Information Center

    Hull, Debra B.

    2003-01-01

    Undergraduate students in a learning laboratory course initially chose a species to study; researched that species' physical and behavioral characteristics; then learned skills necessary to select, operationalize, observe, and record animal behavior accurately. After their classroom preparation, students went to a local zoo to observe the behavior…

  17. In situ Observations of Heliospheric Current Sheets Evolution

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Peng, Jun; Huang, Jia; Klecker, Berndt

    2017-04-01

    We investigate the time differences between heliospheric current sheet observations at different spacecraft, using STEREO, ACE, and WIND data. The observations are first compared to a simple theory in which the time difference is determined only by the radial and longitudinal separation between the spacecraft. The predictions fit well with the observations except for a few events. Then the time delay caused by the latitudinal separation is taken into consideration. The latitude of each spacecraft is calculated based on the PFSS model, assuming that heliospheric current sheets propagate at the solar wind speed without changing their shapes from the origin to the spacecraft near 1 AU. However, including the latitudinal effects does not improve the prediction, possibly because the PFSS model may not locate the current sheets accurately enough. A new latitudinal delay is then estimated from the time delays observed in the ACE data. This new method improves the prediction of the time lag between spacecraft; however, further study is needed to predict the location of the heliospheric current sheet more accurately.
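
    The "simple theory" referred to above can be sketched as a two-term delay: the time for the current sheet to convect radially between the two heliocentric distances, plus the time for solar rotation to carry the same corotating structure across the spacecraft's longitude difference. All positions, the solar wind speed, and the sign convention below are illustrative assumptions.

        import numpy as np

        v_sw = 400.0                                  # assumed solar wind speed, km/s
        omega_sun = 2.0 * np.pi / (25.38 * 86400.0)   # sidereal solar rotation rate, rad/s

        # Hypothetical heliocentric positions (radius in km, heliographic longitude in deg).
        r_1, lon_1 = 1.485e8, 120.0                   # reference spacecraft (e.g. ACE)
        r_2, lon_2 = 1.560e8, 95.0                    # second spacecraft

        dt_radial = (r_2 - r_1) / v_sw                # radial convection term, s
        dt_corot = np.deg2rad(lon_1 - lon_2) / omega_sun   # corotation term, s (sign depends
                                                           # on which craft is swept first)

        print(f"radial term ~ {dt_radial / 3600:.1f} h, corotation term ~ {dt_corot / 3600:.1f} h, "
              f"total ~ {(dt_radial + dt_corot) / 3600:.1f} h")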

  18. Optimal strategies for throwing accurately

    NASA Astrophysics Data System (ADS)

    Venkadesan, M.; Mahadevan, L.

    2017-04-01

    The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed-accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error.
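
    A small numerical illustration of the error amplification discussed above: for a throw released and landing at the same height, the landing-point scatter produced by noisy release angles is simulated at several mean angles, showing that a 45-degree release is first-order insensitive to angle errors. The speed, angles, and noise level are toy values, and the sketch ignores the target geometry and throwing styles analysed in the paper.

        import numpy as np

        g, v = 9.81, 10.0                      # gravity (m/s^2) and release speed (m/s)
        rng = np.random.default_rng(3)
        angle_sd = np.deg2rad(2.0)             # 2 degree standard deviation in release angle

        for mean_deg in (20.0, 45.0, 70.0):
            theta = np.deg2rad(mean_deg) + angle_sd * rng.standard_normal(100_000)
            landing = v ** 2 * np.sin(2.0 * theta) / g     # range of a same-height throw
            print(f"mean angle {mean_deg:4.1f} deg -> landing spread (1 sd) = {landing.std():.3f} m")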

  19. Validation and deployment of the first Lidar based weather observation network in New York State: The NYS MesoNet Project

    NASA Astrophysics Data System (ADS)

    Thobois, L.; Freedman, J.; Royer, P.; Brotzge, J.; Joseph, E.

    2018-04-01

    The number and quality of atmospheric observations used by meteorologists and operational forecasters are increasing year after year, and yet consistent improvement in forecast skill remains a challenge. While contributing factors behind these challenges have been identified, including the difficulty of accurately establishing initial conditions, improving observations at regional and local scales is necessary for an accurate depiction of the atmospheric boundary layer (below 2 km), particularly the wind profile, in high-resolution numerical models. Beyond reducing the uncertainty of weather forecasts, the goal is also to improve the detection of severe and extreme weather events (severe thunderstorms, tornadoes and other mesoscale phenomena) that can adversely affect life, property and commerce, primarily in densely populated urban centers. This paper will describe the New York State Mesonet that is being deployed in the state of New York, USA. It is composed of 126 stations, including 17 profiler sites. These sites will acquire continuous upper-air observations through the combination of WINDCUBE Lidars and microwave radiometers. These stations will provide temperature, relative humidity & "3D" wind profile measurements through and above the planetary boundary layer (PBL) and will retrieve derived atmospheric quantities such as the PBL height, cloud base, momentum fluxes, and aerosol & cloud optical properties. The different modes and configurations that will be used for the Lidars are discussed. The performance in terms of data availability and wind accuracy and precision is evaluated. Several profiles with specific wind and aerosol features are presented to illustrate the benefits of using Coherent Doppler Lidars to monitor the PBL accurately.

  20. Calibrating GPS With TWSTFT For Accurate Time Transfer

    DTIC Science & Technology

    2008-12-01

    Presented at the 40th Annual Precise Time and Time Interval (PTTI) Meeting, by Z. Jiang et al. The two primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short); 83% of UTC time links are...

  1. A time-accurate algorithm for chemical non-equilibrium viscous flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, J.-S.; Chen, K.-H.; Choi, Y.

    1992-01-01

    A time-accurate, coupled solution procedure is described for the chemical nonequilibrium Navier-Stokes equations over a wide range of Mach numbers. This method employs the strong conservation form of the governing equations, but uses primitive variables as unknowns. Real gas properties and equilibrium chemistry are considered. Numerical tests include steady convergent-divergent nozzle flows with air dissociation/recombination chemistry, dump combustor flows with n-pentane-air chemistry, nonreacting flow in a model double annular combustor, and nonreacting unsteady driven cavity flows. Numerical results for both the steady and unsteady flows demonstrate the efficiency and robustness of the present algorithm for Mach numbers ranging from the incompressible limit to supersonic speeds.

  2. Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?

    PubMed

    Hey, Spencer Phillips; Kimmelman, Jonathan

    2016-10-01

    Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach - forecast analysis - that would enable a direct and effective measure of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach. © 2016 John Wiley & Sons Ltd.

  3. Item Response Theory as an Efficient Tool to Describe a Heterogeneous Clinical Rating Scale in De Novo Idiopathic Parkinson's Disease Patients.

    PubMed

    Buatois, Simon; Retout, Sylvie; Frey, Nicolas; Ueckert, Sebastian

    2017-10-01

    This manuscript aims to precisely describe the natural disease progression of Parkinson's disease (PD) patients and to evaluate approaches to increase drug effect detection power. An item response theory (IRT) longitudinal model was built to describe the natural disease progression of 423 de novo PD patients followed for 48 months, while taking into account the heterogeneous nature of the MDS-UPDRS. Clinical trial simulations were then used to compare drug effect detection power between IRT-based and sum-of-item-scores-based analyses under different analysis endpoints and drug effects. The IRT longitudinal model accurately describes the evolution of patients with and without PD medications while estimating different progression rates for the subscales. When comparing analysis methods, the IRT-based one consistently provided the highest power. IRT is a powerful tool that makes it possible to capture the heterogeneous nature of the MDS-UPDRS.

  4. New hairworm (Nematomorpha, Gordiida) species described from the Arizona Madrean Sky Islands.

    PubMed

    Swanteson-Franz, Rachel J; Marquez, Destinie A; Goldstein, Craig I; Andreas Schmidt-Rhaesa; Bolek, Matthew G; Hanelt, Ben

    2018-01-01

    Gordiids, or freshwater hairworms, are members of the phylum Nematomorpha that use terrestrial definitive hosts (arthropods) and live as adults in rivers, lakes, or streams. The genus Paragordius consists of 18 species, one of which was described from the Nearctic in 1851. More than 150 years later, we are describing a second Paragordius species from a unique habitat within the Nearctic; the Madrean Sky Island complex. The Madrean Sky Islands are a series of isolated high mountains in northern Mexico and the southwestern United States (Arizona and New Mexico), and are well known for their high diversity and endemicity. The new species is described based on both molecular data (COI barcoding) and morphological characters of the eggs, larvae, cysts, and adults. Adult females have unique small oblong mounds present on the interior of the trifurcating lobes with randomly dispersed long hairs extending from the furrows between the mounds. Marked genetic differences support observed morphological differences. This species represents the second new hairworm to be described from the Madrean Sky Islands, and it may represent the first endemic hairworm from this biodiversity hotspot.

  5. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6

  6. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to

  7. ICE-COLA: fast simulations for weak lensing observables

    NASA Astrophysics Data System (ADS)

    Izard, Albert; Fosalba, Pablo; Crocce, Martin

    2018-01-01

    Approximate methods to full N-body simulations provide a fast and accurate solution to the development of mock catalogues for the modelling of galaxy clustering observables. In this paper we extend ICE-COLA, based on an optimized implementation of the approximate COLA method, to produce weak lensing maps and halo catalogues in the light-cone using an integrated and self-consistent approach. We show that despite the approximate dynamics, the catalogues thus produced enable an accurate modelling of weak lensing observables one decade beyond the characteristic scale where the growth becomes non-linear. In particular, we compare ICE-COLA to the MICE Grand Challenge N-body simulation for some fiducial cases representative of upcoming surveys and find that, for sources at redshift z = 1, their convergence power spectra agree to within 1 per cent up to high multipoles (i.e. of order 1000). The corresponding shear two point functions, ξ+ and ξ-, yield similar accuracy down to 2 and 20 arcmin respectively, while tangential shear around a z = 0.5 lens sample is accurate down to 4 arcmin. We show that such accuracy is stable against an increased angular resolution of the weak lensing maps. Hence, this opens the possibility of using approximate methods for the joint modelling of galaxy clustering and weak lensing observables and their covariance in ongoing and future galaxy surveys.

  8. Analysis of the GOES 6.7 micrometer channel observations during FIRE 2

    NASA Technical Reports Server (NTRS)

    Soden, B. J.; Ackerman, S. A.; Starr, David

    1993-01-01

    Clouds form in moist environments. The FIRE Phase II Cirrus Implementation Plan (August 1990) noted the need for mesoscale measurements of upper tropospheric water vapor content. These measurements are needed for initializing and verifying numerical weather prediction models and for describing the environment in which cirrus clouds develop and dissipate. Various instruments were deployed to measure the water vapor amounts of the upper troposphere during FIRE II (e.g. Raman lidar, CLASS sondes, and a new cryogenic frost hygrometer on board aircraft). The formation, maintenance and dissipation of cirrus clouds involve the time variation of the water budget of the upper troposphere. The GOES 6.7 mu m radiance observations are sensitive to the upper tropospheric relative humidity, and therefore proved extremely valuable in planning aircraft missions during the field phase of FIRE II. Warm 6.7 mu m equivalent black body temperatures indicate a relatively dry upper troposphere and were associated with regions generally free of cirrus clouds. Regions that were colder, implying more moisture was available, may or may not have had cirrus clouds present. Animation of a time sequence of 6.7 mu m images was particularly useful in planning various FIRE missions. The 6.7 mu m observations can also be very valuable in the verification of model simulations and in describing the upper tropospheric synoptic conditions. A quantitative analysis of the 6.7 mu m measurements is required to successfully incorporate these satellite observations into descriptions of the upper tropospheric water vapor budget. Recently, Soden and Bretherton (1993) proposed a method of deriving upper tropospheric humidity from the GOES 6.7 mu m observations. The method is summarized in the next section. In their paper they compare their retrieval method to radiance simulations. Observations were also compared to ECMWF model output to assess the model performance. The FIRE experiment provides a

  9. EGNOS-Based Multi-Sensor Accurate and Reliable Navigation in Search-and-Rescue Missions with UAVs

    NASA Astrophysics Data System (ADS)

    Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Stebler, Y.; Skaloud, J.; Kornus, W.; Prades, R.

    2011-09-01

    This paper will introduce and describe the goals, concept and overall approach of the European 7th Framework Programme project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost SAR operations'. The goal of CLOSE-SEARCH is to integrate, in a helicopter-type unmanned aerial vehicle, a thermal imaging sensor and a multi-sensor navigation system (based on the use of a Barometric Altimeter (BA), a Magnetometer (MAGN), a Redundant Inertial Navigation System (RINS) and an EGNOS-enabled GNSS receiver) with an Autonomous Integrity Monitoring (AIM) capability, to support the search component of Search-And-Rescue operations in remote, difficult-to-access areas and/or in time-critical situations. The proposed integration will result in a hardware and software prototype that will demonstrate an end-to-end functionality, that is, to fly in patterns over a region of interest (possibly inaccessible) during day or night and also under adverse weather conditions, and to locate disaster survivors or lost people there through the detection of body heat. This paper will identify the technical challenges of the proposed approach, from navigating with a BA/MAGN/RINS/GNSS-EGNOS-based integrated system to the interpretation of thermal images for person identification. Moreover, the AIM approach will be described together with the proposed integrity requirements. Finally, this paper will show some results obtained in the project during the first test campaign, performed in November 2010. On that day, a prototype was flown in three different missions to assess its high-level performance and to observe some fundamental mission parameters such as the optimal flying height and flying speed to enable body recognition. The second test campaign is scheduled for the end of 2011.

  10. Initializing carbon cycle predictions from the Community Land Model by assimilating global biomass observations

    NASA Astrophysics Data System (ADS)

    Fox, A. M.; Hoar, T. J.; Smith, W. K.; Moore, D. J.

    2017-12-01

    The locations and longevity of terrestrial carbon sinks remain uncertain; however, it is clear that in order to predict long-term climate changes the role of the biosphere in surface energy and carbon balance must be understood and incorporated into earth system models (ESMs). Aboveground biomass, the amount of carbon stored in vegetation, is a key component of the terrestrial carbon cycle, representing the balance of uptake through gross primary productivity (GPP) and losses from respiration, senescence and mortality over hundreds of years. The best predictions of current and future land-atmosphere fluxes are likely to come from the integration of process-based knowledge contained in models with information from observations of changes in carbon stocks using data assimilation (DA). By exploiting long time series, it is possible to accurately detect variability and change in carbon cycle dynamics through monitoring ecosystem states, for example biomass derived from vegetation optical depth (VOD), and to use this information to initialize models before making predictions. To make maximum use of information about the current state of global ecosystems when using models, we have developed a system that combines the Community Land Model (CLM) with the Data Assimilation Research Testbed (DART), a community tool for ensemble DA. This DA system is highly innovative in its complexity, completeness and capabilities. Here we describe a series of activities, using both Observation System Simulation Experiments (OSSEs) and real observations, that have allowed us to quantify the potential impact of assimilating VOD data into CLM-DART on future land-atmosphere fluxes. VOD data are particularly suitable for use in this activity due to their long temporal coverage and appropriate scale when combined with CLM, but their absolute values rely on many assumptions. Therefore, we have had to assess the implications of the VOD retrieval algorithms, with an emphasis on detecting uncertainty due to
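
    As a rough illustration of the ensemble data assimilation step underlying a CLM-DART style system, the sketch below applies a perturbed-observation ensemble Kalman filter update of a scalar biomass state using a single VOD-derived biomass observation. The state values, observation, and error variances are made-up placeholders; DART itself offers more sophisticated ensemble filters, localization, and multivariate updates.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical prior ensemble of aboveground biomass (kg C / m^2) from the land model
      biomass_prior = rng.normal(loc=5.0, scale=1.0, size=40)

      # Hypothetical VOD-derived biomass observation and its error variance
      y_obs, r_obs = 6.2, 0.5 ** 2

      # Perturbed-observation EnKF update (the observation operator is the identity here)
      prior_var = np.var(biomass_prior, ddof=1)
      gain = prior_var / (prior_var + r_obs)
      perturbed_obs = y_obs + rng.normal(0.0, np.sqrt(r_obs), size=biomass_prior.size)
      biomass_post = biomass_prior + gain * (perturbed_obs - biomass_prior)

      print("prior mean %.2f -> posterior mean %.2f" % (biomass_prior.mean(), biomass_post.mean()))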

  11. Highly Accurate Quantitative Analysis Of Enantiomeric Mixtures from Spatially Frequency Encoded 1H NMR Spectra.

    PubMed

    Plainchont, Bertrand; Pitoux, Daisy; Cyrille, Mathieu; Giraud, Nicolas

    2018-02-06

    We propose an original concept to measure enantiomeric excesses accurately from proton NMR spectra, which combines high-resolution techniques based on a spatial encoding of the sample with the use of optically active, weakly orienting solvents. We show that it is possible to simulate accurately the dipolar edited spectra of enantiomers dissolved in a chiral liquid crystalline phase, and to use these simulations to calibrate integrations that can be measured on experimental data, in order to perform a quantitative chiral analysis. This approach is demonstrated on a chemical intermediate for which the optical purity is an essential criterion. We find that there is a very good correlation between the experimental and calculated integration ratios extracted from G-SERF spectra, which paves the way to a general method for the determination of enantiomeric excesses based on the observation of 1H nuclei.

  12. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of
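
    The heuristic thresholds reported above translate directly into a simple assignment rule. The sketch below applies them to the percent identity of the best BLAST hit; the function name and inputs are illustrative, not part of the study's actual pipeline.

      def assign_higher_taxon(top_hit_pident, hit_genus, hit_family):
          """Assign genus/family from the best-hit percent identity using the
          heuristic thresholds reported for spiders (>95% genus, >=91% family)."""
          if top_hit_pident > 95.0:
              return {"genus": hit_genus, "family": hit_family}
          if top_hit_pident >= 91.0:
              return {"genus": None, "family": hit_family}
          return {"genus": None, "family": None}

      # Example: an unknown CO1 sequence whose best hit is 93.4% identical to a Theridiidae genus
      print(assign_higher_taxon(93.4, hit_genus="Parasteatoda", hit_family="Theridiidae"))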

  13. Objective analysis of observational data from the FGGE observing systems

    NASA Technical Reports Server (NTRS)

    Baker, W.; Edelmann, D.; Iredell, M.; Han, D.; Jakkempudi, S.

    1981-01-01

    An objective analysis procedure for updating the GLAS second and fourth order general atmospheric circulation models using observational data from the first GARP global experiment is described. The objective analysis procedure is based on a successive corrections method and the model is updated in a data assimilation cycle. Preparation of the observational data for analysis and the objective analysis scheme are described. The organization of the program and description of the required data sets are presented. The program logic and detailed descriptions of each subroutine are given.
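
    A minimal sketch of one successive-corrections pass of the kind used in such objective analysis schemes is given below, using Cressman-type weights on a one-dimensional grid. The weight function, radius schedule, and data are illustrative assumptions; the GLAS implementation has its own formulation and operates in more dimensions.

      import numpy as np

      def successive_corrections(grid_x, first_guess, obs_x, obs_val, radii):
          """Update a first-guess field with observations using Cressman-type
          successive corrections over a sequence of decreasing influence radii."""
          analysis = first_guess.copy()
          for R in radii:
              a_at_obs = np.interp(obs_x, grid_x, analysis)   # analysis at observation locations
              increments = obs_val - a_at_obs
              for j, xg in enumerate(grid_x):
                  d2 = (obs_x - xg) ** 2
                  w = np.where(d2 < R**2, (R**2 - d2) / (R**2 + d2), 0.0)
                  if w.sum() > 0:
                      analysis[j] += np.sum(w * increments) / w.sum()
          return analysis

      grid_x = np.linspace(0.0, 10.0, 51)
      first_guess = np.zeros_like(grid_x)                      # trivial background field
      obs_x = np.array([2.0, 5.0, 8.0])
      obs_val = np.array([1.0, 2.0, 1.5])                      # made-up observations
      print(successive_corrections(grid_x, first_guess, obs_x, obs_val, radii=[4.0, 2.0, 1.0])[::10])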

  14. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware.

    PubMed

    Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen

    2018-02-02

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
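
    The kind of state-based energy model described above amounts to summing, over a slot (or any interval), the time spent in each CPU/radio state multiplied by that state's current draw and the supply voltage. The sketch below shows that bookkeeping; the state durations and current draws are illustrative placeholders, not the calibrated OpenMote values from the paper.

      # Hypothetical per-slot state profile for one TSCH transmit slot
      # (durations in ms, current draws in mA); real values would come from calibrated measurements.
      TX_SLOT_PROFILE = [
          ("cpu_prepare",  1.2, 2.0),
          ("radio_rx_on",  0.5, 20.0),   # e.g. listening before/after transmission, guard time
          ("radio_tx",     2.0, 24.0),
          ("radio_rx_ack", 1.0, 20.0),
          ("cpu_sleep",   10.3, 0.002),
      ]

      def slot_energy_mj(profile, voltage=3.0):
          """Energy per slot in millijoules: sum of duration * current * voltage."""
          return sum(dur_ms * i_ma * voltage for _, dur_ms, i_ma in profile) / 1000.0

      print("energy per TX slot: %.3f mJ" % slot_energy_mj(TX_SLOT_PROFILE))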

  15. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware

    PubMed Central

    Municio, Esteban; Van de Velde, Bruno; Latré, Steven

    2018-01-01

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks. PMID:29393900

  16. A New CCI ECV Release (v2.0) to Accurately Measure the Sea Level Change (1993-2015)

    NASA Astrophysics Data System (ADS)

    Legeais, J.; Cazenave, A. A.; Ablain, M.; Gilles, G.; Johannessen, J. A.; Scharffenberg, M. G.; Timms, G.; Andersen, O. B.; Cipollini, P.; Roca, M.; Rudenko, S.; Fernandes, J.; Balmaseda, M.; Quartly, G.; Fenoglio Marc, L.; Meyssignac, B.; Benveniste, J.; Ambrozio, A.; Restano, M.

    2016-12-01

    Accurate monitoring of the sea level is required to better understand its variability and changes. Sea level is one of the Essential Climate Variables (ECV) selected in the frame of the ESA Climate Change Initiative (CCI) program, which aims at providing a long-term, homogeneous and accurate sea level record. The needs and feedback of the climate research community have been collected and a first version of the sea level ECV product has been generated with the best algorithms and altimeter standards. This record (1993-2014) has been validated by the climate research community. Within phase II (2014-2016), the 15-partner consortium has prepared the production of a new reprocessed, homogeneous and accurate altimeter sea level record, which will be distributed in Autumn 2016. New level 2 altimeter standards developed and tested within the project, as well as external contributions, have been identified, processed and evaluated by comparison with a reference for different altimeter missions (TOPEX/Poseidon, Jason-1 & 2, ERS-1 & 2, Envisat and GFO). The main evolutions are associated with the wet troposphere correction (based on the GPD+ algorithm, including inter-calibration with respect to external sensors) but also with the orbit solutions (POE-E and GFZ15), the ERA-Interim based atmospheric corrections and the FES2014 ocean tide model. A new pole tide solution is used and anomalies are referenced to the MSS DTU15. The presentation will focus on the main achievements of the ESA CCI Sea Level project and on the description of the new SL_cci ECV release covering 1993-2015. The major steps required to produce the reprocessed 23-year climate time series will be described. The impacts of the selected level 2 altimeter standards on the SL_cci ECV have been assessed on different spatial scales (global, regional, mesoscale) and temporal scales (long-term, inter-annual, periodic). A significant improvement is expected compared to the current v1.1, with the main impacts observed on the

  17. A Model Describing Stable Coherent Synchrotron Radiation in Storage Rings

    NASA Astrophysics Data System (ADS)

    Sannibale, F.; Byrd, J. M.; Loftsdóttir, Á.; Venturini, M.; Abo-Bakr, M.; Feikes, J.; Holldack, K.; Kuske, P.; Wüstefeld, G.; Hübers, H.-W.; Warnock, R.

    2004-08-01

    We present a model describing high power stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The model includes distortion of bunch shape from the synchrotron radiation (SR), which enhances higher frequency coherent emission, and limits to stable emission due to an instability excited by the SR wakefield. It gives a quantitative explanation of several features of the recent observations of CSR at the BESSYII storage ring. We also use this model to optimize the performance of a source for stable CSR emission.

  18. Diffusion model to describe osteogenesis within a porous titanium scaffold.

    PubMed

    Schmitt, M; Allena, R; Schouman, T; Frasca, S; Collombet, J M; Holy, X; Rouch, P

    2016-01-01

    In this study, we develop a two-dimensional finite element model, which is derived from an animal experiment and allows simulating osteogenesis within a porous titanium scaffold implanted in a ewe's hemi-mandible over 12 weeks. The cell activity is described through diffusion equations and regulated by the stress state of the structure. We compare our model to (i) histological observations and (ii) experimental data obtained from a mechanical test performed on the sacrificed animal. We show that our mechano-biological approach provides consistent numerical results and constitutes a useful tool to predict the osteogenesis pattern.

  19. Observations of the Eclipsing Millisecond Pulsar

    NASA Astrophysics Data System (ADS)

    Bookbinder, Jay

    1990-12-01

    Fruchter et al. (1988a) have recently discovered a 1.6 msec pulsar (PSR 1957+20) in a 9.2 hour eclipsing binary system. The unusual behavior of the dispersion measure as a function of orbital phase, and the disappearance of the pulsar signal for 50 minutes during each orbit, imply that the eclipses are due to a pulsar-induced wind flowing off the companion. The optical counterpart is a 21st magnitude object which varies in intensity over the binary period; accurate ground-based observations are prevented by the proximity (0.7") of a 20th magnitude K dwarf. We propose to observe the optical counterpart in a two-part study. First, the WF/PC will provide accurate multicolor photometry, enabling us to determine uncontaminated magnitudes and colors both at maximum (anti-eclipse) and at minimum (eclipse). Second, we propose to observe the expected UV line emission with FOS, allowing for an initial determination of the temperature and density structure and abundances of the wind that is being ablated from the companion. Study of this unique system holds enormous potential for the understanding of the radiation field of a millisecond pulsar and the evolution of LMXRBs and MSPs in general. We expect these observations to place very significant constraints on models of this unique object.

  20. Improvement of the GPS/A system for extensive observation along subduction zones around Japan

    NASA Astrophysics Data System (ADS)

    Fujimoto, H.; Kido, M.; Tadokoro, K.; Sato, M.; Ishikawa, T.; Asada, A.; Mochizuki, M.

    2011-12-01

    Combined high-resolution gravity field models serve as a mandatory basis for describing static and dynamic processes in the Earth system. Ocean dynamics can be modeled with reference to a highly accurate geoid as the reference surface, and solid earth processes are initiated by the gravity field. Geodetic disciplines such as height system determination also depend on highly precise gravity field information. To fulfill the various requirements concerning resolution and accuracy, every kind of gravity field information, that is, satellite as well as terrestrial and altimetric gravity field observations, has to be included in one combination process. A key role is reserved here for GOCE observations, which contribute their optimal signal content in the long- to medium-wavelength part and enable a more accurate gravity field determination than ever before, especially in areas where no highly accurate terrestrial gravity field observations are available, such as South America, Asia or Africa. For our contribution we prepare a combined high-resolution gravity field model up to d/o 720 based on full normal equations, including recent GOCE, GRACE and terrestrial/altimetric data. For all data sets, normal equations are set up separately, weighted relative to each other in the combination step, and solved. This procedure is computationally challenging and can only be performed using supercomputers. We put special emphasis on the combination process, for which we modified our procedure to include GOCE data optimally in the combination. Furthermore we modified our terrestrial/altimetric data sets, which should result in an improved outcome. With our model, in which we included the newest GOCE TIM4 gradiometry results, we can show how GOCE contributes to a combined gravity field solution, especially in areas of poor terrestrial data coverage. The model is validated by independent GPS leveling data in selected regions as well as by computation of the mean dynamic topography over the oceans

  1. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need
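
    One way to realize the per-time-step solve sketched in the abstract is shown below: with six single-axis accelerometers at known positions r_i and sensing directions n_i on a rigid body, each reading is n_i . (a + alpha x r_i + omega x (omega x r_i)); holding the centripetal term at the previous step's angular velocity makes the system linear in the linear acceleration a and the angular acceleration alpha, and omega is then advanced with the recovered alpha. The geometry, signals, and the exact linearization used here are illustrative assumptions, not the paper's formulation.

      import numpy as np

      def step(readings, positions, directions, omega_prev, dt):
          """Solve one time step for body acceleration a (3) and angular acceleration
          alpha (3) from six single-axis accelerometer readings, treating the
          centripetal term with the previous angular velocity estimate."""
          A = np.zeros((6, 6))
          b = np.zeros(6)
          for i in range(6):
              n, r = directions[i], positions[i]
              A[i, :3] = n                      # n . a
              A[i, 3:] = np.cross(r, n)         # n . (alpha x r) = alpha . (r x n)
              centripetal = np.cross(omega_prev, np.cross(omega_prev, r))
              b[i] = readings[i] - n @ centripetal
          x = np.linalg.solve(A, b)
          a, alpha = x[:3], x[3:]
          omega_new = omega_prev + alpha * dt   # finite-difference update of omega
          return a, alpha, omega_new

      # Illustrative geometry: six accelerometers, positions (m) and unit sensing axes
      positions = np.array([[0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1],
                            [-0.1, 0, 0], [0, -0.1, 0], [0.05, 0.05, 0]])
      directions = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0],
                             [0, 0, 1], [1, 0, 0], [0, 0, 1]], dtype=float)
      readings = np.array([0.5, -0.2, 9.8, 0.1, 0.0, 9.7])      # made-up signals (m/s^2)
      a, alpha, omega = step(readings, positions, directions, omega_prev=np.zeros(3), dt=0.001)
      print(a, alpha)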

  2. Visual observation of fishes and aquatic habitat [Chapter 17

    Treesearch

    Russell F. Thurow; C. Andrew Dolloff; J. Ellen Marsden

    2012-01-01

    Whether accomplished above the water surface or performed underwater by snorkel, scuba, or hookah divers or by remotely operated vehicles (ROVs), direct observation techniques are among the most effective means for obtaining accurate and often unique information on aquatic organisms in their natural surroundings. Many types of studies incorporate direct observation...

  3. Controllers, observers, and applications thereof

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)

    2011-01-01

    Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
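
    A minimal sketch of a third-order linear extended state observer (ESO) with the usual bandwidth parameterization, driving a simple second-order plant under an ADRC-style control law, is shown below; the plant, gains, and setpoint are illustrative and are not taken from the patent.

      import numpy as np

      # Illustrative second-order plant: y'' = -a1*y' - a0*y + b*u, integrated with Euler steps.
      a0, a1, b = 1.0, 0.8, 2.0
      dt, t_end = 0.001, 5.0

      # ESO and controller tuning (bandwidth parameterization): observer poles at -wo,
      # controller poles at -wc; b0 is the assumed input gain.
      wo, wc, b0 = 40.0, 8.0, 2.0
      l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3
      kp, kd = wc**2, 2 * wc

      y = yd = 0.0                 # plant state
      z1 = z2 = z3 = 0.0           # ESO states: estimates of y, y', and the total disturbance
      r = 1.0                      # setpoint

      for _ in range(int(t_end / dt)):
          # ADRC control law: cancel the estimated total disturbance z3
          u = (kp * (r - z1) - kd * z2 - z3) / b0
          # ESO update driven by the output estimation error
          e = y - z1
          z1 += dt * (z2 + l1 * e)
          z2 += dt * (z3 + b0 * u + l2 * e)
          z3 += dt * (l3 * e)
          # plant update
          ydd = -a1 * yd - a0 * y + b * u
          yd += dt * ydd
          y += dt * yd

      print("final output %.3f (setpoint %.1f)" % (y, r))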

  4. Optimal strategies for throwing accurately

    PubMed Central

    2017-01-01

    The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed–accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error. PMID:28484641

  5. GOSAT TIR radiometric validation toward simultaneous GHG column and profile observation

    NASA Astrophysics Data System (ADS)

    Kataoka, F.; Knuteson, R. O.; Kuze, A.; Shiomi, K.; Suto, H.; Saitoh, N.

    2015-12-01

    The Greenhouse gases Observing SATellite (GOSAT) was launched in January 2009 and has continued its operation for more than six years. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard GOSAT measures greenhouse gases (GHG), such as CO2 and CH4, with wide and high resolution spectra from the shortwave infrared (SWIR) to the thermal infrared (TIR). This instrument has the advantage of being able to measure the same field of view simultaneously in different spectral ranges. The combination of column GHG from the SWIR bands and vertical GHG profiles from the TIR band provides a better understanding of the distribution of GHG, especially in the troposphere. This work describes the radiometric validation and sensitivity analysis of TANSO-FTS TIR spectra, especially the CO2, atmospheric window and CH4 channels, with forward calculation. In this evaluation, we used accurate in-situ datasets from the HIPPO (HIAPER Pole-to-Pole Observation) airplane observations and from the GOSAT vicarious calibration and validation campaigns in Railroad Valley, NV. The HIPPO aircraft campaign collected accurate atmospheric vertical profile data (T, RH, O3, CO2, CH4, N2O, CO) approximately pole-to-pole, from the surface to the tropopause, over the ocean. We used these datasets for forward calculation and built a spectral correction model with respect to wavenumber and internal calibration blackbody temperature. The GOSAT vicarious calibration campaign has been conducted every year since 2009 near the summer solstice in Railroad Valley, a high-temperature desert site. In this campaign, we have measured temperature and humidity with a radiosonde, and CO2, CH4 and O3 profiles with the AJAX airplane, at the time of the GOSAT overpass. Sometimes the GHG profiles over Railroad Valley show air mass advection in the mid-troposphere, depending on the upper-level wind. These advections bring different concentrations of GHG in the lower and upper troposphere. Using these cases, we made

  6. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
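
    Conceptually, a temporal allocation factor is the ratio of the expected traffic in a given month, day type, or hour to the long-run average, so that an annual-average volume can be apportioned to a specific hour. The sketch below computes such factors from hourly counts with pandas; the synthetic data and grouping choices are illustrative, not the Detroit dataset or the paper's exact factor definitions.

      import numpy as np
      import pandas as pd

      # Synthetic hourly traffic counts for one year at a single counting station
      rng = np.random.default_rng(1)
      idx = pd.date_range("2013-01-01", "2013-12-31 23:00", freq="h")
      diurnal = 1.0 + 0.8 * np.sin((idx.hour - 6) / 24 * 2 * np.pi)
      weekend = np.where(idx.dayofweek >= 5, 0.7, 1.0)
      counts = pd.Series(rng.poisson(400 * diurnal * weekend), index=idx, name="veh")

      annual_mean = counts.mean()
      monthly_taf = counts.groupby(counts.index.month).mean() / annual_mean
      daytype_taf = counts.groupby(counts.index.dayofweek >= 5).mean() / annual_mean
      hourly_taf = counts.groupby(counts.index.hour).mean() / annual_mean

      # Apportion the annual-average hourly volume to, e.g., a July weekday at 08:00
      print(annual_mean * monthly_taf[7] * daytype_taf[False] * hourly_taf[8])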

  7. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    PubMed Central

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-01-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates. PMID:25844042

  8. Describe Your Favorite Teacher.

    ERIC Educational Resources Information Center

    Dill, Isaac; Dill, Vicky

    1993-01-01

    A third grader describes Ms. Gonzalez, his favorite teacher, who left to accept a more lucrative teaching assignment. Ms. Gonzalez' butterflies unit covered everything from songs about social butterflies to paintings of butterfly wings, anatomy studies, and student haiku poems and biographies. Students studied biology by growing popcorn plants…

  9. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
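
    The core idea of the two-source model can be illustrated by placing one noise term before and one after a strong sigmoidal recruitment curve, which naturally produces the changing spread and skewness along the IO curve described above. The particular sigmoid, noise distributions, and parameter values below are illustrative assumptions, not the regression model's actual parameterization.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate_meps(stim_strength, n_trials=200):
          """Simulate MEP amplitudes with variability entering both before (input side)
          and after (output side) a sigmoidal recruitment nonlinearity."""
          x = stim_strength + rng.normal(0.0, 3.0, size=n_trials)              # input-side noise
          recruited = 5.0 / (1.0 + np.exp(-(x - 50.0) / 2.5))                  # sigmoidal recruitment (mV)
          return recruited * rng.lognormal(mean=0.0, sigma=0.2, size=n_trials) # output-side noise

      for s in (40, 50, 60):   # stimulation strengths, e.g. in % of maximum stimulator output
          meps = simulate_meps(s)
          print("strength %d: median %.2f mV, IQR %.2f" % (s, np.median(meps),
                np.percentile(meps, 75) - np.percentile(meps, 25)))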

  10. An eclipsing-binary distance to the Large Magellanic Cloud accurate to two per cent.

    PubMed

    Pietrzyński, G; Graczyk, D; Gieren, W; Thompson, I B; Pilecki, B; Udalski, A; Soszyński, I; Kozłowski, S; Konorski, P; Suchomska, K; Bono, G; Moroni, P G Prada; Villanova, S; Nardetto, N; Bresolin, F; Kudritzki, R P; Storm, J; Gallenne, A; Smolec, R; Minniti, D; Kubiak, M; Szymański, M K; Poleski, R; Wyrzykowski, L; Ulaczyk, K; Pietrukowicz, P; Górski, M; Karczmarek, P

    2013-03-07

    In the era of precision cosmology, it is essential to determine the Hubble constant to an accuracy of three per cent or better. At present, its uncertainty is dominated by the uncertainty in the distance to the Large Magellanic Cloud (LMC), which, being our second-closest galaxy, serves as the best anchor point for the cosmic distance scale. Observations of eclipsing binaries offer a unique opportunity to measure stellar parameters and distances precisely and accurately. The eclipsing-binary method was previously applied to the LMC, but the accuracy of the distance results was lessened by the need to model the bright, early-type systems used in those studies. Here we report determinations of the distances to eight long-period, late-type eclipsing systems in the LMC, composed of cool, giant stars. For these systems, we can accurately measure both the linear and the angular sizes of their components and avoid the most important problems related to the hot, early-type systems. The LMC distance that we derive from these systems (49.97 ± 0.19 (statistical) ± 1.11 (systematic) kiloparsecs) is accurate to 2.2 per cent and provides a firm base for a 3-per-cent determination of the Hubble constant, with prospects for improvement to 2 per cent in the future.

  11. Investigation of a Complex Space-Time Metric to Describe Precognition of the Future

    NASA Astrophysics Data System (ADS)

    Rauscher, Elizabeth A.; Targ, Russell

    2006-10-01

    For more than 100 years scientists have attempted to determine the truth or falsity of claims that some people are able to describe and experience events or information blocked from ordinary perception. For the past 25 years, the authors of this paper - together with researchers in laboratories around the world — have carried out experiments in remote viewing. The evidence for this mode of perception, or direct knowing of distant events and objects, has convinced us of the validity of these claims. It has been widely observed that the accuracy and reliability of this sensory awareness does not diminish with either electromagnetic shielding, nor with increases in temporal or spatial separation between the percipient and the target to be described. Modern physics describes such a time-and-space independent connection between percipient and target as nonlocal. In this paper we present a geometrical model of space-time, which has already been extensively studied in the technical literature of mathematics and physics. This eight-dimensional metric is known as "complex Minkowski space," and has been shown to be consistent with our present understanding of the equations of Newton, Maxwell, Einstein, and Schrödinger. It also has the interesting property of allowing a connection of zero distance between points in the complex manifold, which appear to be separate from one another in ordinary observation. We propose a model that describes the major elements of experimental parapsychology, and at the same time is consistent with the present highly successful structure of modern physics.

  12. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091

  13. A Fibre-Reinforced Poroviscoelastic Model Accurately Describes the Biomechanical Behaviour of the Rat Achilles Tendon

    PubMed Central

    Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna

    2015-01-01

    Background Computational models of Achilles tendons can help understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon’s biomechanical response. However, available models of Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the biomechanical behaviour of the rat Achilles tendon found experimentally. Materials and Methods We developed a new material model of the Achilles tendon, which considers the tendon’s main constituents namely: water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and soft tendon behaviour at reduced strain-rates compared to higher strain-rates that produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour where larger strains produced slower relaxation rates compared to smaller strain levels. Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon’s viscoelastic response. In conclusion, this model can capture the repetitive loading and unloading behaviour of intact and healthy Achilles tendons, which is a critical first step towards understanding tendon homeostasis and function as this biomechanical response changes in diseased tendons. PMID:26030436

  14. Subluminous phase velocity regions of an accurately described Gaussian laser field and laser-driven acceleration

    NASA Astrophysics Data System (ADS)

    Xie, Y. J.; Ho, Y. K.; Cao, N.; Shao, L.; Pang, J.; Chen, Z.; Zhang, S. Y.; Liu, J. R.

    2003-11-01

    By taking account of the high-order corrections to the paraxial approximation of a Gaussian beam, it has been verified that for a focused laser beam propagating in vacuum, there indeed exists a subluminous wave phase velocity region surrounding the laser beam axis. The magnitude of the phase velocity scales as Vϕm ∼ c(1 + b/(kw0)^2), where Vϕm is the phase velocity of the wave, c is the speed of light in vacuum, and w0 is the beam width at focus. This feature gives a reasonable explanation for the mechanism of the capture and acceleration scenario.

  15. Simple three-pool model accurately describes patterns of long-term litter decomposition in diverse climates

    Treesearch

    E. Carol Adair; William J. Parton; Steven J. Del Grosso; Shendee L. Silver; Mark E. Harmon; Sonia A. Hall; Ingrid C. Burke; Stephen C. Hart

    2008-01-01

    As atmospheric CO2 increases, ecosystem carbon sequestration will largely depend on how global changes in climate will alter the balance between net primary production and decomposition. The response of primary production to climatic change has been examined using well-validated mechanistic models, but the same is not true for decomposition, a...

  16. Observing Mode Attitude Controller for the Lunar Reconnaissance Orbiter

    NASA Technical Reports Server (NTRS)

    Calhourn, Philip C.; Garrick, Joseph C.

    2007-01-01

    The Lunar Reconnaissance Orbiter (LRO) mission is the first of a series of lunar robotic spacecraft scheduled for launch in Fall 2008. LRO will spend at least one year in a low altitude polar orbit around the Moon, collecting lunar environment science and mapping data to enable future human exploration. The LRO employs a 3-axis stabilized attitude control system (ACS) whose primary control mode, the "Observing mode", provides Lunar Nadir, off-Nadir, and Inertial fine pointing for the science data collection and instrument calibration. The controller combines the capability of fine pointing with that of on-demand large angle full-sky attitude reorientation into a single ACS mode, providing simplicity of spacecraft operation as well as maximum flexibility for science data collection. A conventional suite of ACS components is employed in this mode to meet the pointing and control objectives. This paper describes the design and analysis of the primary LRO fine pointing and attitude re-orientation controller function, known as the "Observing mode" of the ACS subsystem. The control design utilizes quaternion feedback, augmented with a unique algorithm that ensures accurate Nadir tracking during large angle yaw maneuvers in the presence of high system momentum and/or maneuver rates. Results of system stability analysis and Monte Carlo simulations demonstrate that the observing mode controller can meet fine pointing and maneuver performance requirements.

  17. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  18. Sensor Technology Performance Characteristics- Field and Laboratory Observations

    EPA Science Inventory

    Observed intangible performance characteristics: RH and temperature impacts may be significant for some devices; internal battery lifetimes range from 4 to 24 hours; sensor packaging can interfere with accurate measurements (reactivity); wireless communication protocols are not foolpr...

  19. Nurse students learning acute care by simulation - Focus on observation and debriefing.

    PubMed

    Abelsson, Anna; Bisholt, Birgitta

    2017-05-01

    Simulation creates the possibility to experience acute situations during nursing education which cannot easily be achieved in clinical settings. To describe how nursing students learn acute care of patients through simulation exercises, based on observation and debriefing. The study was designed as an observational study inspired by an ethnographic approach. Data was collected through observations and interviews. Data was analyzed using an interpretive qualitative content analysis. Nursing students created space for reflection when needed. There was a positive learning situation when suitable patient scenarios were presented. Observations and discussions with peers gave the students opportunities to identify their own need for knowledge, while also identifying existing knowledge. Reflections could confirm or reject their preparedness for clinical practice. The importance of working in a structured manner in acute care situations became apparent. However, negative feedback to peers was avoided, which led to a loss of learning opportunity. High fidelity simulation training as a method plays an important part in the nursing students' learning. The teacher also plays a key role by asking difficult questions and guiding students towards accurate knowledge. This makes it possible for the students to close knowledge gaps, leading to improved patient safety. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Accurate and efficient calculation of response times for groundwater flow

    NASA Astrophysics Data System (ADS)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L^2/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
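
    The heuristic scaling mentioned at the end can be checked directly: for a homogeneous one-dimensional aquifer of length L and diffusivity D, the time for a transient solution to come within a small tolerance of steady state grows as L^2/D. The sketch below estimates that time with a simple explicit finite-difference solve; the geometry, boundary conditions, and tolerance are illustrative, and the paper's moment-based method is not reproduced here.

      import numpy as np

      def response_time(L=100.0, D=1.0, n=101, tol=0.01):
          """Time for the 1-D diffusion problem h_t = D h_xx on [0, L], h(x,0)=0,
          h(0,t)=1, h(L,t)=0, to come within `tol` of its (linear) steady state."""
          x = np.linspace(0.0, L, n)
          dx = x[1] - x[0]
          dt = 0.25 * dx**2 / D                      # explicit stability limit with margin
          h = np.zeros(n); h[0] = 1.0
          h_ss = 1.0 - x / L                         # steady state profile
          t = 0.0
          while np.max(np.abs(h - h_ss)) > tol:
              h[1:-1] += dt * D * (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2
              t += dt
          return t

      t1, t2 = response_time(L=100.0), response_time(L=200.0)
      print("t(L=100) = %.0f, t(L=200) = %.0f, ratio = %.2f (L^2 scaling predicts 4)" % (t1, t2, t2 / t1))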

  1. Observing Faculty from a Developmental Perspective

    ERIC Educational Resources Information Center

    Lowman, Joseph

    2014-01-01

    However accurate our observations of others' teaching and well-intended our desire to help, giving advice is a ticklish business. Teaching consultants are like psychotherapists and parents in needing to remember that our primary objective is to develop in others the skills they need to meet life's challenges on their own.

  2. Correlated flux densities from VLBI observations with the DSN

    NASA Technical Reports Server (NTRS)

    Coker, R. F.

    1992-01-01

    Correlated flux densities of extragalactic radio sources in the very long baseline interferometry (VLBI) astrometric catalog are required for the VLBI tracking of Galileo, Mars Observer, and future missions. A system to produce correlated and total flux density catalogs was developed to meet these requirements. A correlated flux density catalog of 274 sources, accurate to about 20 percent, was derived from more than 5000 DSN VLBI observations at 2.3 GHz (S-band) and 8.4 GHz (X-band) using 43 VLBI radio reference frame experiments during the period 1989-1992. Various consistency checks were carried out to ensure the accuracy of the correlated flux densities. All observations were made on the California-Spain and California-Australia DSN baselines using the Mark 3 wideband data acquisition system. A total flux density catalog, accurate to about 20 percent, with data on 150 sources, was also created. Together, these catalogs can be used to predict source strengths to assist in the scheduling of VLBI tracking passes. In addition, for those sources with sufficient observations, a rough estimate of source structure parameters can be made.

  3. Sleep deprivation impairs the accurate recognition of human emotions.

    PubMed

    van der Helm, Els; Gujar, Ninad; Walker, Matthew P

    2010-03-01

    Study Objectives: Investigate the impact of sleep deprivation on the ability to recognize the intensity of human facial emotions. Design: Randomized total sleep-deprivation or sleep-rested conditions, involving between-group and within-group repeated measures analysis. Setting: Experimental laboratory study. Participants: Thirty-seven healthy participants (21 females), aged 18-25 y, were randomly assigned to the sleep control (SC: n = 17) or total sleep deprivation (TSD: n = 20) group. Measurements and Results: Participants performed an emotional face recognition task, in which they evaluated 3 different affective face categories: Sad, Happy, and Angry, each ranging in a gradient from neutral to increasingly emotional. In the TSD group, the task was performed once under conditions of sleep deprivation, and twice under sleep-rested conditions following different durations of sleep recovery. In the SC group, the task was performed twice under sleep-rested conditions, controlling for repeatability. In the TSD group, when sleep-deprived, there was a marked and significant blunting in the recognition of Angry and Happy affective expressions in the moderate (but not extreme) emotional intensity range; differences that were most reliable and significant in female participants. No change in the recognition of Sad expressions was observed. These recognition deficits were, however, ameliorated following one night of recovery sleep. No changes in task performance were observed in the SC group. Conclusions: Sleep deprivation selectively impairs the accurate judgment of human facial emotions, especially threat relevant (Anger) and reward relevant (Happy) categories, an effect observed most significantly in females. Such findings suggest that sleep loss impairs discrete affective neural systems, disrupting the identification of salient affective social cues.

  4. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  5. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
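
    For context, the exact building block that such hybrid schemes retain for the slow reactions is the stochastic simulation algorithm. A minimal direct-method SSA for a toy reversible dimerization is sketched below; it illustrates the exact discrete part only, not the partitioning or the chemical Langevin treatment of the fast subset described in the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy system: A + A -> B (rate constant c1), B -> A + A (rate constant c2)
      c1, c2 = 0.001, 0.1
      state = np.array([1000, 0])                  # copy numbers of [A, B]
      stoich = np.array([[-2, +1], [+2, -1]])      # state change for each reaction

      t, t_end = 0.0, 10.0
      while t < t_end:
          a = np.array([c1 * state[0] * (state[0] - 1) / 2.0, c2 * state[1]])  # propensities
          a0 = a.sum()
          if a0 == 0:
              break
          t += rng.exponential(1.0 / a0)           # time to the next reaction
          j = rng.choice(2, p=a / a0)              # which reaction fires
          state += stoich[j]

      print("t = %.2f, A = %d, B = %d" % (t, state[0], state[1]))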

  6. Accurate Encoding and Decoding by Single Cells: Amplitude Versus Frequency Modulation

    PubMed Central

    Micali, Gabriele; Aquino, Gerardo; Richards, David M.; Endres, Robert G.

    2015-01-01

    Cells sense external concentrations and, via biochemical signaling, respond by regulating the expression of target proteins. Both in signaling networks and gene regulation there are two main mechanisms by which the concentration can be encoded internally: amplitude modulation (AM), where the absolute concentration of an internal signaling molecule encodes the stimulus, and frequency modulation (FM), where the period between successive bursts represents the stimulus. Although both mechanisms have been observed in biological systems, the question of when it is beneficial for cells to use either AM or FM is largely unanswered. Here, we first consider a simple model for a single receptor (or ion channel), which can either signal continuously whenever a ligand is bound, or produce a burst in signaling molecule upon receptor binding. We find that bursty signaling is more accurate than continuous signaling only for sufficiently fast dynamics. This suggests that modulation based on bursts may be more common in signaling networks than in gene regulation. We then extend our model to multiple receptors, where continuous and bursty signaling are equivalent to AM and FM respectively, finding that AM is always more accurate. This implies that the reason some cells use FM is related to factors other than accuracy, such as the ability to coordinate expression of multiple genes or to implement threshold crossing mechanisms. PMID:26030820

  7. RapGene: a fast and accurate strategy for synthetic gene assembly in Escherichia coli

    PubMed Central

    Zampini, Massimiliano; Stevens, Pauline Rees; Pachebat, Justin A.; Kingston-Smith, Alison; Mur, Luis A. J.; Hayes, Finbarr

    2015-01-01

    The ability to assemble DNA sequences de novo through efficient and powerful DNA fabrication methods is one of the foundational technologies of synthetic biology. Gene synthesis, in particular, has been considered the main driver for the emergence of this new scientific discipline. Here we describe RapGene, a rapid gene assembly technique which was successfully tested for the synthesis and cloning of both prokaryotic and eukaryotic genes through a ligation-independent approach. The method developed in this study is a complete bacterial gene synthesis platform for the quick, accurate and cost-effective fabrication and cloning of gene-length sequences that employs the widely used host Escherichia coli. PMID:26062748

  8. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring has been proposed. We investigated the validity range of this generalized model and analytically described the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method could be used for fast and accurate focusing analysis of a large photon sieve.

  9. Modeling σ-Bond Activations by Nickel(0) Beyond Common Approximations: How Accurately Can We Describe Closed-Shell Oxidative Addition Reactions Mediated by Low-Valent Late 3d Transition Metal?

    PubMed

    Hu, Lianrui; Chen, Kejuan; Chen, Hui

    2017-10-10

    Accurate modeling of reactions involving 3d transition metals (TMs) is very challenging to both ab initio and DFT approaches. To gain more knowledge in this field, we herein explored typical σ-bond activations of H-H, C-H, C-Cl, and C-C bonds promoted by nickel(0), a low-valent late 3d TM. For the key parameters of activation energy (ΔE‡) and reaction energy (ΔE_R) for these reactions, various issues related to the computational accuracy were systematically investigated. From the scrutiny of the convergence issue with the one-electron basis set, augmented (A) basis functions are found to be important, and the CCSD(T)/CBS level with complete basis set (CBS) limit extrapolation based on the augmented double-ζ and triple-ζ basis pair (ADZ and ATZ), which produces deviations below 1 kcal/mol from the reference, is recommended for larger systems. As an alternative, the explicitly correlated F12 method can accelerate the basis set convergence further, especially after its CBS extrapolations. Thus, the CCSD(T)-F12/CBS(ADZ-ATZ) level, with computational cost comparable to the conventional CCSD(T)/CBS(ADZ-ATZ) level, is found to reach the accuracy of the conventional CCSD(T)/A5Z level, which produces deviations below 0.5 kcal/mol from the reference, and is also highly recommendable. Scalar relativistic effects and 3s3p core-valence correlation are non-negligible for achieving chemical accuracy of around 1 kcal/mol. From the scrutiny of the convergence issue with the N-electron basis set, in comparison with the reference CCSDTQ result, CCSD(T) is found to be able to calculate ΔE‡ quite accurately, which is not true for the ΔE_R calculations. Using the highest-level CCSD(T) results of ΔE‡ in this work as references, we tested 18 DFT methods and found that PBE0 and CAM-B3LYP are among the three best-performing functionals, irrespective of DFT empirical dispersion correction. With empirical dispersion correction included, ωB97XD is also recommendable due to its improved
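
    The CBS extrapolation referred to here can be illustrated with the common two-point inverse-cube formula for correlation energies; whether this exact form matches the scheme used in the study is not stated in the abstract, and the energies below are placeholders, not values from the paper.

      def cbs_two_point(e_x, e_y, x, y):
          """Two-point complete-basis-set extrapolation assuming the widely used
          E(X) = E_CBS + A / X**3 form for correlation energies, where x and y are
          the cardinal numbers of the basis-set pair (e.g. 2 and 3 for ADZ/ATZ)."""
          return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

      # Placeholder correlation energies in hartree, purely for illustration
      e_adz, e_atz = -0.3521, -0.3847
      print(cbs_two_point(e_adz, e_atz, 2, 3))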

  10. Communication: a density functional with accurate fractional-charge and fractional-spin behaviour for s-electrons.

    PubMed

    Johnson, Erin R; Contreras-García, Julia

    2011-08-28

    We develop a new density-functional approach combining physical insight from chemical structure with treatment of multi-reference character by real-space modeling of the exchange-correlation hole. We are able to recover, for the first time, correct fractional-charge and fractional-spin behaviour for atoms of groups 1 and 2. Based on Becke's non-dynamical correlation functional [A. D. Becke, J. Chem. Phys. 119, 2972 (2003)] and explicitly accounting for core-valence separation and pairing effects, this method is able to accurately describe dissociation and strong correlation in s-shell many-electron systems. © 2011 American Institute of Physics

  11. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present-day O3 radiative forcing produced by models.

  12. Fast and accurate mock catalogue generation for low-mass galaxies

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe

    2016-06-01

    We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with a moderate computation resource for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.

  13. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    NASA Astrophysics Data System (ADS)

    Simmons, Daniel; Cools, Kristof; Sewell, Phillip

    2016-11-01

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  14. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, Daniel, E-mail: daniel.simmons@nottingham.ac.uk; Cools, Kristof; Sewell, Phillip

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  15. GEOS observation systems intercomparison investigation results

    NASA Technical Reports Server (NTRS)

    Berbert, J. H.

    1974-01-01

    The results of an investigation designed to determine the relative accuracy and precision of the different types of geodetic observation systems used by NASA is presented. A collocation technique was used to minimize the effects of uncertainties in the relative station locations and in the earth's gravity field model by installing accurate reference tracking systems close to the systems to be compared, and by precisely determining their relative survey. The Goddard laser and camera systems were shipped to selected sites, where they tracked the GEOS satellite simultaneously with other systems for an intercomparison observation.

  16. WFPC2 Observations of Astrophysically Important Visual Binaries

    NASA Astrophysics Data System (ADS)

    Bond, Howard

    1997-07-01

    We recently used WFPC2 images of Procyon A and B to measure an extremely accurate separation of the bright F star and its much fainter white-dwarf companion. Combined with ground-based astrometry of the bright star, our observation significantly revises downward the derived masses, and brings Procyon A into excellent agreement with theoretical evolutionary tracks for the first time. We now propose to begin a modest but long-term program of WFPC2 measurements of astrophysically important visual binaries, working in a regime of large magnitude differences and/or faint stars where ground-based speckle interferometry cannot compete. We have selected three systems: Procyon {P=40 yr}, for which continued monitoring will even further refine the very accurate masses; Mu Cas {P=21 yr}, a famous metal-deficient G dwarf for which accurate masses will lead to the star's helium content with cosmological implications; and G 107-70, a close double white dwarf {P=18 yr} that promises to add two accurate masses to the tiny handful of white-dwarf masses that are directly known from dynamical measurements.

  17. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8

  18. Low-dimensional, morphologically accurate models of subthreshold membrane potential

    PubMed Central

    Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.

    2009-01-01

    The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386
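
    Balanced Truncation, one of the two reduction techniques mentioned, can be sketched for a generic stable linear system. This is the textbook square-root algorithm, not the authors' implementation, and it assumes the quasi-active model has already been written in state-space form (A, B, C); all array names and the toy usage are illustrative.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

      def balanced_truncation(A, B, C, r):
          """Square-root balanced truncation of a stable LTI system
          dx/dt = A x + B u,  y = C x,  reduced to order r."""
          # Controllability and observability Gramians
          P = solve_continuous_lyapunov(A, -B @ B.T)
          Q = solve_continuous_lyapunov(A.T, -C.T @ C)
          # Cholesky factors and Hankel singular values
          Lc = cholesky(P, lower=True)
          Lo = cholesky(Q, lower=True)
          U, s, Vt = svd(Lo.T @ Lc)
          # Balancing projections for the leading r states
          S_inv_sqrt = np.diag(s[:r] ** -0.5)
          T = Lc @ Vt[:r].T @ S_inv_sqrt        # reduced -> full state
          Ti = S_inv_sqrt @ U[:, :r].T @ Lo.T   # full -> reduced state
          return Ti @ A @ T, Ti @ B, C @ T, s   # (Ar, Br, Cr, Hankel singular values)

      # Toy usage on a random stable system (illustrative only)
      rng = np.random.default_rng(0)
      n = 20
      A = rng.standard_normal((n, n))
      A = A - (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # shift to stability
      B = rng.standard_normal((n, 2))
      C = rng.standard_normal((1, n))
      Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=5)

    The decay of the Hankel singular values indicates how aggressively the state dimension can be truncated while preserving the input-output (synapse-to-soma) response.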

  19. SOHO Observations of a Coronal Mass Ejection

    NASA Astrophysics Data System (ADS)

    Akmal, Arya; Raymond, John C.; Vourlidas, Angelos; Thompson, Barbara; Ciaravella, A.; Ko, Y.-K.; Uzzo, M.; Wu, R.

    2001-06-01

    We describe a coronal mass ejection (CME) observed on 1999 April 23 by the Ultraviolet Coronagraph Spectrometer (UVCS), the Extreme-Ultraviolet Imaging Telescope (EIT), and the Large-Angle and Spectrometric Coronagraphs (LASCO) aboard the Solar and Heliospheric Observatory (SOHO). In addition to the O VI and C III lines typical of UVCS spectra of CMEs, this 480 km s^-1 CME exhibits the forbidden and intercombination lines of O V at λλ1213.8 and 1218.4. The relative intensities of the O V lines represent an accurate electron density diagnostic not generally available at 3.5 R_solar. By combining the density with the column density derived from LASCO, we obtain the emission measure of the ejected gas. With the help of models of the temperature and time-dependent ionization state of the expanding gas, we determine a range of heating rates required to account for the UV emission lines. The total thermal energy deposited as the gas travels to 3.5 R_solar is comparable to the kinetic and gravitational potential energies. We note a core of colder material radiating in C III, surrounded by hotter material radiating in the O V and O VI lines. This concentration of the coolest material into small regions may be a common feature of CMEs. This event thus represents a unique opportunity to describe the morphology of a CME, and to characterize its plasma parameters.

  20. Observing Double Stars

    NASA Astrophysics Data System (ADS)

    Genet, Russell M.; Fulton, B. J.; Bianco, Federica B.; Martinez, John; Baxter, John; Brewer, Mark; Carro, Joseph; Collins, Sarah; Estrada, Chris; Johnson, Jolyon; Salam, Akash; Wallen, Vera; Warren, Naomi; Smith, Thomas C.; Armstrong, James D.; McGaughey, Steve; Pye, John; Mohanan, Kakkala; Church, Rebecca

    2012-05-01

    Double stars have been systematically observed since William Herschel initiated his program in 1779. In 1803 he reported that, to his surprise, many of the systems he had been observing for a quarter century were gravitationally bound binary stars. In 1830 the first binary orbital solution was obtained, leading eventually to the determination of stellar masses. Double star observations have been a prolific field, with observations and discoveries - often made by students and amateurs - routinely published in a number of specialized journals such as the Journal of Double Star Observations. All published double star observations from Herschel's to the present have been incorporated in the Washington Double Star Catalog. In addition to reviewing the history of visual double stars, we discuss four observational technologies and illustrate these with our own observational results from both California and Hawaii on telescopes ranging from small SCTs to the 2-meter Faulkes Telescope North on Haleakala. Two of these technologies are visual observations aimed primarily at published "hands-on" student science education, and CCD observations of both bright and very faint doubles. The other two are recent technologies that have launched a double star renaissance. These are lucky imaging and speckle interferometry, both of which can use electron-multiplying CCD cameras to allow short (30 ms or less) exposures that are read out at high speed with very low noise. Analysis of thousands of high speed exposures allows normal seeing limitations to be overcome so very close doubles can be accurately measured.

  1. Pharmacobezoars described and demystified.

    PubMed

    Simpson, Serge-Emile

    2011-02-01

    A bezoar is a concretion of foreign material that forms and persists in the gastrointestinal tract. Bezoars are classified by their material origins. Phytobezoars contain plant material, trichobezoars contain hair, lactobezoars contain milk proteins, and pharmacobezoars contain pharmaceutical products. Tablets, suspensions, and even insoluble drug delivery vehicles can, on rare occasions, and sometimes under specific circumstances, form pharmacobezoars. The goal of this review is to catalog and examine all of the available reports in the English language medical literature that convincingly describe the formation and management of pharmacobezoars. Articles included in this review were identified by performing searches using the terms "bezoar," "pharmacobezoar," and "concretion" in the following databases: OVID MEDLINE, PubMed, and JSTOR. The complete MEDLINE and JSTOR holdings were included in the search without date ranges. The results were limited to English language publications. Articles that described nonmedication bezoars were not included in the review. Articles describing phytobezoars, food bezoars, fecal impactions, illicit drug packet ingestions, enteral feeding material bezoars, and hygroscopic diet aid bezoars were excluded. The bibliographic references within the articles already accumulated were then examined in order to gather additional pharmacobezoar cases. The cases are grouped by pharmaceutical agent that formed the bezoar, and groupings are arranged in alphabetical order. Discussions and conclusions specific to each pharmaceutical agent are included in that agent's subheading. Patterns and themes that emerged in the review of the assembled case reports are reviewed and presented in a more concise format. Pharmacobezoars form under a wide variety of circumstances and in a wide variety of patients. They are difficult to diagnose reliably. Rules for suspecting, diagnosing, and properly managing a pharmacobezoar are highly dependent on the

  2. Designing and Using Research Instruments to Describe the Beliefs and Practices of Mathematics Teachers

    ERIC Educational Resources Information Center

    Swan, Malcolm

    2006-01-01

    This article describes research instruments that facilitate the description of teachers' beliefs and practices and their use with further education (FE) mathematics teachers. They consist of linked questionnaires aimed at both teachers and students, validated by cross-referencing with more open questionnaires and with classroom observation. They…

  3. Accurate structural and spectroscopic characterization of prebiotic molecules: The neutral and cationic acetyl cyanide and their related species.

    PubMed

    Bellili, A; Linguerri, R; Hochlaf, M; Puzzarini, C

    2015-11-14

    In an effort to provide an accurate structural and spectroscopic characterization of acetyl cyanide, its two enolic isomers and the corresponding cationic species, state-of-the-art computational methods, and approaches have been employed. The coupled-cluster theory including single and double excitations together with a perturbative treatment of triples has been used as starting point in composite schemes accounting for extrapolation to the complete basis-set limit as well as core-valence correlation effects to determine highly accurate molecular structures, fundamental vibrational frequencies, and rotational parameters. The available experimental data for acetyl cyanide allowed us to assess the reliability of our computations: structural, energetic, and spectroscopic properties have been obtained with an overall accuracy of about, or better than, 0.001 Å, 2 kcal/mol, 1-10 MHz, and 11 cm(-1) for bond distances, adiabatic ionization potentials, rotational constants, and fundamental vibrational frequencies, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for guiding future experimental investigations and/or astronomical observations.

  4. Observing the baryon cycle in hydrodynamic cosmological simulations

    NASA Astrophysics Data System (ADS)

    Vander Vliet, Jacob Richard

    An understanding of galaxy evolution requires an understanding of the flow of baryons in and out of a galaxy. The accretion of baryons is required for galaxies to form stars, while stars eject baryons out of the galaxy through stellar feedback mechanisms such as supernovae, stellar winds, and radiation pressure. The interplay between outflowing and infalling material forms the circumgalactic medium (CGM). Hydrodynamic simulations provide understanding of the connection between stellar feedback and the distribution and kinematics of baryons in the CGM. To compare simulations and observations properly, the simulated CGM must be observed in the same manner as the real CGM. I have developed the Mockspec code to generate synthetic quasar absorption line observations of the CGM in cosmological hydrodynamic simulations. Mockspec generates synthetic spectra based on the phase, metallicity, and kinematics of CGM gas and mimics instrumental effects. Mockspec includes automated analysis of the spectra and identifies the gas responsible for the absorption. Mockspec was applied to simulations of dwarf galaxies at low redshift to examine the observable effect different feedback models have on the CGM. While the different feedback models had strong effects on the galaxy, they all produced a similar CGM that failed to match observations. Mockspec was applied to the VELA simulation suite of high-redshift, high-mass galaxies to examine the variance of the CGM across different galaxies in different environments. The observed CGM showed little variation between the different galaxies and almost no evolution from z=4 to z=1. The VELAs were not able to generate a CGM to match the observations. The properties of cells responsible for the absorption were compared to the derived properties from Voigt Profile decomposition. VP modeling was found to accurately describe the HI and MgII absorbing gas but failed for high ionization species such as CIV and OVI, which do not arise in the coherent

  5. Accurate deuterium spectroscopy for fundamental studies

    NASA Astrophysics Data System (ADS)

    Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.

    2018-07-01

    We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D2 and theoretical ab initio calculations of both collisional line-shape effects and energy of this rovibrational transition. The spectra were collected at the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for shift is 8% when referred not to the speed-averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high-pressure measurement to determine the energy, ν_0, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics, reaching the 410 kHz accuracy of ν_0. We report a theoretical determination of ν_0 taking into account relativistic and QED corrections up to α^5. Our estimation of the accuracy of the theoretical ν_0 is 1.3 MHz. We observe a 3.4σ discrepancy between the experimental and theoretical ν_0.

  6. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  7. Limitations of Poisson statistics in describing radioactive decay.

    PubMed

    Sitek, Arkadiusz; Celler, Anna M

    2015-12-01

    The assumption that nuclear decays are governed by Poisson statistics is an approximation. This approximation becomes unjustified when data acquisition times longer than or even comparable with the half-lives of the radioisotope in the sample are considered. In this work, the limits of the Poisson-statistics approximation are investigated. The formalism for the statistics of radioactive decay based on the binomial distribution is derived. The theoretical factor describing the deviation of the variance of the number of decays predicted by the Poisson distribution from the true variance is defined and investigated for several commonly used radiotracers such as (18)F, (15)O, (82)Rb, (13)N, (99m)Tc, (123)I, and (201)Tl. The variance of the number of decays estimated using the Poisson distribution is significantly different than the true variance for a 5-minute observation time of (11)C, (15)O, (13)N, and (82)Rb. Durations of nuclear medicine studies often are relatively long; they may be even a few times longer than the half-lives of some short-lived radiotracers. Our study shows that in such situations the Poisson statistics is unsuitable and should not be applied to describe the statistics of the number of decays in radioactive samples. However, the above statement does not directly apply to counting statistics at the level of event detection. Low sensitivities of detectors which are used in imaging studies make the Poisson approximation near perfect. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
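
    The deviation factor discussed above follows directly from the binomial model: with N0 atoms and decay probability p = 1 - exp(-λt) in a window of length t, the true variance is N0·p·(1-p) while the Poisson approximation gives N0·p, so their ratio is exp(-λt). A minimal sketch, using approximate half-lives that are my own inputs rather than numbers from the paper:

      import numpy as np

      def decay_variance_ratio(half_life, acquisition_time):
          """Ratio of the true (binomial) variance of the number of decays to the
          Poisson-approximation variance for a counting window of the given length.
          Equals exp(-lambda * t); it falls well below 1 when the acquisition time
          is comparable to the half-life."""
          lam = np.log(2.0) / half_life
          return np.exp(-lam * acquisition_time)

      # 5-minute acquisitions for a few tracers (approximate half-lives in minutes)
      for iso, t_half in {"C-11": 20.4, "O-15": 2.04, "N-13": 9.97, "Tc-99m": 360.6}.items():
          print(iso, round(decay_variance_ratio(t_half, 5.0), 3))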

  8. Detection of facilities in satellite imagery using semi-supervised image classification and auxiliary contextual observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, Neal R; Ruggiero, Christy E; Pawley, Norma H

    2009-01-01

    Detecting complex targets, such as facilities, in commercially available satellite imagery is a difficult problem that human analysts try to solve by applying world knowledge. Often there are known observables that can be extracted by pixel-level feature detectors that can assist in the facility detection process. Individually, each of these observables is not sufficient for an accurate and reliable detection, but in combination, these auxiliary observables may provide sufficient context for detection by a machine learning algorithm. We describe an approach for automatic detection of facilities that uses an automated feature extraction algorithm to extract auxiliary observables, and a semi-supervised assisted target recognition algorithm to then identify facilities of interest. We illustrate the approach using an example of finding schools in Quickbird image data of Albuquerque, New Mexico. We use Los Alamos National Laboratory's Genie Pro automated feature extraction algorithm to find a set of auxiliary features that should be useful in the search for schools, such as parking lots, large buildings, sports fields and residential areas and then combine these features using Genie Pro's assisted target recognition algorithm to learn a classifier that finds schools in the image data.

  9. Accurate Mars Express orbits to improve the determination of the mass and ephemeris of the Martian moons

    NASA Astrophysics Data System (ADS)

    Rosenblatt, P.; Lainey, V.; Le Maistre, S.; Marty, J. C.; Dehant, V.; Pätzold, M.; Van Hoolst, T.; Häusler, B.

    2008-05-01

    The determination of the ephemeris of the Martian moons has benefited from observations of their plane-of-sky positions derived from images taken by cameras onboard spacecraft orbiting Mars. Images obtained by the Super Resolution Camera (SRC) onboard Mars Express (MEX) have been used to derive moon positions relative to Mars on the basis of a fit of a complete dynamical model of their motion around Mars. Since these positions are computed from the relative position of the spacecraft when the images are taken, those positions need to be known as accurately as possible. An accurate MEX orbit is obtained by fitting two years of tracking data of the Mars Express Radio Science (MaRS) experiment onboard MEX. The average accuracy of the orbits has been estimated to be around 20-25 m. From these orbits, we have re-derived the positions of Phobos and Deimos at the epoch of the SRC observations and compared them with the positions derived by using the MEX orbits provided by the ESOC navigation team. After fitting the orbital model of Phobos and Deimos, the gain in precision in the Phobos position is roughly 30 m, corresponding to the estimated gain of accuracy of the MEX orbits. A new solution of the GM of the Martian moons has also been obtained from the accurate MEX orbits, which is consistent with previous solutions and, for Phobos, is more precise than the solution from the Mars Global Surveyor (MGS) and Mars Odyssey (ODY) tracking data. It will be further improved with data from MEX-Phobos closer encounters (at a distance less than 300 km). This study also demonstrates the advantage of combining observations of the moon positions from a spacecraft and from the Earth to assess the real accuracy of the spacecraft orbit. In turn, the natural satellite ephemerides can be improved and contribute to a better knowledge of the origin and evolution of the Martian moons.

  10. Ensemble-sensitivity Analysis Based Observation Targeting for Mesoscale Convection Forecasts and Factors Influencing Observation-Impact Prediction

    NASA Astrophysics Data System (ADS)

    Hill, A.; Weiss, C.; Ancell, B. C.

    2017-12-01

    The basic premise of observation targeting is that additional observations, when gathered and assimilated with a numerical weather prediction (NWP) model, will produce a more accurate forecast related to a specific phenomenon. Ensemble-sensitivity analysis (ESA; Ancell and Hakim 2007; Torn and Hakim 2008) is a tool capable of accurately estimating the proper location of targeted observations in areas that have initial model uncertainty and large error growth, as well as predicting the reduction of forecast variance due to the assimilated observation. ESA relates an ensemble of NWP model forecasts, specifically an ensemble of scalar forecast metrics, linearly to earlier model states. A thorough investigation is presented to determine how different factors of the forecast process are impacting our ability to successfully target new observations for mesoscale convection forecasts. Our primary goals for this work are to determine: (1) whether targeted observations yield more positive impact than non-targeted (i.e. randomly chosen) observations; (2) whether there are lead-time constraints to targeting for convection; (3) how inflation, localization, and the assimilation filter influence impact prediction and realized results; (4) whether there exist differences between targeted observations at the surface versus aloft; and (5) how physics errors and nonlinearity may augment observation impacts. Ten cases of dryline-initiated convection from 2011 to 2013 are simulated within a simplified OSSE framework and presented here. Ensemble simulations are produced from a cycling system that utilizes the Weather Research and Forecasting (WRF) model v3.8.1 within the Data Assimilation Research Testbed (DART). A "truth" (nature) simulation is produced by supplying a 3-km WRF run with GFS analyses and integrating the model forward 90 hours, from the beginning of ensemble initialization through the end of the forecast. Target locations for surface and radiosonde observations are computed 6, 12, and
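
    The ensemble-sensitivity calculation at the heart of this targeting approach reduces to a univariate regression of the scalar forecast metric on each analysis variable across the ensemble (Ancell and Hakim 2007). The sketch below assumes the metric values and analysis perturbations have already been extracted from the WRF/DART ensemble; the array names and toy data are illustrative.

      import numpy as np

      def ensemble_sensitivity(forecast_metric, initial_state):
          """Sensitivity dJ/dx_i ~ cov(J, x_i) / var(x_i) for each analysis variable,
          estimated across ensemble members.

          forecast_metric : scalar metric J per member, shape (n_members,)
          initial_state   : analysis variables,        shape (n_members, n_vars)
          """
          J = forecast_metric - forecast_metric.mean()
          X = initial_state - initial_state.mean(axis=0)
          cov = X.T @ J / (len(J) - 1)          # covariance of J with each variable
          var = X.var(axis=0, ddof=1)           # ensemble variance of each variable
          return cov / var

      # Illustrative sizes: 50 members, 1000 analysis grid points
      rng = np.random.default_rng(0)
      x0 = rng.standard_normal((50, 1000))
      J = 2.0 * x0[:, 0] + 0.1 * rng.standard_normal(50)   # toy forecast metric
      dJdx = ensemble_sensitivity(J, x0)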

  11. Validation of Mode-S Meteorological Routine Air Report aircraft observations

    NASA Astrophysics Data System (ADS)

    Strajnar, B.

    2012-12-01

    The success of mesoscale data assimilation depends on the availability of three-dimensional observations with high spatial and temporal resolution. This paper describes an example of such observations, available through the Mode-S air traffic control system, which is composed of ground radars and transponders on board aircraft. The meteorological information is provided by interrogation of a dedicated meteorological data register, called Meteorological Routine Air Report (MRAR). MRAR provides direct measurements of temperature and wind, but is only returned by a small fraction of aircraft. The quality of Mode-S MRAR data, collected at the Ljubljana Airport, Slovenia, is assessed by its comparison with AMDAR and high-resolution radiosonde data sets, which enable high- and low-level validation, respectively. The need for temporal smoothing of raw Mode-S MRAR data is also studied. The standard deviation of differences between smoothed Mode-S MRAR and AMDAR is 0.35°C for temperature, 0.8 m/s for wind speed and below 10 degrees for wind direction. The differences with respect to radiosondes are larger, with standard deviations of approximately 1.7°C, 3 m/s and 25 degrees for temperature, wind speed and wind direction, respectively. It is concluded that both wind and temperature observations from Mode-S MRAR are accurate and therefore potentially very useful for data assimilation in numerical weather prediction models.
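
    A minimal sketch of the comparison statistics quoted above, assuming matched pairs of Mode-S MRAR and reference (AMDAR or radiosonde) reports have already been formed; wind directions are wrapped into (-180, 180] degrees before differencing. Function and variable names are illustrative.

      import numpy as np

      def paired_difference_stats(ref, test, is_direction=False):
          """Mean and standard deviation of (test - ref) for matched report pairs."""
          d = np.asarray(test, float) - np.asarray(ref, float)
          if is_direction:
              d = (d + 180.0) % 360.0 - 180.0   # handle the 360/0 degree wrap
          return d.mean(), d.std(ddof=1)

      # e.g. wind-direction comparison in degrees (synthetic numbers)
      bias, sd = paired_difference_stats([250.0, 10.0, 355.0], [245.0, 15.0, 2.0],
                                         is_direction=True)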

  12. Fast and accurate computation of projected two-point functions

    NASA Astrophysics Data System (ADS)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξ_ℓ^ν(r), or spherical harmonic space, C_ℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm therefore circumvents direct integration of highly oscillating spherical Bessel functions.
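
    For contrast with the FFTLog-based approach, the brute-force projection can be written directly as a quadrature over the oscillatory integrand, which is exactly what 2-FAST is designed to avoid. The toy power spectrum below is an arbitrary smooth function, not a cosmological model, and convention-dependent prefactors (e.g. i^ℓ for higher multipoles) are omitted.

      import numpy as np
      from scipy.special import spherical_jn

      def xi_ell_direct(r, k, pk, ell=0):
          """Direct quadrature of xi_ell(r) = (1 / 2 pi^2) * Integral dk k^2 P(k) j_ell(k r)
          on a fixed k-grid (for ell = 0 this is the real-space correlation function)."""
          integrand = k**2 * pk * spherical_jn(ell, np.outer(r, k))
          return np.trapz(integrand, k, axis=-1) / (2.0 * np.pi**2)

      # Toy smooth power spectrum and a dense k-grid to resolve the oscillations
      k = np.linspace(1e-4, 10.0, 200_000)
      pk = k / (1.0 + (k / 0.1)**3)
      r = np.linspace(10.0, 150.0, 15)
      xi0 = xi_ell_direct(r, k, pk, ell=0)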

  13. Observing with CHEOPS

    NASA Astrophysics Data System (ADS)

    Isaak, K. G.

    2017-09-01

    CHEOPS (CHaracterising ExOPlanet Satellite) is the first exoplanet mission dedicated to the search for transits of exoplanets by means of ultrahigh precision photometry (optical/near-IR) of bright stars already known to host planets, with launch readiness foreseen by the end of 2018. It is also the first S-class mission in ESA's Cosmic Vision 2015-2025. The mission is a partnership between Switzerland and ESA's science programme, with important contributions from 10 other member states. It will provide the unique capability of determining accurate radii for a subset of those planets in the super-Earth to Neptune mass range, for which the mass has already been estimated from ground-based spectroscopic surveys. 20% of the observing time in the 3.5 year nominal mission will be available to Guest Observers from the Community. Proposals will be requested through open calls from ESA that are foreseen to be every year, with the first 6 months before launch. In this poster I will provide an overview of how to obtain data from CHEOPS, with a particular focus on the CHEOPS Guest Observers Programme.

  14. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  15. Combining angular differential imaging and accurate polarimetry with SPHERE/IRDIS to characterize young giant exoplanets

    NASA Astrophysics Data System (ADS)

    van Holstein, Rob G.; Snik, Frans; Girard, Julien H.; de Boer, Jozua; Ginski, C.; Keller, Christoph U.; Stam, Daphne M.; Beuzit, Jean-Luc; Mouillet, David; Kasper, Markus; Langlois, Maud; Zurlo, Alice; de Kok, Remco J.; Vigan, Arthur

    2017-09-01

    Young giant exoplanets emit infrared radiation that can be linearly polarized up to several percent. This linear polarization can trace: 1) the presence of atmospheric cloud and haze layers, 2) spatial structure, e.g. cloud bands and rotational flattening, 3) the spin axis orientation and 4) particle sizes and cloud top pressure. We introduce a novel high-contrast imaging scheme that combines angular differential imaging (ADI) and accurate near-infrared polarimetry to characterize self-luminous giant exoplanets. We implemented this technique at VLT/SPHERE-IRDIS and developed the corresponding observing strategies, the polarization calibration and the data-reduction approaches. The combination of ADI and polarimetry is challenging, because the field rotation required for ADI negatively affects the polarimetric performance. By combining ADI and polarimetry we can characterize planets that can be directly imaged with a very high signal-to-noise ratio. We use the IRDIS pupil-tracking mode and combine ADI and principal component analysis to reduce speckle noise. We take advantage of IRDIS' dual-beam polarimetric mode to eliminate differential effects that severely limit the polarimetric sensitivity (flat-fielding errors, differential aberrations and seeing), and thus further suppress speckle noise. To correct for instrumental polarization effects, we apply a detailed Mueller matrix model that describes the telescope and instrument and that has an absolute polarimetric accuracy <= 0.1%. Using this technique we have observed the planets of HR 8799 and the (sub-stellar) companion PZ Tel B. Unfortunately, we do not detect a polarization signal in a first analysis. We estimate preliminary 1σ upper limits on the degree of linear polarization of ˜ 1% and ˜ 0.1% for the planets of HR 8799 and PZ Tel B, respectively. The achieved sub-percent sensitivity and accuracy show that our technique has great promise for characterizing exoplanets through direct-imaging polarimetry.

  16. Accurate Characterization of Rain Drop Size Distribution Using Meteorological Particle Spectrometer and 2D Video Disdrometer for Propagation and Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Thurai, Merhala; Bringi, Viswanathan; Kennedy, Patrick; Notaros, Branislav; Gatlin, Patrick

    2017-01-01

    Accurate measurements of rain drop size distributions (DSD), with particular emphasis on small and tiny drops, are presented. Measurements were conducted in two very different climate regions, namely Northern Colorado and Northern Alabama. Both datasets reveal a combination of (i) a drizzle mode for drop diameters less than 0.7 mm and (ii) a precipitation mode for larger diameters. Scattering calculations using the DSDs are performed at S and X bands and compared with radar observations for the first location. Our accurate DSDs will improve radar-based rain rate estimates as well as propagation predictions.
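
    One simple way to picture the two-mode structure described above is to fit the measured spectrum with a sum of two exponential DSD modes, a steep drizzle mode dominating below about 0.7 mm and a shallower precipitation mode at larger diameters. The functional form, parameter values, and synthetic "observations" below are assumptions for illustration only, not the paper's retrieval.

      import numpy as np
      from scipy.optimize import curve_fit

      def two_mode_dsd(D, n1, lam1, n2, lam2):
          """Assumed double-exponential DSD: N(D) = n1*exp(-lam1*D) + n2*exp(-lam2*D),
          with D in mm and N(D) in m^-3 mm^-1."""
          return n1 * np.exp(-lam1 * D) + n2 * np.exp(-lam2 * D)

      # Synthetic data standing in for disdrometer-derived N(D)
      D = np.linspace(0.1, 4.0, 40)
      noise = np.random.default_rng(0).lognormal(0.0, 0.05, D.size)
      N = two_mode_dsd(D, 8000.0, 12.0, 1500.0, 2.5) * noise
      popt, _ = curve_fit(two_mode_dsd, D, N, p0=[5000.0, 10.0, 1000.0, 2.0])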

  17. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
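
    The kernel-ridge-regression learner underlying such predictions can be sketched in a few lines; a Laplacian kernel over L1 distances is commonly paired with Bag-of-Bonds-style vectors, but the feature construction itself is omitted here and the hyperparameters and synthetic data are placeholders, not settings from the paper.

      import numpy as np

      def krr_train_predict(X_train, y_train, X_test, sigma=25.0, lam=1e-6):
          """Kernel ridge regression with a Laplacian kernel k(a, b) = exp(-|a-b|_1 / sigma)."""
          def kernel(A, B):
              d = np.abs(A[:, None, :] - B[None, :, :]).sum(-1)   # pairwise L1 distances
              return np.exp(-d / sigma)
          K = kernel(X_train, X_train)
          alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)   # ridge-regularized fit
          return kernel(X_test, X_train) @ alpha

      # Synthetic demonstration: 200 random 30-dimensional "bag" vectors
      rng = np.random.default_rng(1)
      X = rng.random((200, 30))
      y = X.sum(axis=1) + 0.01 * rng.standard_normal(200)
      y_pred = krr_train_predict(X[:150], y[:150], X[150:])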

  18. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956

  19. Analysis of High Temporal and Spatial Observations of Hurricane Joaquin During TCI-15

    NASA Technical Reports Server (NTRS)

    Creasey, Robert; Elsberry, Russell L.; Velden, Chris; Cecil, Daniel J.; Bell, Michael; Hendricks, Eric A.

    2016-01-01

    Objectives: Provide an example of why analysis of high-density soundings across Hurricane Joaquin also requires highly accurate center positions; Describe the technique for calculating 3-D zero-wind center positions from the highly accurate GPS positions of sequences of High-Density Sounding System (HDSS) soundings as they fall from 10 km to the ocean surface; Illustrate the vertical tilt of the vortex above 4-5 km during two center passes through Hurricane Joaquin on 4 October 2015.

  20. Calibration of TOMS Radiances From Ground Observations

    NASA Technical Reports Server (NTRS)

    Bojkov, B. R.; Kowalewski, M.; Wellemeyer, C.; Labow, G.; Hilsenrath, E.; Bhartia, P. K.; Ahmad, Z.

    2003-01-01

    Verification of a stratospheric ozone recovery remains a high priority for environmental research and policy definition. Models predict an ozone recovery at a much lower rate than the measured depletion rate observed to date. Therefore improved precision of the satellite and ground ozone observing systems is required over the long term to verify its recovery. We show that validation of radiances from the ground can be a very effective means for correcting long-term drifts of backscatter type satellite measurements and can be used to cross calibrate all BUV instruments in orbit (TOMS, SBUV/2, GOME, SCIAMACHY, OMI, GOME-2, OMPS). This method bypasses the retrieval algorithms used to derive ozone products from both satellite and ground based measurements that are normally used to validate the satellite data. Radiance comparisons employ forward models, but they are inherently more accurate than the retrievals. This method employs very accurate comparisons between ground based zenith sky radiances and satellite nadir radiances and employs two well established capabilities at the Goddard Space Flight Center, 1) the SSBUV calibration facilities and 2) the radiative transfer codes used for the TOMS and SBUV/2 algorithms and their subsequent refinements. The zenith sky observations are made by the SSBUV where its calibration is maintained to a high degree of accuracy and precision. Radiative transfer calculations show that ground based zenith sky and satellite nadir backscatter ultraviolet comparisons can be made very accurately under certain viewing conditions. Initial ground observations taken from Goddard Space Flight Center compared with radiative transfer calculations have indicated the feasibility of this method. The effect of aerosols and varying ozone amounts are considered in the model simulations and the theoretical comparisons. The radiative transfer simulations show that the ground and satellite radiance comparisons can be made with an uncertainty of less than l

  1. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE PAGES

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...

    2017-10-17

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF & RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF & RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  2. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF & RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF & RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  3. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2018-01-01

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  4. Use of negative binomial distribution to describe the presence of Anisakis in Thyrsites atun.

    PubMed

    Peña-Rehbein, Patricio; De los Ríos-Escalante, Patricio

    2012-01-01

    Nematodes of the genus Anisakis have marine fishes as intermediate hosts. One of these hosts is Thyrsites atun, an important fishery resource in Chile between 38 and 41° S. This paper describes the frequency and number of Anisakis nematodes in the internal organs of Thyrsites atun. An analysis based on spatial distribution models showed that the parasites tend to be clustered. The variation in the number of parasites per host could be described by the negative binomial distribution. The maximum observed number of parasites was nine parasites per host. The environmental and zoonotic aspects of the study are also discussed.
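
    A method-of-moments fit of the negative binomial to per-host counts is the simplest way to quantify the clustering described above (overdispersion, i.e. variance exceeding the mean, is what rules out the Poisson model). The counts in the example below are synthetic, not the study's data.

      import numpy as np
      from scipy.stats import nbinom

      def fit_nbinom_moments(counts):
          """Method-of-moments negative binomial fit in scipy's (n, p) parameterisation:
          mean = n(1-p)/p, var = n(1-p)/p^2, so p = mean/var and n = mean^2/(var - mean)."""
          counts = np.asarray(counts, dtype=float)
          m, v = counts.mean(), counts.var(ddof=1)
          if v <= m:
              raise ValueError("no overdispersion: a Poisson model may suffice")
          return m**2 / (v - m), m / v

      # Illustrative per-host parasite counts (synthetic)
      counts = [0, 0, 1, 0, 2, 0, 0, 5, 0, 1, 0, 9, 0, 0, 3]
      n, p = fit_nbinom_moments(counts)
      print(n, p, nbinom.mean(n, p), nbinom.var(n, p))   # fitted moments match the data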

  5. Thermal Evolution and Radiative Output of Solar Flares Observed by the EUV Variability Experiment (EVE)

    NASA Technical Reports Server (NTRS)

    Chamberlin, P. C.; Milligan, R. O.; Woods, T. N.

    2012-01-01

    This paper describes the methods used to obtain the thermal evolution and radiative output during solar flares as observed by the Extreme Ultraviolet Variability Experiment (EVE) onboard the Solar Dynamics Observatory (SDO). Presented and discussed in detail are how EVE measurements, due to its temporal cadence, spectral resolution and spectral range, can be used to determine how the thermal plasma radiates at various temperatures throughout the impulsive and gradual phase of flares. EVE can very accurately determine the radiative output of flares due to pre- and in-flight calibrations. Events are presented that show the total radiated output of flares depends more on the flare duration than the typical GOES X-ray peak magnitude classification. With SDO observing every flare throughout its entire duration and over a large temperature range, new insights into flare heating and cooling as well as the radiative energy release in EUV wavelengths support existing research into understanding the evolution of solar flares.

  6. Stream dynamics between 1 AU and 2 AU: A detailed comparison of observations and theory

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Pizzo, V.; Lazarus, A.; Gazis, P. R.

    1984-01-01

    A radial alignment of three solar wind stream structures observed by IMP-7 and -8 (at 1.0 AU) and Voyager 1 and 2 (in the range 1.4 to 1.8 AU) in late 1977 is presented. It is demonstrated that several important aspects of the observed dynamical evolution can be both qualitatively and quantitatively described with a single-fluid 2-D MHD numerical model of quasi-steady corotating flow, including accurate prediction of: (1) the formation of a corotating shock pair at 1.75 AU in the case of a simple, quasi-steady stream; (2) the coalescence of the thermodynamic and magnetic structures associated with the compression regions of two neighboring, interacting, corotating streams; and (3) the dynamical destruction of a small (i.e., low velocity-amplitude, short spatial-scale) stream by its overtaking of a slower moving, high-density region associated with a preceding transient flow. The evolution of these flow systems is discussed in terms of the concepts of filtering and entrainment.

  7. Estimation of a super-resolved PSF for the data reduction of undersampled stellar observations. Deriving an accurate model for fitting photometry with Corot space telescope

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, L.; Auvergne, M.; Toublanc, D.; Rowe, J.; Kuschnig, R.; Matthews, J.

    2006-06-01

    Context: Fitting photometry algorithms can be very effective provided that an accurate model of the instrumental point spread function (PSF) is available. When high-precision time-resolved photometry is required, however, the use of point-source star images as empirical PSF models can be unsatisfactory, due to the limits in their spatial resolution. Theoretically-derived models, on the other hand, are limited by the unavoidable assumption of simplifying hypotheses, while the use of analytical approximations is restricted to regularly-shaped PSFs. Aims: This work investigates an innovative technique for space-based fitting photometry, based on the reconstruction of an empirical but properly-resolved PSF. The aim is the exploitation of arbitrary star images, including those produced under intentional defocus. The cases of both MOST and COROT, the first space telescopes dedicated to time-resolved stellar photometry, are considered in the evaluation of the effectiveness and performances of the proposed methodology. Methods: PSF reconstruction is based on a set of star images, periodically acquired and presenting relative subpixel displacements due to motion of the acquisition system, in this case the jitter of the satellite attitude. Higher resolution is achieved through the solution of the inverse problem. The approach can be regarded as a special application of super-resolution techniques, though a specialised procedure is proposed to better address the specificities of the PSF determination problem. The application of such a model to fitting photometry is illustrated by numerical simulations for COROT and on a complete set of observations from MOST. Results: We verify that, in both scenarios, significantly better resolved PSFs can be estimated, leading to corresponding improvements in photometric results. For COROT, indeed, subpixel reconstruction enabled the successful use of fitting algorithms despite its rather complex PSF profile, which could hardly be modeled

  8. Toward observationally constrained high space and time resolution CO2 urban emission inventories

    NASA Astrophysics Data System (ADS)

    Maness, H.; Teige, V. E.; Wooldridge, P. J.; Weichsel, K.; Holstius, D.; Hooker, A.; Fung, I. Y.; Cohen, R. C.

    2013-12-01

    The spatial patterns of greenhouse gas (GHG) emission and sequestration are currently studied primarily by sensor networks and modeling tools that were designed for global and continental scale investigations of sources and sinks. In urban contexts, by design, there has been very limited investment in observing infrastructure, making it difficult to demonstrate that we have an accurate understanding of the mechanism of emissions or the ability to track processes causing changes in those emissions. Over the last few years, our team has built a new high-resolution observing instrument to address urban CO2 emissions, the BErkeley Atmospheric CO2 Observing Network (BEACON). The 20-node network is constructed on a roughly 2 km grid, permitting direct characterization of the internal structure of emissions within the San Francisco East Bay. Here we present a first assessment of BEACON's promise for evaluating the effectiveness of current and upcoming local emissions policy. Within the next several years, a variety of locally important changes are anticipated--including widespread electrification of the motor vehicle fleet and implementation of a new power standard for ships at the port of Oakland. We describe BEACON's expected performance for detecting these changes, based on results from regional forward modeling driven by a suite of projected inventories. We will further describe the network's current change detection capabilities by focusing on known high temporal frequency changes that have already occurred; examples include a week of significant freeway traffic congestion following the temporary shutdown of the local commuter rail (the Bay Area Rapid Transit system).

  9. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

    Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  10. How to describe disordered structures

    NASA Astrophysics Data System (ADS)

    Nishio, Kengo; Miyazaki, Takehide

    2016-04-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way.

  11. Stereotypes of Age Differences in Personality Traits: Universal and Accurate?

    PubMed Central

    Chan, Wayne; McCrae, Robert R.; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E.; De Bolle, Marleen; Costa, Paul T.; Sutin, Angelina R.; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Kourilova, Sylvie; Yik, Michelle; Ficková, Emília; Brunner-Sciarra, Marina; de Figueora, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E.; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R.; Crawford, Jarret T.; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R.; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A.; Gheorghiu, Mirona; Smith, Peter B.; Barbaranelli, Claduio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P.; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V.; Pramila, V. S.; Terracciano, Antonio

    2012-01-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups. PMID:23088227

  12. Sigmoid function based integral-derivative observer and application to autopilot design

    NASA Astrophysics Data System (ADS)

    Shao, Xingling; Wang, Honglun; Liu, Jun; Tang, Jun; Li, Jie; Zhang, Xiaoming; Shen, Chong

    2017-02-01

    To handle the problems of accurate signal reconstruction and controller implementation with integral and derivative components in the presence of noisy measurements, and motivated by the design principles of the sigmoid-function-based tracking differentiator and the nonlinear continuous integral-derivative observer, a novel sigmoid-function-based integral-derivative observer (SIDO) is developed. The key merit of the proposed SIDO is that it can simultaneously provide continuous integral and derivative estimates with almost no drift or chattering and with acceptable noise tolerance from the output measurement; its stability is established using exponential stability arguments and singular perturbation theory. In addition, the effectiveness of SIDO in suppressing drift and high-frequency noise is first revealed using the describing function method and confirmed through simulation comparisons. Finally, the theoretical results on SIDO are demonstrated in an autopilot design: 1) the integral and tracking estimates are extracted from the sensed pitch angular rate contaminated by non-white noise in the feedback loop, and 2) a PID (proportional-integral-derivative) attitude controller is realized by adopting the error estimates provided by SIDO instead of the ideal integral and derivative operators, achieving satisfactory tracking performance under control constraints.

  13. Accurately measuring volcanic plume velocity with multiple UV spectrometers

    USGS Publications Warehouse

    Williams-Jones, Glyn; Horton, Keith A.; Elias, Tamar; Garbeil, Harold; Mouginis-Mark, Peter J; Sutton, A. Jeff; Harris, Andrew J. L.

    2006-01-01

    A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time varying structure of SO2 concentration signals at each instrument, the plume velocity can accurately be determined. Experiments were conducted on Kīlauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s−1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements.
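
    A minimal sketch of the underlying idea, not the authors' processing chain: the transit time of SO2 structures between two time-synchronized spectrometers a known distance apart is taken from the lag that maximizes their cross-correlation, and the speed is distance over lag. The 150 m spacing, 1 Hz sampling, and synthetic signals are assumptions for the example.

```python
# Minimal sketch: plume speed from the cross-correlation lag of two SO2 time series.
# Sensor spacing, sampling rate, and signals are illustrative assumptions.
import numpy as np

def plume_speed(upwind, downwind, dt, separation):
    """Return speed (m/s) from the lag that maximizes the cross-correlation."""
    a = upwind - upwind.mean()
    b = downwind - downwind.mean()
    corr = np.correlate(b, a, mode="full")
    lag = (np.argmax(corr) - (len(a) - 1)) * dt   # seconds; positive if downwind lags upwind
    return separation / lag if lag else float("inf")

rng = np.random.default_rng(0)
dt, separation = 1.0, 150.0                       # 1 Hz sampling, 150 m between instruments
t = np.arange(600.0)
puff = np.exp(-((t - 250.0) / 60.0) ** 2)         # synthetic SO2 "puff"
upwind = puff + 0.05 * rng.standard_normal(t.size)
downwind = np.roll(puff, 30) + 0.05 * rng.standard_normal(t.size)   # 30 s transit time
print(f"plume speed ≈ {plume_speed(upwind, downwind, dt, separation):.1f} m/s (expected 5.0)")
```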

  14. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and
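
    The sketch below illustrates one of the tested strategies, choosing the threshold at the minimum of the second derivative of an edge intensity profile. The logistic edge is a synthetic stand-in for a real fluorescent-cell profile, so the particular threshold value it yields should not be read as the paper's result.

```python
# Minimal sketch: threshold from the second-derivative minimum of an edge profile.
# The profile is a synthetic bright-to-dark logistic edge, not a measured cell image.
import numpy as np

x = np.linspace(-5.0, 5.0, 201)
profile = 100.0 / (1.0 + np.exp(4.0 * x))   # bright object (left) fading into dark background

d1 = np.gradient(profile, x)                # first derivative of the profile
d2 = np.gradient(d1, x)                     # second derivative
threshold = profile[np.argmin(d2)]          # intensity at the second-derivative minimum
print(f"segmentation threshold ≈ {threshold:.1f} (profile peak is 100)")
```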

  15. Highly accurate analytic formulae for projectile motion subjected to quadratic drag

    NASA Astrophysics Data System (ADS)

    Turkyilmazoglu, Mustafa

    2016-05-01

    The classical problem of a projectile fired (thrown) toward the horizon through resistive air that exerts a quadratic drag on the object is revisited in this paper. No exact solution is known that describes the full physical event under such a resistance force. The principal aim is to find elegant analytical approximations for the most interesting engineering features of the projectile's dynamical behavior. To this end, explicit analytical expressions are derived that accurately predict the maximum height, the time to reach it, and the horizontal range of the projectile at the highest point of its ascent. The most significant property of the proposed formulas is that they are not restricted to particular values of the initial speed and firing angle of the object, nor of the drag coefficient of the medium. In combination with the approximations available in the literature, it is possible to characterize the flight and complete the picture of the trajectory with high precision, without having to numerically simulate the full governing equations of motion.
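
    For comparison with such closed-form approximations, the apex quantities can also be obtained by direct numerical integration; the short sketch below does this for an assumed drag coefficient and launch condition (the values are illustrative, not taken from the paper).

```python
# Minimal sketch: integrate projectile motion with quadratic drag up to the apex.
# g, k, v0, and the launch angle are illustrative assumptions.
import numpy as np

g, k = 9.81, 0.02                      # gravity (m/s^2), quadratic drag coefficient (1/m)
v0, angle = 50.0, np.radians(40.0)
vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
x = y = t = 0.0
dt = 1e-4

while vy > 0.0:                        # stop at the apex (vertical velocity reaches zero)
    speed = np.hypot(vx, vy)
    ax = -k * speed * vx               # drag opposes the velocity vector
    ay = -g - k * speed * vy
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    t += dt

print(f"apex height {y:.1f} m, apex time {t:.2f} s, horizontal range at apex {x:.1f} m")
```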

  16. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg. 1 fig.
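
    The stoichiometry behind the first embodiment can be sketched with Faraday's law: reducing the Hg(2+) from dissolved HgO requires two electrons per atom, so the charge passed fixes the plated mass. The target mass and current below are assumed example values, not figures from the patent.

```python
# Minimal sketch: Faraday's-law bookkeeping for plating a pre-determined mass of Hg
# from dissolved HgO. Target mass and current are assumed examples.
F = 96485.0                 # C/mol, Faraday constant
M_HG, M_O = 200.59, 16.00   # g/mol

target_hg = 0.025                           # g of Hg desired (assumed)
moles_hg = target_hg / M_HG
hgo_to_dissolve = moles_hg * (M_HG + M_O)   # g of HgO that carries that much Hg
charge = moles_hg * 2.0 * F                 # C; two electrons reduce each Hg(2+) ion

print(f"dissolve {hgo_to_dissolve * 1e3:.1f} mg HgO and pass {charge:.1f} C "
      f"(about {charge / 0.010:.0f} s at 10 mA)")
```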

  17. A Supervised Statistical Learning Approach for Accurate Legionella pneumophila Source Attribution during Outbreaks

    PubMed Central

    Buultjens, Andrew H.; Chua, Kyra Y. L.; Baines, Sarah L.; Kwong, Jason; Gao, Wei; Cutcher, Zoe; Adcock, Stuart; Ballard, Susan; Schultz, Mark B.; Tomita, Takehiro; Subasinghe, Nela; Carter, Glen P.; Pidot, Sacha J.; Franklin, Lucinda; Seemann, Torsten; Gonçalves Da Silva, Anders

    2017-01-01

    Public health agencies are increasingly relying on genomics during Legionnaires' disease investigations. However, the causative bacterium (Legionella pneumophila) has an unusual population structure, with extreme temporal and spatial genome sequence conservation. Furthermore, Legionnaires' disease outbreaks can be caused by multiple L. pneumophila genotypes in a single source. These factors can confound cluster identification using standard phylogenomic methods. Here, we show that a statistical learning approach based on L. pneumophila core genome single nucleotide polymorphism (SNP) comparisons eliminates ambiguity for defining outbreak clusters and accurately predicts exposure sources for clinical cases. We illustrate the performance of our method by genome comparisons of 234 L. pneumophila isolates obtained from patients and cooling towers in Melbourne, Australia, between 1994 and 2014. This collection included one of the largest reported Legionnaires' disease outbreaks, which involved 125 cases at an aquarium. Using only sequence data from L. pneumophila cooling tower isolates and including all core genome variation, we built a multivariate model using discriminant analysis of principal components (DAPC) to find cooling tower-specific genomic signatures and then used it to predict the origin of clinical isolates. Model assignments were 93% congruent with epidemiological data, including the aquarium Legionnaires' disease outbreak and three other unrelated outbreak investigations. We applied the same approach to a recently described investigation of Legionnaires' disease within a UK hospital and observed a model predictive ability of 86%. We have developed a promising means to breach L. pneumophila genetic diversity extremes and provide objective source attribution data for outbreak investigations. IMPORTANCE Microbial outbreak investigations are moving to a paradigm where whole-genome sequencing and phylogenetic trees are used to support epidemiological
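
    A DAPC-style classifier can be approximated as principal component analysis followed by linear discriminant analysis. The sketch below wires that pipeline together with scikit-learn on a random stand-in SNP matrix; the real analysis used cooling-tower isolates as training groups, and with the random labels used here the predictions are of course meaningless, the sketch only shows the mechanics.

```python
# Minimal sketch: a DAPC-like pipeline (PCA then LDA) that assigns isolates to candidate
# sources from a SNP matrix. Matrices and labels are random placeholders, not real genotypes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_towers = rng.integers(0, 2, size=(120, 500)).astype(float)   # cooling-tower isolates x SNP sites
y_towers = rng.integers(0, 4, size=120)                        # four hypothetical source groups

dapc = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
dapc.fit(X_towers, y_towers)

X_clinical = rng.integers(0, 2, size=(10, 500)).astype(float)  # isolates from patients
print("predicted source group per clinical isolate:", dapc.predict(X_clinical))
print("membership probabilities, first isolate:", dapc.predict_proba(X_clinical)[0].round(2))
```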

  18. An algorithm to detect and communicate the differences in computational models describing biological systems.

    PubMed

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-02-15

    Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.

  19. Comparisons with observational and experimental manipulation data imply needed conceptual changes to ESM land models

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Zhu, Q.; Tang, J.

    2016-12-01

    The land models integrated in Earth System Models (ESMs) are critical components necessary to predict soil carbon dynamics and carbon-climate interactions under a changing climate. Yet, these models have been shown to have poor predictive power when compared with observations and ignore many processes known to the observational communities to influence above and belowground carbon dynamics. Here I will report work to tightly couple observations and perturbation experiment results with development of an ESM land model (ALM), focusing on nutrient constraints of the terrestrial C cycle. Using high-frequency flux tower observations and short-term nitrogen and phosphorus perturbation experiments, we show that conceptualizing plant and soil microbe interactions as a multi-substrate, multi-competitor kinetic network allows for accurate prediction of nutrient acquisition. Next, using multiple-year FACE and fertilization response observations at many forest sites, we show that capturing the observed responses requires representation of dynamic allocation to respond to the resulting stresses. Integrating the mechanisms implied by these observations into ALM leads to much lower observational bias and to very different predictions of long-term soil and aboveground C stocks and dynamics, and therefore C-climate feedbacks. I describe how these types of observational constraints are being integrated into the open-source International Land Model Benchmarking (ILAMB) package, and end with the argument that consolidating as many observations of all sorts for easy use by modelers is an important goal to improve C-climate feedback predictions.

  20. Describing the Sequence of Cognitive Decline in Alzheimer's Disease Patients: Results from an Observational Study.

    PubMed

    Henneges, Carsten; Reed, Catherine; Chen, Yun-Fei; Dell'Agnello, Grazia; Lebrec, Jeremie

    2016-01-01

    Improved understanding of the pattern of cognitive decline in Alzheimer's disease (AD) would be useful to assist primary care physicians in explaining AD progression to patients and caregivers. To identify the sequence in which cognitive abilities decline in community-dwelling patients with AD. Baseline data were analyzed from 1,495 patients diagnosed with probable AD and a Mini-Mental State Examination (MMSE) score ≤ 26 enrolled in the 18-month observational GERAS study. Proportional odds logistic regression models were applied to model MMSE subscores (orientation, registration, attention and concentration, recall, language, and drawing) and the corresponding subscores of the cognitive subscale of the Alzheimer's Disease Assessment Scale (ADAS-cog), using MMSE total score as the index of disease progression. Probabilities of impairment start and full impairment were estimated at each MMSE total score level. From the estimated probabilities for each MMSE subscore as a function of the MMSE total score, the first aspect of cognition to start being impaired was recall, followed by orientation in time, attention and concentration, orientation in place, language, drawing, and registration. For full impairment in subscores, the sequence was recall, drawing, attention and concentration, orientation in time, orientation in place, registration, and language. The sequence of cognitive decline for the corresponding ADAS-cog subscores was remarkably consistent with this pattern. The sequence of cognitive decline in AD can be visualized in an animation using probability estimates for key aspects of cognition. This might be useful for clinicians to set expectations on disease progression for patients and caregivers.
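
    The core analysis is a proportional odds (ordinal logistic) regression of each subscore on the MMSE total score. The sketch below fits such a model with statsmodels' OrderedModel on synthetic data; the variable names, thresholds, and effect size are invented for the example, and the OrderedModel API (statsmodels >= 0.13) is assumed.

```python
# Minimal sketch: proportional-odds regression of an ordinal subscore on MMSE total.
# Data are synthetic; this is not the study's dataset or exact model specification.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 800
mmse_total = rng.integers(5, 27, size=n)                      # disease-progression index
latent = 0.25 * mmse_total + rng.logistic(size=n)             # latent propensity for the subscore
recall = np.digitize(latent, [3.0, 5.0, 7.0])                 # ordinal subscore 0..3

model = OrderedModel(pd.Series(recall), pd.DataFrame({"mmse_total": mmse_total}), distr="logit")
result = model.fit(method="bfgs", disp=False)

# predicted probability of each recall level at a few stages of progression
probs = result.predict(pd.DataFrame({"mmse_total": [10, 18, 26]}))
print(np.round(probs, 2))
```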

  1. How to describe disordered structures

    PubMed Central

    Nishio, Kengo; Miyazaki, Takehide

    2016-01-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way. PMID:27064833

  2. Accurate visible speech synthesis based on concatenating variable length motion capture data.

    PubMed

    Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara

    2006-01-01

    We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective evaluation and objective evaluation are conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.

  3. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, acquisition environmental constraints, or bad-trace elimination, can degrade the performance of subsequent multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of them can tolerate irregular sampling. Therefore, accurate interpolation to provide the necessary complete data is a pre-requisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet-transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize the original signal with high accuracy, but are at least half the size, which can help provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on the PFK-domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using the complex-valued CT in the time-space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With a smaller computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
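
    The sketch below shows a generic POCS interpolation loop of the kind the abstract builds on: alternately project onto a sparsity constraint in a transform domain and onto the recorded traces. For brevity a plain 2-D FFT stands in for the curvelet transform and the principal-frequency reduction, and the data, decimation mask, and threshold schedule are all illustrative.

```python
# Minimal sketch: POCS-style trace interpolation with FFT-domain thresholding.
# The FFT replaces the curvelet/PFK machinery of the paper; all data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
nt, nx = 128, 64
spectrum = np.where(rng.random((nt, nx)) < 0.02, rng.standard_normal((nt, nx)), 0.0)
true = np.real(np.fft.ifft2(spectrum))          # toy section with a sparse spectrum

mask = rng.random(nx) < 0.6                     # only ~60% of traces were acquired
observed = true * mask[None, :]

data = observed.copy()
for it in range(50):
    coeff = np.fft.fft2(data)
    thresh = np.percentile(np.abs(coeff), 99 - 2 * it)   # gradually relax the threshold
    coeff[np.abs(coeff) < thresh] = 0.0                   # projection onto the sparsity set
    estimate = np.real(np.fft.ifft2(coeff))
    data = observed + estimate * (~mask)[None, :]         # projection onto data consistency

rel_err = np.linalg.norm(data - true) / np.linalg.norm(true)
print(f"relative reconstruction error after POCS: {rel_err:.3f}")
```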

  4. WARM SPITZER OBSERVATIONS OF THREE HOT EXOPLANETS: XO-4b, HAT-P-6b, AND HAT-P-8b

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todorov, Kamen O.; Deming, Drake; Knutson, Heather A.

    2012-02-10

    We analyze Warm Spitzer/Infrared Array Camera observations of the secondary eclipses of three planets, XO-4b, HAT-P-6b, and HAT-P-8b. We measure secondary eclipse amplitudes at 3.6 {mu}m and 4.5 {mu}m for each target. XO-4b exhibits a stronger eclipse depth at 4.5 {mu}m than at 3.6 {mu}m, which is consistent with the presence of a temperature inversion. HAT-P-8b shows a stronger eclipse amplitude at 3.6 {mu}m and is best described by models without a temperature inversion. The eclipse depths of HAT-P-6b can be fitted with models with a small or no temperature inversion. We consider our results in the context of a postulated relationship between stellar activity and temperature inversion and a relationship between irradiation level and planet dayside temperature, as discussed by Knutson et al. and Cowan and Agol, respectively. Our results are consistent with these hypotheses, but do not significantly strengthen them. To measure accurate secondary eclipse central phases, we require accurate ephemerides. We obtain primary transit observations and supplement them with publicly available observations to update the orbital ephemerides of the three planets. Based on the secondary eclipse timing, we set upper boundaries for e cos({omega}) for HAT-P-6b, HAT-P-8b, and XO-4b and find that the values are consistent with circular orbits.

  5. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning.

    PubMed

    Norouzzadeh, Mohammad Sadegh; Nguyen, Anh; Kosmala, Margaret; Swanson, Alexandra; Palmer, Meredith S; Packer, Craig; Clune, Jeff

    2018-06-19

    Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into "big data" sciences. Motion-sensor "camera traps" enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2 million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with >93.8% accuracy, and we expect that number to improve rapidly in years to come. More importantly, if our system classifies only images it is confident about, our system can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as that of crowdsourced teams of human volunteers, saving >8.4 y (i.e., >17,000 h at 40 h/wk) of human labeling effort on this 3.2 million-image dataset. Those efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images, reducing a roadblock for this widely used technology. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild. Copyright © 2018 the Author(s). Published by PNAS.

  6. Exact solutions of magnetohydrodynamics for describing different structural disturbances in solar wind

    NASA Astrophysics Data System (ADS)

    Grib, S. A.; Leora, S. N.

    2016-03-01

    We use analytical methods of magnetohydrodynamics to describe the behavior of cosmic plasma. This approach makes it possible to describe different structural fields of disturbances in solar wind: shock waves, direction discontinuities, magnetic clouds and magnetic holes, and their interaction with each other and with the Earth's magnetosphere. We note that the wave problems of solar-terrestrial physics can be efficiently solved by the methods designed for solving classical problems of mathematical physics. We find that the generalized Riemann solution particularly simplifies the consideration of secondary waves in the magnetosheath and makes it possible to describe in detail the classical solutions of boundary value problems. We consider the appearance of a fast compression wave in the Earth's magnetosheath, which is reflected from the magnetosphere and can nonlinearly overturn to generate a back shock wave. We propose a new mechanism for the formation of a plateau with protons of increased density and a magnetic field trough in the magnetosheath due to slow secondary shock waves. Most of our findings are confirmed by direct observations conducted on spacecrafts (WIND, ACE, Geotail, Voyager-2, SDO and others).

  7. Describing content in middle school science curricula

    NASA Astrophysics Data System (ADS)

    Schwarz-Ballard, Jennifer A.

    As researchers and designers, we intuitively recognize differences between curricula and describe them in terms of design strategy: project-based, laboratory-based, modular, traditional, and textbook, among others. We assume that practitioners recognize the differences in how each requires that students use knowledge; however, these intuitive differences have not been captured or systematically described by the existing languages for describing learning goals. In this dissertation I argue that we need new ways of capturing relationships among elements of content, and propose a theory that describes some of the important differences in how students reason in differently designed curricula and activities. Educational researchers and curriculum designers have taken a variety of approaches to laying out learning goals for science. Through an analysis of existing descriptions of learning goals, I argue that to describe differences in the understanding students come away with, they need to (1) be specific about the form of knowledge, (2) incorporate both the processes through which knowledge is used and its form, and (3) capture content development across a curriculum. To show the value of inquiry curricula, learning goals need to incorporate distinctions among the variety of ways we ask students to use knowledge. Here I propose the Epistemic Structures Framework as one way to describe differences in students' reasoning that are not captured by existing descriptions of learning goals. The usefulness of the Epistemic Structures framework is demonstrated in the four curriculum case study examples in Part II of this work. The curricula in the case studies represent a range of content coverage, curriculum structure, and design rationale. They serve both to illustrate the Epistemic Structures analysis process and make the case that it does in fact describe learning goals in a way that captures important differences in students' reasoning in differently designed curricula

  8. SPICE Supports Planetary Science Observation Geometry

    NASA Astrophysics Data System (ADS)

    Hall Acton, Charles; Bachman, Nathaniel J.; Semenov, Boris V.; Wright, Edward D.

    2015-11-01

    "SPICE" is an information system, comprising both data and software, providing scientists with the observation geometry needed to plan observations from instruments aboard robotic spacecraft, and to subsequently help in analyzing the data returned from those observations. The SPICE system has been used on the majority of worldwide planetary exploration missions since the time of NASA's Galileo mission to Jupiter. Along with its "free" price tag, portability and the absence of licensing and export restrictions, its stable, enduring qualities help make it a popular choice. But stability does not imply rigidity-improvements and new capabilities are regularly added. This poster highlights recent additions that could be of interest to planetary scientists.Geometry Finder allows one to find all the times or time intervals when a particular geometric condition exists (e.g. occultation) or when a particular geometric parameter is within a given range or has reached a maximum or minimum.Digital Shape Kernel (DSK) provides means to compute observation geometry using accurately modeled target bodies: a tessellated plate model for irregular bodies and a digital elevation model for large, regular bodies.WebGeocalc (WGC) provides a graphical user interface (GUI) to a SPICE "geometry engine" installed at a mission operations facility, such as the one operated by NAIF. A WGC user need have only a computer with a web browser to access this geometry engine. Using traditional GUI widgets-drop-down menus, check boxes, radio buttons and fill-in boxes-the user inputs the data to be used, the kind of calculation wanted, and the details of that calculation. The WGC server makes the specified calculations and returns results to the user's browser.Cosmographia is a mission visualization program. This tool provides 3D visualization of solar system (target) bodies, spacecraft trajectory and orientation, instrument field-of-view "cones" and footprints, and more.The research described in this

  9. WFPC2 Observations of Astrophysically Important Visual Binaries - Continued

    NASA Astrophysics Data System (ADS)

    Bond, Howard

    1999-07-01

    We recently used WFPC2 images of Procyon A and B to measure an extremely accurate separation of the bright F star and its much fainter white-dwarf companion. Combined with ground-based astrometry of the bright star, our observation significantly revises downward the derived masses, and brings Procyon A into excellent agreement with theoretical evolutionary tracks for the first time. We now propose to begin a modest but long-term program of WFPC2 measurements of astrophysically important visual binaries, working in a regime of large magnitude differences and/or faint stars where ground-based speckle interferometry cannot compete. We have selected three systems: Procyon {P=40 yr}, for which continued monitoring will even further refine the very accurate masses; Mu Cas {P=21 yr}, a famous metal-deficient G dwarf for which accurate masses will lead to the star's helium content with cosmological implications; and G 107-70, a close double white dwarf {P=18 yr} that promises to add two accurate masses to the tiny handful of white-dwarf masses that are directly known from dynamical measurements.

  10. Apparatus for accurate density measurements of fluids based on a magnetic suspension balance

    NASA Astrophysics Data System (ADS)

    Gong, Maoqiong; Li, Huiya; Guo, Hao; Dong, Xueqiang; Wu, J. F.

    2012-06-01

    A new apparatus for accurate pressure, density, and temperature (p, ρ, T) measurements over wide ranges of (p, ρ, T) (90 K to 290 K; 0 MPa to 3 MPa; 0 kg/m3 to 2000 kg/m3) is described. The apparatus is based on a magnetic suspension balance, which applies Archimedes' buoyancy principle. To verify the new apparatus, comprehensive (p, ρ, T) measurements on pure nitrogen were carried out. The maximum relative standard uncertainty in density is 0.09%. The maximum standard uncertainty in temperature is 5 mK, and that in pressure is 250 Pa for the 1.5 MPa full-scale range and 390 Pa for the 3 MPa full-scale range. The experimental data were compared with selected literature data and good agreement was found.
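
    The measurement principle reduces to Archimedes' relation: the balance weighs a calibrated sinker in vacuum and again immersed in the fluid, and the buoyancy difference divided by the sinker volume gives the fluid density. The numbers in the sketch are invented but of a plausible magnitude for nitrogen in this pressure range.

```python
# Minimal sketch: fluid density from the buoyancy on a calibrated sinker.
# All values are illustrative, not the apparatus' actual calibration data.
sinker_mass = 60.0e-3        # kg, sinker weighed in vacuum (assumed)
sinker_volume = 7.5e-6       # m^3, calibrated sinker volume (assumed)
apparent_mass = 59.85e-3     # kg, balance reading with the sinker immersed in the gas

rho_fluid = (sinker_mass - apparent_mass) / sinker_volume
print(f"fluid density ≈ {rho_fluid:.1f} kg/m^3")   # ≈ 20 kg/m^3 for these numbers
```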

  11. An Accurate Centroiding Algorithm for PSF Reconstruction

    NASA Astrophysics Data System (ADS)

    Lu, Tianhuan; Luo, Wentao; Zhang, Jun; Zhang, Jiajun; Li, Hekun; Dong, Fuyu; Li, Yingke; Liu, Dezi; Fu, Liping; Li, Guoliang; Fan, Zuhui

    2018-07-01

    In this work, we present a novel centroiding method based on Fourier space Phase Fitting (FPF) for Point Spread Function (PSF) reconstruction. We generate two sets of simulations to test our method. The first set is generated by GalSim with an elliptical Moffat profile and strong anisotropy that shifts the center of the PSF. The second set of simulations is drawn from CFHT i band stellar imaging data. We find non-negligible anisotropy from CFHT stellar images, which leads to ∼0.08 scatter in units of pixels using a polynomial fitting method (Vakili & Hogg). When we apply the FPF method to estimate the centroid in real space, the scatter reduces to ∼0.04 in S/N = 200 CFHT-like sample. In low signal-to-noise ratio (S/N; 50 and 100) CFHT-like samples, the background noise dominates the shifting of the centroid; therefore, the scatter estimated from different methods is similar. We compare polynomial fitting and FPF using GalSim simulation with optical anisotropy. We find that in all S/N (50, 100, and 200) samples, FPF performs better than polynomial fitting by a factor of ∼3. In general, we suggest that in real observations there exists anisotropy that shifts the centroid, and thus, the FPF method provides a better way to accurately locate it.
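
    The idea behind Fourier-space phase fitting is the shift theorem: a sub-pixel displacement of the profile appears as a linear phase ramp in its Fourier transform, so fitting that slope recovers the centroid. The 1-D Gaussian star and all parameters below are illustrative, not the paper's actual FPF implementation.

```python
# Minimal sketch: centroid from the linear phase ramp of the Fourier transform
# (shift theorem). The 1-D Gaussian "star" and its true centre are synthetic.
import numpy as np

n = 64
x = np.arange(n)
true_center = 20.37
star = np.exp(-0.5 * ((x - true_center) / 2.5) ** 2)

spec = np.fft.rfft(star)
k = np.arange(spec.size)
phase = np.unwrap(np.angle(spec))
slope = np.polyfit(k[1:10], phase[1:10], 1)[0]   # phase(k) ≈ -2*pi*k*x0/n at low k
x0 = -slope * n / (2.0 * np.pi)
print(f"recovered centre {x0:.3f} pixels (true value {true_center})")
```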

  12. Retrieving Temperature Anomaly in the Global Subsurface and Deeper Ocean From Satellite Observations

    NASA Astrophysics Data System (ADS)

    Su, Hua; Li, Wene; Yan, Xiao-Hai

    2018-01-01

    Retrieving the subsurface and deeper ocean (SDO) dynamic parameters from satellite observations is crucial for effectively understanding ocean interior anomalies and dynamic processes, but it is challenging to accurately estimate the subsurface thermal structure over the global scale from sea surface parameters. This study proposes a new approach based on Random Forest (RF) machine learning to retrieve the subsurface temperature anomaly (STA) in the global ocean from multisource satellite observations, including sea surface height anomaly (SSHA), sea surface temperature anomaly (SSTA), sea surface salinity anomaly (SSSA), and sea surface wind anomaly (SSWA), with in situ Argo data used for RF training and testing. The RF machine-learning approach can accurately retrieve the STA in the global ocean from satellite observations of these sea surface parameters (SSHA, SSTA, SSSA, SSWA). The Argo STA data were used to validate the accuracy and reliability of the results from the RF model. The results indicate that SSHA, SSTA, SSSA, and SSWA together are useful parameters for detecting SDO thermal information and obtaining accurate STA estimates. The proposed method also outperformed support vector regression (SVR) in global STA estimation. It will be a useful technique for studying SDO thermal variability and its role in the global climate system from global-scale satellite observations.
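
    A minimal sketch of the regression setup, assuming scikit-learn and a synthetic training table in place of the collocated satellite/Argo data: the four sea-surface anomaly fields are the predictors and the subsurface temperature anomaly at one depth level is the target.

```python
# Minimal sketch: Random-Forest regression from sea-surface anomalies to a subsurface
# temperature anomaly. The synthetic table stands in for collocated satellite/Argo data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = rng.standard_normal((n, 4))                    # columns: SSHA, SSTA, SSSA, SSWA
sta = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.1 * X[:, 3] + 0.05 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, sta, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)

print("held-out R^2:", round(rf.score(X_te, y_te), 3))
print("feature importances (SSHA, SSTA, SSSA, SSWA):", rf.feature_importances_.round(3))
```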

  13. Who described Civatte bodies?

    PubMed

    Burgdorf, Walter H C; Plewig, Gerd

    2014-04-01

    Eosinophilic apoptotic (necrotic) keratinocytes in the lower epidermis and at the dermoepidermal junction are a feature of many interface dermatoses but are most reliably found in lichen planus. These structures are universally known as Civatte bodies. Nonetheless, they were first described by Raymond Sabouraud in 1912. Even after Achille Civatte discussed and beautifully illustrated them a decade later, it took until the late 1960s for the term Civatte body to win acceptance. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Using Informatics and the Electronic Medical Record to Describe Antimicrobial Use in the Clinical Management of Diarrhea Cases at 12 Companion Animal Practices

    PubMed Central

    Anholt, R. Michele; Berezowski, John; Ribble, Carl S.; Russell, Margaret L.; Stephen, Craig

    2014-01-01

    Antimicrobial drugs may be used to treat diarrheal illness in companion animals. It is important to monitor antimicrobial use to better understand trends and patterns in antimicrobial resistance. There is no monitoring of antimicrobial use in companion animals in Canada. To explore how the use of electronic medical records could contribute to the ongoing, systematic collection of antimicrobial use data in companion animals, anonymized electronic medical records were extracted from 12 participating companion animal practices and warehoused at the University of Calgary. We used the pre-diagnostic, clinical features of diarrhea as the case definition in this study. Using text-mining technologies, cases of diarrhea were described by each of the following variables: diagnostic laboratory tests performed, the etiological diagnosis and antimicrobial therapies. The ability of the text miner to accurately describe the cases for each of the variables was evaluated. It could not reliably classify cases in terms of diagnostic tests or etiological diagnosis; a manual review of a random sample of 500 diarrhea cases determined that 88/500 (17.6%) of the target cases underwent diagnostic testing of which 36/88 (40.9%) had an etiological diagnosis. Text mining, compared to a human reviewer, could accurately identify cases that had been treated with antimicrobials with high sensitivity (92%, 95% confidence interval, 88.1%–95.4%) and specificity (85%, 95% confidence interval, 80.2%–89.1%). Overall, 7400/15,928 (46.5%) of pets presenting with diarrhea were treated with antimicrobials. Some temporal trends and patterns of the antimicrobial use are described. The results from this study suggest that informatics and the electronic medical records could be useful for monitoring trends in antimicrobial use. PMID:25057893

  15. Using informatics and the electronic medical record to describe antimicrobial use in the clinical management of diarrhea cases at 12 companion animal practices.

    PubMed

    Anholt, R Michele; Berezowski, John; Ribble, Carl S; Russell, Margaret L; Stephen, Craig

    2014-01-01

    Antimicrobial drugs may be used to treat diarrheal illness in companion animals. It is important to monitor antimicrobial use to better understand trends and patterns in antimicrobial resistance. There is no monitoring of antimicrobial use in companion animals in Canada. To explore how the use of electronic medical records could contribute to the ongoing, systematic collection of antimicrobial use data in companion animals, anonymized electronic medical records were extracted from 12 participating companion animal practices and warehoused at the University of Calgary. We used the pre-diagnostic, clinical features of diarrhea as the case definition in this study. Using text-mining technologies, cases of diarrhea were described by each of the following variables: diagnostic laboratory tests performed, the etiological diagnosis and antimicrobial therapies. The ability of the text miner to accurately describe the cases for each of the variables was evaluated. It could not reliably classify cases in terms of diagnostic tests or etiological diagnosis; a manual review of a random sample of 500 diarrhea cases determined that 88/500 (17.6%) of the target cases underwent diagnostic testing of which 36/88 (40.9%) had an etiological diagnosis. Text mining, compared to a human reviewer, could accurately identify cases that had been treated with antimicrobials with high sensitivity (92%, 95% confidence interval, 88.1%-95.4%) and specificity (85%, 95% confidence interval, 80.2%-89.1%). Overall, 7400/15,928 (46.5%) of pets presenting with diarrhea were treated with antimicrobials. Some temporal trends and patterns of the antimicrobial use are described. The results from this study suggest that informatics and the electronic medical records could be useful for monitoring trends in antimicrobial use.

  16. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  17. Atmospheric densities derived from CHAMP/STAR accelerometer observations

    NASA Astrophysics Data System (ADS)

    Bruinsma, S.; Tamagnan, D.; Biancale, R.

    2004-03-01

    The satellite CHAMP carries the accelerometer STAR in its payload and, thanks to the GPS and SLR tracking systems, accurate orbit positions can be computed. Total atmospheric density values can be retrieved from the STAR measurements, with an absolute uncertainty of 10-15%, under the condition that an accurate radiative force model, satellite macro-model, and STAR instrumental calibration parameters are applied, and that the upper-atmosphere winds are less than 150 m/s. The STAR calibration parameters (i.e. a bias and a scale factor) of the tangential acceleration were accurately determined using an iterative method, which required the estimation of the gravity field coefficients in several iterations, the first result of which was the EIGEN-1S (Geophys. Res. Lett. 29 (14) (2002) 10.1029) gravity field solution. The procedure to derive atmospheric density values is as follows: (1) a reduced-dynamic CHAMP orbit is computed, the positions of which are used as pseudo-observations, for reference purposes; (2) a dynamic CHAMP orbit is fitted to the pseudo-observations using calibrated STAR measurements, which are saved in a data file containing all necessary information to derive density values; (3) the data file is used to compute density values at each orbit integration step, for which accurate terrestrial coordinates are available. This procedure was applied to 415 days of data over a total period of 21 months, yielding 1.2 million useful observations. The model predictions of DTM-2000 (EGS XXV General Assembly, Nice, France), DTM-94 (J. Geod. 72 (1998) 161) and MSIS-86 (J. Geophys. Res. 92 (1987) 4649) were evaluated by analysing the density ratios (i.e. "observed" to "computed" ratio) globally, and as functions of solar activity, geographical position and season. The global mean of the density ratios showed that the models underestimate density by 10-20%, with an rms of 16-20%. The binning as a function of local time revealed that the diurnal and semi
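
    Once the radiative forces and calibration are handled, the conversion from along-track acceleration to density is essentially the drag equation. The sketch below shows that relation with illustrative spacecraft constants; they are not CHAMP's actual macro-model values.

```python
# Minimal sketch: the drag relation that turns a calibrated along-track acceleration
# into total neutral density. All spacecraft constants here are illustrative.
m = 500.0        # kg, spacecraft mass (assumed)
cd = 2.3         # drag coefficient (assumed)
area = 0.75      # m^2, cross-section normal to the flow (assumed)
v_rel = 7600.0   # m/s, speed relative to the co-rotating atmosphere
a_drag = 1.2e-7  # m/s^2, along-track deceleration left after removing radiative forces

rho = 2.0 * m * a_drag / (cd * area * v_rel**2)
print(f"total density ≈ {rho:.2e} kg/m^3")
```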

  18. Scaling laws describe memories of host-pathogen riposte in the HIV population.

    PubMed

    Barton, John P; Kardar, Mehran; Chakraborty, Arup K

    2015-02-17

    The enormous genetic diversity and mutability of HIV has prevented effective control of this virus by natural immune responses or vaccination. Evolution of the circulating HIV population has thus occurred in response to diverse, ultimately ineffective, immune selection pressures that randomly change from host to host. We show that the interplay between the diversity of human immune responses and the ways that HIV mutates to evade them results in distinct sets of sequences defined by similar collectively coupled mutations. Scaling laws that relate these sets of sequences resemble those observed in linguistics and other branches of inquiry, and dynamics reminiscent of neural networks are observed. Like neural networks that store memories of past stimulation, the circulating HIV population stores memories of host-pathogen combat won by the virus. We describe an exactly solvable model that captures the main qualitative features of the sets of sequences and a simple mechanistic model for the origin of the observed scaling laws. Our results define collective mutational pathways used by HIV to evade human immune responses, which could guide vaccine design.

  19. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.

  20. Internal anatomy of the hornbill casque described by radiography, contrast radiography, and computed tomography.

    PubMed

    Gamble, Kathryn C

    2007-03-01

    Hornbills are distinguished from most other avian taxa by the presence of a casque on the dorsal maxillary beak, which, in all but 1 of the 54 extant hornbill species, is described as essentially an air-filled cavity enclosed by minimal cancellous bone. The external casque has been described in detail, but little has been described about its internal anatomy and the communications between the casque and the paranasal sinuses. In this study, 10 intact casque and skull specimens of 7 hornbill species were collected opportunistically at necropsy. The anatomy of the casque and the skull for each of the specimens was examined by radiography, contrast radiography, and computed tomography. After imaging, 8 specimens were submitted for osteologic preparation to directly visualize the casque and the skull interior. Through this standardized review, the baseline anatomy of the internal casque was described, including identification of a novel casque sinus within the paranasal sinus system. These observations will assist clinicians in the diagnosis and treatment of diseases of the casque in hornbill species.

  1. A novel disturbance-observer based friction compensation scheme for ball and plate system.

    PubMed

    Wang, Yongkun; Sun, Mingwei; Wang, Zenghui; Liu, Zhongxin; Chen, Zengqiang

    2014-03-01

    Friction is often ignored when designing a controller for the ball and plate system, which can lead to steady-state error and stick-slip phenomena, especially for small-amplitude commands. Friction therefore makes it difficult to achieve high-precision control of the ball and plate system. A novel reference compensation strategy is presented to attenuate the effects caused by friction. To realize this strategy, a linear control law is proposed based on a reduced-order observer. Neither an accurate friction model nor the estimation of specific characteristic parameters is needed in this design. Moreover, the describing function method shows that the limit cycle can be avoided. Finally, comparative mathematical simulations and practical experiments are used to validate the effectiveness of the proposed method. © 2013 ISA. Published by ISA. All rights reserved.
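
    As a rough illustration of the idea (not the authors' design), a reduced-order disturbance observer for one plate axis modelled as a double integrator, x_ddot = u + d, can estimate a friction-like disturbance d from the measured velocity and the applied control; the estimate is then subtracted from the nominal command. The observer gain, PD gains, friction level and variable names below are all assumptions.

```python
# Minimal sketch, not the paper's controller: a reduced-order disturbance observer (DOB)
# for one axis of a ball-and-plate-like plant x_ddot = u + d, where d is a friction-like
# disturbance. Gains, friction level and the step size are illustrative assumptions.
import numpy as np

dt, T = 0.001, 5.0
L = 50.0                      # observer gain (assumed)
kp, kd = 9.0, 6.0             # nominal PD gains (assumed)
x, v, z = 0.0, 0.0, 0.0       # position, velocity, observer state
x_ref = 0.01                  # small-amplitude step command [m]

for _ in range(int(T / dt)):
    d = -0.2 * np.sign(v) if abs(v) > 1e-6 else 0.0   # Coulomb-like friction (assumed)
    d_hat = z + L * v                                  # disturbance estimate
    u_nom = kp * (x_ref - x) - kd * v                  # nominal PD law
    u = u_nom - d_hat                                  # friction compensation
    # plant and observer integration (forward Euler); the estimate obeys
    # d_hat_dot = L*(d - d_hat), so it tracks the true disturbance
    a = u + d
    z += dt * (-L * (z + L * v) - L * u)
    x += dt * v
    v += dt * a

print(f"final tracking error {x_ref - x:.2e} m, last disturbance estimate {d_hat:.3f}")
```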

  2. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  3. Measuring the value of accurate link prediction for network seeding.

    PubMed

    Wei, Yijin; Spencer, Gwen

    2017-01-01

    The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample (OAS) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates OAS performance under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on the spread model: in some parameter ranges, investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
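
    A compact way to see what this comparison is getting at: choose seeds from a noisy view of the network, then evaluate the cascade they trigger on the true network and compare with seeds chosen from the true network itself. The sketch below uses a basic linear-threshold spread, degree-based seeding and a random graph; it is an illustration of the comparison only, not the paper's optimization procedure, and the graph size, edge-noise level, thresholds and seed-set size are assumptions.

```python
# Rough illustration, not the paper's method: the value of seeds chosen from a noisy
# observation of a network, evaluated under a simple linear-threshold spread model.
import random
import networkx as nx

def linear_threshold_spread(G, seeds, thresholds):
    """Activate a node once the fraction of its active neighbours reaches its threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in G.nodes():
            if v in active or G.degree(v) == 0:
                continue
            frac = sum(1 for u in G.neighbors(v) if u in active) / G.degree(v)
            if frac >= thresholds[v]:
                active.add(v)
                changed = True
    return len(active)

random.seed(1)
true_G = nx.erdos_renyi_graph(300, 0.02, seed=1)
noisy_G = true_G.copy()
for u, v in list(true_G.edges()):           # drop 30% of links to mimic imperfect prediction
    if random.random() < 0.3:
        noisy_G.remove_edge(u, v)
thresholds = {v: random.uniform(0.1, 0.5) for v in true_G.nodes()}

k = 5                                        # seed-set size (assumed)
seeds_true = [v for v, _ in sorted(true_G.degree, key=lambda t: -t[1])[:k]]
seeds_noisy = [v for v, _ in sorted(noisy_G.degree, key=lambda t: -t[1])[:k]]
print("spread with true-network seeds: ",
      linear_threshold_spread(true_G, seeds_true, thresholds))
print("spread with noisy-network seeds:",
      linear_threshold_spread(true_G, seeds_noisy, thresholds))
```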

  4. A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever

    PubMed Central

    Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J.; Scott, Dana P.; Feldmann, Heinz; Ebihara, Hideki

    2016-01-01

    Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF. PMID:27976688

  5. A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever.

    PubMed

    Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J; Scott, Dana P; Feldmann, Heinz; Ebihara, Hideki

    2016-12-15

    Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF.

  6. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations.

    PubMed

    Qin, Fangjun; Chang, Lubin; Jiang, Sai; Zha, Feng

    2018-05-03

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.
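
    A simplified sketch of the sequential idea is given below: each unit-vector observation is processed in turn, and the measurement model is re-linearized about the attitude already corrected by the previous vectors. For brevity the sketch also updates the covariance per vector (the traditional sequential form), whereas the proposed SMEKF defers the covariance update until all vectors have been processed. The scalar-last quaternion convention, noise levels and variable names are assumptions.

```python
# Simplified sequential vector-observation attitude update (multiplicative error state).
# Not the authors' exact SMEKF: the covariance is updated per vector here for brevity.
import numpy as np
from scipy.spatial.transform import Rotation

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sequential_vector_update(q, P, ref_vecs, body_vecs, sigma=0.01):
    """Process each (reference, body) unit-vector pair in turn, re-linearizing the
    measurement model about the attitude corrected by the previous pairs."""
    R_meas = (sigma ** 2) * np.eye(3)               # per-axis measurement noise (assumed)
    for r, b in zip(ref_vecs, body_vecs):
        C = Rotation.from_quat(q).as_matrix()       # body-to-reference rotation, scalar-last q
        b_hat = C.T @ r                             # predicted body-frame observation
        H = skew(b_hat)                             # sensitivity to a small body-frame attitude error
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_meas)
        dtheta = K @ (b - b_hat)                    # three-axis attitude correction
        q = (Rotation.from_quat(q) * Rotation.from_rotvec(dtheta)).as_quat()
        P = (np.eye(3) - K @ H) @ P
    return q, P

# Example: two reference directions observed in the body frame.
q0 = Rotation.from_euler("xyz", [0.05, -0.02, 0.10]).as_quat()   # initial attitude guess
P0 = (0.1 ** 2) * np.eye(3)
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
true_C = Rotation.from_euler("xyz", [0.06, -0.03, 0.12]).as_matrix()
obs = [true_C.T @ r for r in refs]
q1, P1 = sequential_vector_update(q0, P0, refs, obs)
```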

  7. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    PubMed Central

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538

  8. Describing sequencing results of structural chromosome rearrangements with a suggested next-generation cytogenetic nomenclature.

    PubMed

    Ordulu, Zehra; Wong, Kristen E; Currall, Benjamin B; Ivanov, Andrew R; Pereira, Shahrin; Althari, Sara; Gusella, James F; Talkowski, Michael E; Morton, Cynthia C

    2014-05-01

    With recent rapid advances in genomic technologies, precise delineation of structural chromosome rearrangements at the nucleotide level is becoming increasingly feasible. In this era of "next-generation cytogenetics" (i.e., an integration of traditional cytogenetic techniques and next-generation sequencing), a consensus nomenclature is essential for accurate communication and data sharing. Currently, nomenclature for describing the sequencing data of these aberrations is lacking. Herein, we present a system called Next-Gen Cytogenetic Nomenclature, which is concordant with the International System for Human Cytogenetic Nomenclature (2013). This system starts with the alignment of rearrangement sequences by BLAT or BLAST (alignment tools) and arrives at a concise and detailed description of chromosomal changes. To facilitate usage and implementation of this nomenclature, we are developing a program designated BLA(S)T Output Sequence Tool of Nomenclature (BOSToN), a demonstrative version of which is accessible online. A standardized characterization of structural chromosomal rearrangements is essential both for research analyses and for application in the clinical setting. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  9. Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.

    ERIC Educational Resources Information Center

    Zuckerman, June Trop

    This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…

  10. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    USGS Publications Warehouse

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.

  11. Models to describe the thermal development rates of Cycloneda sanguinea L. (Coleoptera: Coccinellidae).

    PubMed

    Pachú, Jéssica Ks; Malaquias, José B; Godoy, Wesley Ac; de S Ramalho, Francisco; Almeida, Bruna R; Rossi, Fabrício

    2018-04-01

    Precise estimates of the lower (Tmin) and upper (Tmax) thermal thresholds, as well as the temperature range that provides optimum performance (Topt), make it possible to obtain the desired number of individuals in conservation systems and in the rearing and release of natural enemies. In this study, the relationship between the development rates of Cycloneda sanguinea L. (Coleoptera: Coccinellidae) and temperature was described using non-linear models developed by Analytis, Brière, Lactin, Lamb, Logan and Sharpe & DeMichele. The models differed in their estimates of the parameters Tmin, Tmax, and Topt. All of the tested models were able to describe the non-linear response of the development rates of C. sanguinea to constant temperatures. Lactin and Lamb gave the highest z weight for egg, while Analytis, Sharpe & DeMichele and Brière gave the highest values for larvae and pupae. The more realistic Topt estimated by the models varied from 29 to 31 °C for egg, 27-28 °C for larvae and 28-29 °C for pupae. The Logan, Lactin and Analytis models estimated Tmax for egg, larvae and pupae to be approximately 34 °C, while the Tmin estimated by the Analytis model was 16 °C for larvae and pupae. The information generated by our research will contribute towards improving the rearing and release of C. sanguinea in biological control programs, by accurately controlling the rate of development under laboratory conditions or scheduling this species' release at the most favourable time. Copyright © 2018 Elsevier Ltd. All rights reserved.
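
    As a concrete example of the kind of non-linear fit involved, the sketch below fits the Brière-1 development-rate function, r(T) = a*T*(T - Tmin)*sqrt(Tmax - T), to hypothetical temperature/rate data with scipy. The data values, starting parameters and bounds are assumptions, not the study's measurements.

```python
# Illustrative fit of the Briere-1 model r(T) = a*T*(T - Tmin)*sqrt(Tmax - T).
# Temperature/rate data, initial guesses and bounds are hypothetical, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def briere1(T, a, Tmin, Tmax):
    r = a * T * (T - Tmin) * np.sqrt(np.clip(Tmax - T, 0.0, None))
    return np.where((T > Tmin) & (T < Tmax), r, 0.0)

temps = np.array([18, 20, 22, 25, 28, 30, 32, 33], dtype=float)        # degC (hypothetical)
rates = np.array([0.02, 0.05, 0.08, 0.12, 0.15, 0.16, 0.13, 0.07])     # 1/day (hypothetical)

popt, _ = curve_fit(briere1, temps, rates, p0=[2e-4, 15.0, 34.0],
                    bounds=([0.0, 5.0, 33.5], [1.0, 20.0, 45.0]))
a_hat, Tmin_hat, Tmax_hat = popt
grid = np.linspace(Tmin_hat, Tmax_hat, 500)
Topt = grid[np.argmax(briere1(grid, *popt))]
print(f"Tmin = {Tmin_hat:.1f} C, Tmax = {Tmax_hat:.1f} C, Topt = {Topt:.1f} C")
```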

  12. Intrinsic nonlinearity and method of disturbed observations in inverse problems of celestial mechanics

    NASA Astrophysics Data System (ADS)

    Avdyushev, Victor A.

    2017-12-01

    Orbit determination from a small sample of observations over a very short observed orbital arc is a strongly nonlinear inverse problem. In such problems an evaluation of orbital uncertainty due to random observation errors is greatly complicated, since the linear estimations conventionally used are no longer acceptable for describing the uncertainty, even as a rough approximation. Nevertheless, if an inverse problem is weakly intrinsically nonlinear, then one can resort to the so-called method of disturbed observations (aka observational Monte Carlo). Previously, we showed that the weaker the intrinsic nonlinearity, the more efficient the method, i.e. the more accurately it enables one to stochastically simulate the orbital uncertainty, while it is strictly exact only when the problem is intrinsically linear. However, as we ascertained experimentally, its efficiency was found to be higher than that of other stochastic methods widely applied in practice. In the present paper we investigate the intrinsic nonlinearity in complicated inverse problems of Celestial Mechanics when orbits are determined from samples of observations carrying little information, as typically occurs for recently discovered asteroids. To inquire into the question, we introduce an index of intrinsic nonlinearity. In asteroid problems it shows that the intrinsic nonlinearity can be strong enough to appreciably affect probabilistic estimates, especially for the very short observed orbital arcs that the asteroids travel for about a hundredth of their orbital periods or less. As is known from regression analysis, the source of intrinsic nonlinearity is the nonflatness of the estimation subspace specified by a dynamical model in the observation space. Our numerical results indicate that when determining asteroid orbits it is actually very slight. However, in the parametric space the effect of intrinsic nonlinearity is exaggerated mainly by the ill-conditioning of the inverse problem. Even so, as for the

  13. Analysis of IUE Observations of Hydrogen in Comets

    NASA Technical Reports Server (NTRS)

    Combi, Michael R.; Feldman, Paul D.

    1998-01-01

    The 15 years' worth of hydrogen Lyman-alpha observations of cometary comae obtained with the International Ultraviolet Explorer (IUE) satellite had gone generally unanalyzed because of two main modeling complications. First, the inner comae of many bright (gas-productive) comets are often optically thick to solar Lyman-alpha radiation. Second, even in the case of a small comet (low gas production) the large IUE aperture is quite small compared with the immense size of the hydrogen coma, so an accurate model which properly accounts for the spatial distribution of the coma is required to invert the observed brightnesses to column densities and finally to H atom production rates. Our Monte Carlo particle trajectory model (MCPTM), which for the first time provides the realistic full phase-space distribution of H atoms throughout the coma, has been used as the basis for the analysis of IUE observations of the inner coma. The MCPTM includes the effects of the vectorial ejection of the H atoms upon dissociation of their parent species (H2O and OH) and of their partial collisional thermalization. Both of these effects are crucial to characterize the velocity distribution of the H atoms. This combination of the MCPTM and a spherical radiative transfer code had already been shown to be successful in understanding the moderately optically thick coma of comet P/Giacobini-Zinner and the coma of comet Halley, which varied from being slightly to very optically thick. Both of these comets were observed during solar minimum conditions. Solar activity affects both the photochemistry of water and the solar Lyman-alpha radiation flux. The overall plan of this program was to concentrate on comets observed by IUE at other times during the solar cycle, most importantly during the two solar maxima of 1980 and 1990. Described herein are the work performed and the results obtained.

  14. Accurate vehicle classification including motorcycles using piezoelectric sensors.

    DOT National Transportation Integrated Search

    2013-03-01

    State and federal departments of transportation are charged with classifying vehicles and monitoring mileage traveled. Accurate data reporting enables suitable roadway design for safety and capacity. Vehicle classifiers currently employ inductive loo...

  15. A randomized trial to identify accurate and cost-effective fidelity measurement methods for cognitive-behavioral therapy: project FACTS study protocol.

    PubMed

    Beidas, Rinad S; Maclean, Johanna Catherine; Fishman, Jessica; Dorsey, Shannon; Schoenwald, Sonja K; Mandell, David S; Shea, Judy A; McLeod, Bryce D; French, Michael T; Hogue, Aaron; Adams, Danielle R; Lieberman, Adina; Becker-Haimes, Emily M; Marcus, Steven C

    2016-09-15

    This randomized trial will compare three methods of assessing fidelity to cognitive-behavioral therapy (CBT) for youth to identify the most accurate and cost-effective method. The three methods include self-report (i.e., therapist completes a self-report measure on the CBT interventions used in session while circumventing some of the typical barriers to self-report), chart-stimulated recall (i.e., therapist reports on the CBT interventions used in session via an interview with a trained rater, and with the chart to assist him/her) and behavioral rehearsal (i.e., therapist demonstrates the CBT interventions used in session via a role-play with a trained rater). Direct observation will be used as the gold-standard comparison for each of the three methods. This trial will recruit 135 therapists in approximately 12 community agencies in the City of Philadelphia. Therapists will be randomized to one of the three conditions. Each therapist will provide data from three unique sessions, for a total of 405 sessions. All sessions will be audio-recorded and coded using the Therapy Process Observational Coding System for Child Psychotherapy-Revised Strategies scale. This will enable comparison of each measurement approach to direct observation of therapist session behavior to determine which most accurately assesses fidelity. Cost data associated with each method will be gathered. To gather stakeholder perspectives of each measurement method, we will use purposive sampling to recruit 12 therapists from each condition (total of 36 therapists) and 12 supervisors to participate in semi-structured qualitative interviews. Results will provide needed information on how to accurately and cost-effectively measure therapist fidelity to CBT for youth, as well as important information about stakeholder perspectives with regard to each measurement method. Findings will inform fidelity measurement practices in future implementation studies as well as in clinical practice. NCT02820623

  16. American black bear denning behavior: Observations and applications using remote photography

    USGS Publications Warehouse

    Bridges, A.S.; Fox, J.A.; Olfenbuttel, C.; Vaughan, M.B.

    2004-01-01

    Researchers examining American black bear (Ursus americanus) denning behavior have relied primarily on den-site visitation and radiotelemetry to gather data. Repeated den-site visits are time-intensive and may disturb denning bears, possibly causing den abandonment, whereas radiotelemetry is sufficient only to provide gross data on den emergence. We used remote cameras to examine black bear denning behavior in the Allegheny Mountains of western Virginia during March-May 2003. We deployed cameras at 10 den sites and used 137 pictures of black bears. Adult female black bears exhibited greater extra-den activity than we expected prior to final den emergence, which occurred between April 12 and May 6, 2003. Our technique provided more accurate den-emergence estimation than previously published methodologies. Additionally, we observed seldom-documented behaviors associated with den exits and estimated cub age at den emergence. Remote cameras can provide unique insights into denning ecology, and we describe their potential application to reproductive, survival, and behavioral research.

  17. Differential tracking data types for accurate and efficient Mars planetary navigation

    NASA Technical Reports Server (NTRS)

    Edwards, C. D., Jr.; Kahn, R. D.; Folkner, W. M.; Border, J. S.

    1991-01-01

    Ways in which high-accuracy differential observations of two or more deep space vehicles can dramatically extend the power of earth-based tracking over conventional range and Doppler tracking are discussed. Two techniques - spacecraft-spacecraft differential very long baseline interferometry (S/C-S/C Delta(VLBI)) and same-beam interferometry (SBI) - are discussed. The tracking and navigation capabilities of conventional range, Doppler, and quasar-relative Delta(VLBI) are reviewed, and the S/C-S/C Delta (VLBI) and SBI types are introduced. For each data type, the formation of the observable is discussed, an error budget describing how physical error sources manifest themselves in the observable is presented, and potential applications of the technique for Space Exploration Initiative scenarios are examined. Requirements for spacecraft and ground systems needed to enable and optimize these types of observations are discussed.

  18. The challenge of accurately documenting bee species richness in agroecosystems: bee diversity in eastern apple orchards

    PubMed Central

    Russo, Laura; Park, Mia; Gibbs, Jason; Danforth, Bryan

    2015-01-01

    Bees are important pollinators of agricultural crops, and bee diversity has been shown to be closely associated with pollination, a valuable ecosystem service. Higher functional diversity and species richness of bees have been shown to lead to higher crop yield. Bees simultaneously represent a mega-diverse taxon that is extremely challenging to sample thoroughly and an important group to understand because of pollination services. We sampled bees visiting apple blossoms in 28 orchards over 6 years. We used species rarefaction analyses to test for the completeness of sampling and the relationship between species richness and sampling effort, orchard size, and percent agriculture in the surrounding landscape. We performed more than 190 h of sampling, collecting 11,219 specimens representing 104 species. Despite the sampling intensity, we captured <75% of expected species richness at more than half of the sites. For most of these, the variation in bee community composition between years was greater than among sites. Species richness was influenced by percent agriculture, orchard size, and sampling effort, but we found no factors explaining the difference between observed and expected species richness. Competition between honeybees and wild bees did not appear to be a factor, as we found no correlation between honeybee and wild bee abundance. Our study shows that the pollinator fauna of agroecosystems can be diverse and challenging to thoroughly sample. We demonstrate that there is high temporal variation in community composition and that sites vary widely in the sampling effort required to fully describe their diversity. In order to maximize pollination services provided by wild bee species, we must first accurately estimate species richness. For researchers interested in providing this estimate, we recommend multiyear studies and rarefaction analyses to quantify the gap between observed and expected species richness. PMID:26380684
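
    One common way to quantify the gap between observed and expected richness is the Chao1 estimator, which uses the numbers of species seen exactly once (singletons) and exactly twice (doubletons). The sketch below applies the bias-corrected form to an invented specimen list; it is only an illustration of the idea, and Chao1 is just one of several estimators that could accompany a rarefaction analysis.

```python
# Illustrative only: observed richness vs. a Chao1 estimate of expected richness.
# The specimen identifications below are invented, not the study's data.
from collections import Counter

def chao1(abundances):
    """Bias-corrected Chao1: S_obs + f1*(f1 - 1) / (2*(f2 + 1))."""
    s_obs = len(abundances)
    f1 = sum(1 for c in abundances if c == 1)   # singletons
    f2 = sum(1 for c in abundances if c == 2)   # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

specimens = (["Andrena_sp1"] * 40 + ["Lasioglossum_sp1"] * 25 + ["Osmia_sp1"] * 3
             + ["Bombus_sp1"] * 2 + ["Nomada_sp1"] + ["Andrena_sp2"] + ["Sphecodes_sp1"])
abundances = list(Counter(specimens).values())

s_obs = len(abundances)
s_exp = chao1(abundances)
print(f"observed: {s_obs} species, Chao1 expected: {s_exp:.1f}, "
      f"sampling completeness ~ {100 * s_obs / s_exp:.0f}%")
```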

  19. Five Describing Factors of Dyslexia

    ERIC Educational Resources Information Center

    Tamboer, Peter; Vorst, Harrie C. M.; Oort, Frans J.

    2016-01-01

    Two subtypes of dyslexia (phonological, visual) have been under debate in various studies. However, the number of symptoms of dyslexia described in the literature exceeds the number of subtypes, and underlying relations remain unclear. We investigated underlying cognitive features of dyslexia with exploratory and confirmatory factor analyses. A…

  20. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    NASA Astrophysics Data System (ADS)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  1. Time-Accurate Numerical Simulations of Synthetic Jet Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of synthetic jet are carried out at a Reynolds number (based on average velocity during the discharge phase of the cycle V(sub j), and jet width d) of 750 and Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  2. Van Allen Probes Observations of Plasmasphere Refilling Inside and Outside the Plasmapause

    NASA Astrophysics Data System (ADS)

    De Pascuale, S.; Kletzing, C.; Kurth, W. S.; Jordanova, V. K.

    2017-12-01

    We survey several geomagnetic storms observed by the Van Allen Probes to determine the rate of plasmasphere refilling following the initial erosion of the plasmapause region. The EMFISIS instrument on board the spacecraft provides near-equatorial in situ electron density measurements, which are accurate to within 10% error in the detectable range 2 < L < 6. Two-dimensional plasmasphere density simulations, providing global context for the local observations, are driven by the incident solar wind electric field as a proxy for geomagnetic activity. The simulations utilize a semi-empirical model of convection and a semi-empirical model of ionospheric outflow to dynamically evolve plasmaspheric densities. We find that at high L the plasmasphere undergoes orders of magnitude density depletion (from 100s to 10s cm-3) in response to a geomagnetic event and recovers to pre-storm levels over many days. At low L (densities of 1000s cm-3), and within the plasmapause, the plasmasphere loses density by a factor of 2 to 3 (from 3000 to 1000 cm-3), producing a depletion that can persist over weeks during sustained geomagnetic activity. We describe the impact of these results on the challenge of defining a saturated quiet state of the plasmasphere.

  3. High accurate time system of the Low Latitude Meridian Circle.

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system has been developed which includes GPS, a 1 MC frequency source and a self-made clock system. The GPS seconds signal is used to synchronize the clock system, and the information can be collected by a computer automatically. The difficulty of eliminating the time keeper can be overcome by using this system.

  4. Global Ocean Evaporation Increases Since 1960 in Climate Reanalyses: How Accurate Are They?

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Roberts, Jason B.; Bosilovich, Michael G.

    2016-01-01

    The presentation compares several classes of ocean-evaporation estimates:
    - AGCMs with specified SSTs (AMIP-style: GEOS-5, ERA-20CM ensembles). These incorporate the best historical estimates of SST, sea ice and radiative forcing, but atmospheric "weather noise" is inconsistent with the specified SST, so instantaneous surface fluxes can have the wrong sign (e.g. Indian Ocean monsoon, high-latitude oceans); averaging over ensemble members helps isolate the SST-forced signal.
    - Reduced observational reanalyses (NOAA 20CR V2C, ERA-20C, JRA-55C). These assimilate observed surface pressure (20CR), marine winds (ERA-20C) and rawinsondes (JRA-55C) to recover much of the true synoptic weather without the shock of new satellite observations.
    - Comprehensive reanalyses (MERRA-2). These use the full suite of observational constraints, both conventional and remote sensing, but carry substantial uncertainties owing to the evolving satellite observing system.
    - Multi-source statistical blends (OAFlux, Large-Yeager). These blend reanalysis, satellite and ocean-buoy information; while climatological biases are removed, non-physical trends or variations in the components remain.
    - Satellite retrievals (GSSTF3, SeaFlux, HOAPS3...). Global coverage; retrieved near-surface wind speed and humidity are used with SST to drive accurate bulk aerodynamic flux estimates; satellite inter-calibration and spacecraft pointing variations are crucial; short record (late 1987-present).
    - In situ measurements (ICOADS, IVAD, research cruises). VOS and buoys offer direct measurements, but data coverage is sparse (especially south of 30S) and measurement techniques change over time (e.g. shipboard anemometer height).

  5. Translating PI observing proposals into ALMA observing scripts

    NASA Astrophysics Data System (ADS)

    Liszt, Harvey S.

    2014-08-01

    The ALMA telescope is a complex 66-antenna array working in the specialized domain of mm- and sub-mm aperture synthesis imaging. To make ALMA accessible to technically inexperienced but scientifically expert users, the ALMA Observing Tool (OT) has been developed. Using the OT, scientifically oriented user input is formatted as observing proposals that are packaged for peer-review and assessment of technical feasibility. If accepted, the proposal's scientifically oriented inputs are translated by the OT into scheduling blocks, which function as input to observing scripts for the telescope's online control system. Here I describe the processes and practices by which this translation from PI scientific goals to online control input and schedule block execution actually occurs.

  6. Observations-based GPP estimates

    NASA Astrophysics Data System (ADS)

    Joiner, J.; Yoshida, Y.; Jung, M.; Tucker, C. J.; Pinzon, J. E.

    2017-12-01

    We have developed global estimates of gross primary production based on a relatively simple, satellite-observation-based approach. It uses reflectance data from the MODIS instruments, in the form of vegetation indices that provide information about photosynthetic capacity at high temporal and spatial resolution, combined with chlorophyll solar-induced fluorescence from the Global Ozone Monitoring Experiment-2 instrument, which is noisier and available only at coarser temporal and spatial scales. We compare our gross primary production estimates with those from eddy covariance flux towers and show that they are competitive with more complicated, extrapolated machine-learning gross primary production products. Our results provide insight into the amount of variance in gross primary production that can be explained with satellite observation data and also show how processing of the satellite reflectance data is key to using it for accurate GPP estimates.

  7. Optimal quantum observables

    NASA Astrophysics Data System (ADS)

    Haapasalo, Erkka; Pellonpää, Juha-Pekka

    2017-12-01

    Various forms of optimality for quantum observables described as normalized positive-operator-valued measures (POVMs) are studied in this paper. We give characterizations for observables that determine the values of the measured quantity with probabilistic certainty or a state of the system before or after the measurement. We investigate observables that are free from noise caused by classical post-processing, mixing, or pre-processing of quantum nature. Especially, a complete characterization of pre-processing and post-processing clean observables is given, and necessary and sufficient conditions are imposed on informationally complete POVMs within the set of pure states. We also discuss joint and sequential measurements of optimal quantum observables.

  8. Describing the apprenticeship of chemists through the language of faculty scientists

    NASA Astrophysics Data System (ADS)

    Skjold, Brandy Ann

    Attempts to bring authentic science into the K-16 classroom have led to the use of sociocultural theories of learning, particularly apprenticeship, to frame science education research. Science educators have brought apprenticeship to science classrooms and have brought students to research laboratories in order to gauge its benefits. The assumption is that these learning opportunities are representative of the actual apprenticeship of scientists. However, there have been no attempts in the literature to describe the apprenticeship of scientists using apprenticeship theory. Understanding what science apprenticeship looks like is a critical component of translating this experience into the classroom. This study sought to describe and analyze the apprenticeship of chemists through the talk of faculty scientists. It used Lave and Wenger’s (1991) theory of Legitimate Peripheral Participation as its framework, concentrating on describing the roles of the participants, the environment and the tasks in the apprenticeship, as per Barab, Squire and Dueber (2000). A total of nine chemistry faculty and teaching assistants were observed across 11 settings representing a range of learning experiences from introductory chemistry lectures to research laboratories. All settings were videotaped, focusing on the instructor. About 89 hours of video was taken, along with observer field notes. All videos were transcribed, and transcriptions and field notes were analyzed qualitatively as a broad-level discourse analysis. Findings suggest that learners are expected to know basic chemistry content and how to use basic research equipment before entering the research lab. These are taught extensively in classroom settings. However, students are also required to know how to use the literature base to inform their own research, though they were rarely exposed to this in the classrooms. In all settings, conflicts occurred when students under- or overestimated their role in the learning

  9. Drinking behavior in nursery pigs: Determining the accuracy between an automatic water meter versus human observers

    USDA-ARS?s Scientific Manuscript database

    Assimilating accurate behavioral events over a long period can be labor intensive and relatively expensive. If an automatic device could accurately record the duration and frequency for a given behavioral event, it would be a valuable alternative to the traditional use of human observers for behavio...

  10. Dermoscopic patterns of Melanoma Metastases: inter-observer consistency and accuracy for metastases recognition

    PubMed Central

    Costa, J.; Ortiz-Ibañez, K.; Salerni, G.; Borges, V.; Carrera, C.; Puig, S.; Malvehy, J.

    2013-01-01

    Background: Cutaneous metastases of malignant melanoma (CMMM) can be confused with other skin lesions. Dermoscopy could be helpful in the differential diagnosis. Objective: To describe distinctive dermoscopic patterns that are reproducible and accurate in the identification of CMMM. Methods: A retrospective study of 146 dermoscopic images of CMMM from 42 patients attending a Melanoma Unit between 2002 and 2009 was performed. Firstly, two investigators established six dermoscopic patterns for CMMM. The correlation of 73 dermoscopic images with their distinctive patterns was assessed by four independent dermatologists to evaluate the reproducibility in the identification of the patterns. Finally, 163 dermoscopic images, including CMMM and non-metastatic lesions, were evaluated by the same four dermatologists to calculate the accuracy of the patterns in the recognition of CMMM. Results: Five CMMM dermoscopic patterns had good inter-observer agreement (blue nevus-like, nevus-like, angioma-like, vascular and unspecific). When CMMM were classified according to these patterns, correlation between the investigators and the four dermatologists ranged from κ = 0.56 to 0.7. 71 CMMM, 16 angiomas, 22 blue nevi, 15 malignant melanomas, 11 seborrheic keratoses, 15 melanocytic nevi with globular pattern and 13 pink lesions with vascular pattern were evaluated according to the previously described CMMM dermoscopy patterns, showing an overall sensitivity of 68% (between 54.9-76%) and a specificity of 81% (between 68.6-93.5%) for the diagnosis of CMMM. Conclusion: Five dermoscopic patterns of CMMM with good inter-observer agreement obtained a high sensitivity and specificity in the diagnosis of metastasis, the accuracy varying according to the experience of the observer. PMID:23495915

  11. dropEst: pipeline for accurate estimation of molecular counts in droplet-based single-cell RNA-seq experiments.

    PubMed

    Petukhov, Viktor; Guo, Jimin; Baryawno, Ninib; Severe, Nicolas; Scadden, David T; Samsonova, Maria G; Kharchenko, Peter V

    2018-06-19

    Recent single-cell RNA-seq protocols based on droplet microfluidics use massively multiplexed barcoding to enable simultaneous measurements of transcriptomes for thousands of individual cells. The increasing complexity of such data creates challenges for subsequent computational processing and troubleshooting of these experiments, with few software options currently available. Here, we describe a flexible pipeline for processing droplet-based transcriptome data that implements barcode corrections, classification of cell quality, and diagnostic information about the droplet libraries. We introduce advanced methods for correcting composition bias and sequencing errors affecting cellular and molecular barcodes to provide more accurate estimates of molecular counts in individual cells.
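
    The barcode-correction idea, reassigning reads whose cell barcode lies within a small edit distance of an abundant "real" barcode, can be sketched in a few lines. This is a generic Hamming-distance-1 merge, not dropEst's actual algorithm, and the example barcodes and count threshold are assumptions.

```python
# Generic sketch of cell-barcode error correction (not dropEst's algorithm): merge
# low-count barcodes into an abundant barcode at Hamming distance 1.
from collections import Counter

def hamming1(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def correct_barcodes(counts, min_real=100):
    real = {bc for bc, n in counts.items() if n >= min_real}   # assumed "real cell" threshold
    corrected = Counter()
    for bc, n in counts.items():
        if bc in real:
            corrected[bc] += n
            continue
        parents = [r for r in real if hamming1(bc, r)]
        # merge only when the parent is unambiguous; otherwise keep the barcode as-is
        corrected[parents[0] if len(parents) == 1 else bc] += n
    return corrected

counts = Counter({"ACGTACGT": 5230, "ACGTACGA": 12, "TTGCAAGC": 980, "TTGCATGC": 7})
print(correct_barcodes(counts))   # the two low-count barcodes fold into their neighbours
```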

  12. Applying conversation analysis to foster accurate reporting in the diet history interview.

    PubMed

    Tapsell, L C; Brenninger, V; Barnard, J

    2000-07-01

    Inaccuracy in reporting dietary intakes is a major problem in managing diet-related disease. There is no single best method of dietary assessment, but the diet history lends itself well to the clinical setting. In many diet histories data are collected orally, so analysis of interviews can provide insights into reporting behaviors. Conversation analysis is a qualitative method that describes the systematic organization of talk between people. Patterns are identified and checked for consistency within and among individual interviews. The aim of this study was to describe consistent ways of reporting diet histories and to identify conversational features of problematic reporting. Diet history interviews from 62 overweight and insulin-resistant adult volunteers (50 women, 12 men) attending an outpatient clinic and 14 healthy volunteers (7 men, 7 women) participating in an energy balance study were audiotaped and transcribed. Conversation analysis identified a remarkably consistent pattern of reporting diet histories and 3 conversational features that indicated problematic reporting: "it depends," denoting variability (least of all at breakfast); "probably," suggesting guesswork (related to portion sizes); and elaborated talk on certain foods, distinguishing sensitive topics (e.g., alcohol, chocolate, butter/margarine, take-out foods) from safe topics. These findings indicate that there are ways in which dietetics practitioners may conduct the diet history interview to foster more accurate reporting.

  13. Factorization of Observables

    NASA Astrophysics Data System (ADS)

    Eliaš, Peter; Frič, Roman

    2017-12-01

    Categorical approach to probability leads to better understanding of basic notions and constructions in generalized (fuzzy, operational, quantum) probability, where observables—dual notions to generalized random variables (statistical maps)—play a major role. First, to avoid inconsistencies, we introduce three categories L, S, and P, the objects and morphisms of which correspond to basic notions of fuzzy probability theory and operational probability theory, and describe their relationships. To illustrate the advantages of categorical approach, we show that two categorical constructions involving observables (related to the representation of generalized random variables via products, or smearing of sharp observables, respectively) can be described as factorizing a morphism into composition of two morphisms having desired properties. We close with a remark concerning products.

  14. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    MedlinePlus

    Seasonal allergic rhinitis known as hay fever is ... hay fever symptoms, it is important to monitor pollen counts so you can limit your exposure on days ...

  15. How do consumers describe wine astringency?

    PubMed

    Vidal, Leticia; Giménez, Ana; Medina, Karina; Boido, Eduardo; Ares, Gastón

    2015-12-01

    Astringency is one of the most important sensory characteristics of red wine. Although a hierarchically structured vocabulary to describe the mouthfeel sensations of red wine has been proposed, research on consumers' astringency vocabulary is lacking. In this context, the aim of this work was to gain insight into the vocabulary used by wine consumers to describe the astringency of red wine and to evaluate the influence of wine involvement on consumers' vocabulary. One hundred and twenty-five wine consumers completed an on-line survey with five tasks: an open-ended question about the definition of wine astringency, free listing the sensations perceived when drinking an astringent wine, free listing the words they would use to describe the astringency of a red wine, a CATA question with 44 terms used in the literature to describe astringency, and a wine involvement questionnaire. When thinking about wine astringency, consumers freely elicited terms included in the Mouth-feel Wheel, such as dryness and harsh. The majority of the specific sub-qualities of the Mouth-feel Wheel were not included in consumer responses. Also, terms not classified as astringency descriptors were elicited (e.g. acid and bitter). Only 17 out of the 31 terms from the Mouth-feel Wheel were used by more than 10% of participants when answering the CATA question. There were no large differences in the responses of consumer segments with different wine involvement. Results from the present work suggest that most of the terms of the Mouth-feel Wheel might not be adequate to communicate the astringency characteristics of red wine to consumers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. [Study on Accurately Controlling Discharge Energy Method Used in External Defibrillator].

    PubMed

    Song, Biao; Wang, Jianfei; Jin, Lian; Wu, Xiaomei

    2016-01-01

    This paper introduces a new method which controls discharge energy accurately. It is achieved by calculating target voltage based on transthoracic impedance and accurately controlling charging voltage and discharge pulse width. A new defibrillator is designed and programmed using this method. The test results show that this method is valid and applicable to all kinds of external defibrillators.
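
    For intuition only (this is not the paper's formula): if the defibrillator is idealized as a capacitor C discharging through the transthoracic resistance R for a pulse of width t, the delivered energy is E = 0.5*C*V0^2*(1 - exp(-2*t/(R*C))), so the charging voltage needed for a selected energy follows directly from the measured impedance. The component values below are assumptions.

```python
# Simplified monophasic truncated-exponential discharge model (illustrative assumptions).
import math

def target_voltage(energy_j, impedance_ohm, cap_f=100e-6, pulse_s=10e-3):
    """Charging voltage so that the truncated discharge delivers energy_j into impedance_ohm."""
    delivered_fraction = 1.0 - math.exp(-2.0 * pulse_s / (impedance_ohm * cap_f))
    return math.sqrt(2.0 * energy_j / (cap_f * delivered_fraction))

for z in (25, 50, 100):   # low, typical and high transthoracic impedance [ohm]
    print(f"{z:>3} ohm -> charge to {target_voltage(150.0, z):.0f} V for 150 J delivered")
```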

  17. New Solar PV Tool Accurately Calculates Degradation Rates, Saving Money and Guiding Business Decisions

    Science.gov Websites

    News Release: New Solar PV Tool Accurately Calculates ; said Dirk Jordan, engineer and solar PV researcher at NREL. "We spent years building consensus in

  18. Physical Function After Total Knee Replacement: An Observational Study Describing Outcomes in a Small Group of Women From China and the United States.

    PubMed

    White, Daniel K; Li, Zhichang; Zhang, Yuqing; Marmon, Adam R; Master, Hiral; Zeni, Joseph; Niu, Jingbo; Jiang, Long; Zhang, Shu; Lin, Jianhao

    2018-01-01

    Objective: To describe physical function before and six months after Total Knee Replacement (TKR) in a small sample of women from China and the United States. Design: Observational. Setting: Community environment. Both groups adhered to the Osteoarthritis Research Society International (OARSI) protocols for the 6-minute walk and 30-second chair stand; we compared physical function prior to TKR and 6 months after using linear regression adjusted for covariates. Participants: Women (N=60) after TKR. Interventions: Not applicable. Results: Age and body mass index in the China group (n=30; 66 y and 27.0 kg/m2) were similar to those in the U.S. group (n=30; 65 y and 29.6 kg/m2). Before surgery, the China group walked 263 fewer meters (95% confidence interval [CI], -309 to -219) and had 10.2 (95% CI, -11.8 to -8.5) fewer chair stands than the U.S. group. At 6 months, when compared with the U.S. group, the China group walked 38 more meters, but this difference did not reach statistical significance (95% CI, -1.6 to 77.4), and had 3.1 (95% CI, -4.4 to -1.7) fewer chair stands. The China group had greater improvement in the 6-minute walk test than did the U.S. group (P<.001). Conclusions: Despite having worse physical function before TKR, the China group had greater gains in walking endurance than the U.S. group and similar gains in repeated chair stands after surgery. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  19. Investigation into accurate mass capability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, with respect to radical ion species.

    PubMed

    Wyatt, Mark F; Stein, Bridget K; Brenton, A Gareth

    2006-05-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) has been shown to be an effective technique for the characterization of organometallic, coordination, and highly conjugated compounds. The preferred matrix is 2-[(2E)-3-(4-tert-butylphenyl)-2-methylprop-2-enylidene]malononitrile (DCTB), with radical ions observed. However, MALDI-TOFMS is generally not favored for accurate mass measurement. A specific method had to be developed for such compounds to assure the quality of our accurate mass results. Therefore, in this preliminary study, two methods of data acquisition, and both even-electron (EE+) ion and odd-electron (OE+.) radical ion mass calibration standards, have been investigated to establish the basic measurement technique. The benefit of this technique is demonstrated for a copper compound for which ions were observed by MALDI, but not by electrospray (ESI) or liquid secondary ion mass spectrometry (LSIMS); a mean mass accuracy error of -1.2 ppm was obtained.
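
    Mass accuracy in such measurements is conventionally reported in parts per million (ppm). As a reminder of the arithmetic only, with invented masses rather than the paper's data:

```python
# ppm mass-error arithmetic only; the masses below are invented, not the paper's data.
def ppm_error(measured_mass, theoretical_mass):
    return (measured_mass - theoretical_mass) / theoretical_mass * 1e6

print(f"{ppm_error(615.17023, 615.17097):+.1f} ppm")   # about -1.2 ppm for this hypothetical ion
```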

  20. Foundations of Observation: Considerations for Developing a Classroom Observation System That Helps Districts Achieve Consistent and Accurate Scores. MET Project, Policy and Practice Brief

    ERIC Educational Resources Information Center

    Joe, Jilliam N.; Tocci, Cynthia M.; Holtzman, Steven L.; Williams, Jean C.

    2013-01-01

    The purpose of this paper is to provide states and school districts with processes they can use to help ensure high-quality data collection during teacher observations. Educational Testing Service's (ETS's) goal in writing it is to share the knowledge and expertise they gained: (1) from designing and implementing scoring processes for the Measures…

  1. Estimation of snow in extratropical cyclones from multiple frequency airborne radar observations. An Expectation-Maximization approach

    NASA Astrophysics Data System (ADS)

    Grecu, M.; Tian, L.; Heymsfield, G. M.

    2017-12-01

    A major challenge in deriving accurate estimates of physical properties of falling snow particles from single frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that snowfall estimates above the freezing level in ETCs are consistent with the triple frequency radar

  2. Pauling's electronegativity equation and a new corollary accurately predict bond dissociation enthalpies and enhance current understanding of the nature of the chemical bond.

    PubMed

    Matsunaga, Nikita; Rogers, Donald W; Zavitsas, Andreas A

    2003-04-18

    Contrary to other recent reports, Pauling's original electronegativity equation, applied as Pauling specified, describes quite accurately homolytic bond dissociation enthalpies of common covalent bonds, including highly polar ones, with an average deviation of +/-1.5 kcal mol(-1) from literature values for 117 such bonds. Dissociation enthalpies are presented for more than 250 bonds, including 79 for which experimental values are not available. Some previous evaluations of accuracy gave misleadingly poor results by applying the equation to cases for which it was not derived and for which it should not reproduce experimental values. Properly interpreted, the results of the equation provide new and quantitative insights into many facets of chemistry such as radical stabilities, factors influencing reactivity in electrophilic aromatic substitutions, the magnitude of steric effects, conjugative stabilization in unsaturated systems, rotational barriers, molecular and electronic structure, and aspects of autoxidation. A new corollary of the original equation expands its applicability and provides a rationale for previously observed empirical correlations. The equation raises doubts about a new bonding theory. Hydrogen is unique in that its electronegativity is not constant.
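
    For reference, a commonly quoted textbook form of Pauling's relation is reproduced below; the exact constant, units, and whether the arithmetic or geometric mean of the homonuclear terms is used vary between presentations, so this should be read as an assumed form rather than the equation exactly as applied in the paper.

```latex
% Commonly quoted form of Pauling's electronegativity equation (assumed constant and units):
% the A-B bond dissociation enthalpy exceeds the mean of the homonuclear values by a term
% quadratic in the electronegativity difference.
\[
  D(\mathrm{A{-}B}) \;\approx\; \tfrac{1}{2}\bigl[D(\mathrm{A{-}A}) + D(\mathrm{B{-}B})\bigr]
  \;+\; 23\,(\chi_{\mathrm{A}} - \chi_{\mathrm{B}})^{2}\ \text{kcal mol}^{-1}
\]
```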

  3. Discrete sensors distribution for accurate plantar pressure analyses.

    PubMed

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts have been tested and compared, to determine which was the most accurate to monitor plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC= 0.95; AP SCC=0.99; vGRF SCC=0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for a plantar pressure recording outside clinical or laboratory settings, for long time monitoring, real time feedback or for whatever activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to its parameters.
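    To illustrate the limiting idea mentioned above, the sketch below implements a generic minmod slope limiter of the textbook kind used to keep second-order schemes monotone near jumps; it is not the paper's RK-DG implementation, and the grid and data are placeholders.

```python
# Generic minmod slope limiter on cell averages (illustrative only).
import numpy as np

def minmod(a, b, c):
    """Return the argument of smallest magnitude if all three share a sign, else 0."""
    s = (np.sign(a) + np.sign(b) + np.sign(c)) / 3.0
    mags = np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c)))
    return np.where(np.abs(s) == 1.0, s * mags, 0.0)

def limited_slopes(u, dx):
    """Cell-wise limited slopes from neighbouring cell averages u."""
    fwd = np.diff(u, append=u[-1]) / dx        # forward differences
    bwd = np.diff(u, prepend=u[0]) / dx        # backward differences
    ctr = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    ctr[0], ctr[-1] = 0.0, 0.0                 # one-sided at the boundaries
    return minmod(ctr, fwd, bwd)

u = np.array([1.0, 1.0, 1.0, 0.2, 0.2, 0.2])   # a sharp jump in cell averages
print(limited_slopes(u, dx=1.0))
```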

  5. 50 CFR 600.746 - Observers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., DEPARTMENT OF COMMERCE MAGNUSON-STEVENS ACT PROVISIONS General Provisions for Domestic Fisheries § 600.746... accompany the observer in a walk through the vessel's major spaces to ensure that no obviously hazardous... observer, as described in paragraph (c) of this section, and for allowing operation of normal observer...

  6. 50 CFR 600.746 - Observers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF COMMERCE MAGNUSON-STEVENS ACT PROVISIONS General Provisions for Domestic Fisheries § 600.746... accompany the observer in a walk through the vessel's major spaces to ensure that no obviously hazardous... observer, as described in paragraph (c) of this section, and for allowing operation of normal observer...

  7. Wringing the last drop of optically stimulated luminescence response for accurate dating of glacial sediments

    NASA Astrophysics Data System (ADS)

    Medialdea, Alicia; Bateman, Mark D.; Evans, David J.; Roberts, David H.; Chiverrell, Richard C.; Clark, Chris D.

    2017-04-01

    BRITICE-CHRONO is a NERC-funded consortium project of more than 40 researchers aiming to establish the retreat patterns of the last British and Irish Ice Sheet. For this purpose, optically stimulated luminescence (OSL) dating, among other dating techniques, has been used in order to establish an accurate chronology. More than 150 samples from glacial environments have been dated and provide key information for modelling of the ice retreat. Nevertheless, luminescence dating of glacial sediments has proven to be challenging: first, glacial sediments were often affected by incomplete bleaching and, secondly, quartz grains within the sediments sampled were often characterized by complex luminescence behaviour, with dim signal and low reproducibility. Specific statistical approaches have been used to overcome the former, enabling the estimated ages to be based on grain populations most likely to have been well bleached. This latest work presents how issues surrounding complex luminescence behaviour were overcome in order to obtain accurate OSL ages. This study has been performed on two samples of bedded sand originating from an ice-walled lake plain in Lincolnshire, UK. Quartz extracts from each sample were artificially bleached and irradiated to known doses. Dose recovery tests have been carried out under different conditions to study the effects of preheat temperature, thermal quenching, contribution of slow components, hot bleach after measuring cycles, and IR stimulation. Measurements have been performed on different luminescence readers to study the possible contribution of instrument reproducibility. These have shown that a great variability can be observed not only among the studied samples but also within a specific site and even a specific sample. In order to determine an accurate chronology and realistic uncertainties for the estimated ages, this variability must be taken into account. Tight acceptance criteria to measured doses from natural, not

  8. Satellite observations of ground water changes in New Mexico

    USDA-ARS?s Scientific Manuscript database

    In 2002 NASA launched the Gravity Recovery and Climate Experiment (GRACE) satellite mission. GRACE consists of two satellites with a separation of about 200 km.  By accurately measuring the separation between the twin satellites, the differences in the gravity field can be determined. Monthly observ...

  9. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and
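    A hedged sketch of the fitting step mentioned above: an asymmetric Gaussian fitted to a detector-response profile with scipy. The profile here is synthetic; the measured intrinsic response and the fitted parameters of the paper are not reproduced.

```python
# Fit an asymmetric Gaussian (different widths on each side of the peak) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def asym_gauss(x, amp, mu, sig_left, sig_right):
    """Gaussian with separate left/right widths about the peak position mu."""
    sigma = np.where(x < mu, sig_left, sig_right)
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-10, 10, 201)
y = asym_gauss(x, 1.0, 0.5, 1.5, 2.5)
y += np.random.default_rng(1).normal(scale=0.01, size=x.size)   # measurement noise

popt, _ = curve_fit(asym_gauss, x, y, p0=[1.0, 0.0, 1.0, 1.0])
print("amp=%.2f mu=%.2f sigma_l=%.2f sigma_r=%.2f" % tuple(popt))
```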

  10. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model were employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating frequency content of the unsteady pressures and evaluation of oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
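    A small sketch of the kind of spectral comparison mentioned above: power spectral densities of two unsteady pressure signals via Welch's method. The sampling rate and the two signals are synthetic stand-ins for the computed and measured pressure-tap records.

```python
# Welch PSD comparison of two synthetic unsteady pressure signals.
import numpy as np
from scipy.signal import welch

fs = 10000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(2)
cfd = np.sin(2 * np.pi * 180 * t) + 0.3 * rng.normal(size=t.size)   # "computed" signal
tdt = np.sin(2 * np.pi * 175 * t) + 0.3 * rng.normal(size=t.size)   # "measured" signal

f_cfd, psd_cfd = welch(cfd, fs=fs, nperseg=1024)
f_tdt, psd_tdt = welch(tdt, fs=fs, nperseg=1024)
print("CFD peak at %.0f Hz, tunnel peak at %.0f Hz"
      % (f_cfd[np.argmax(psd_cfd)], f_tdt[np.argmax(psd_tdt)]))
```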

  11. The adaptive observer. [liapunov synthesis, single-input single-output, and reduced observers

    NASA Technical Reports Server (NTRS)

    Carroll, R. L.

    1973-01-01

    An adaptive means for determining the state of a linear time-invariant differential system with unknown parameters is described, enabling the simple generation of state estimates from available measurements in systems whose performance criteria mandate a control that depends on unavailable measurements. A single-input single-output adaptive observer and a reduced adaptive observer are developed. The basic ideas behind both the adaptive observer and the nonadaptive observer are examined. The Liapunov synthesis technique is surveyed, and the technique is applied to the adaptive algorithm of the adaptive observer.
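    For context, the sketch below is a non-adaptive, discrete-time Luenberger observer that illustrates only the basic state-reconstruction idea behind the observers discussed above; the adaptive machinery for unknown parameters is not reproduced, and all matrices are arbitrary illustrative values.

```python
# Discrete-time Luenberger observer: reconstruct the full state from a partial measurement.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # assumed known plant matrices
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])               # only the first state is measured
L = np.array([[0.5], [0.8]])             # observer gain (chosen so A - L*C is stable)

x = np.array([[1.0], [-0.5]])            # true (unavailable) state
x_hat = np.zeros((2, 1))                 # observer estimate
for k in range(50):
    u = np.array([[1.0]])                # arbitrary input
    y = C @ x                            # available measurement
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)   # observer update
    x = A @ x + B @ u                    # true plant update

print("estimation error:", (x - x_hat).ravel())
```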

  12. Evaluating the application of multi-satellite observation in hydrologic modeling

    USDA-ARS?s Scientific Manuscript database

    When monitoring local or regional hydrosphere dynamics for applications such as agricultural productivity or drought and flooding events, it is necessary to have accurate, high-resolution estimates of terrestrial water and energy storages. Though in-situ observations provide reliable estimates of hy...

  13. Mechanism for accurate, protein-assisted DNA annealing by Deinococcus radiodurans DdrB

    PubMed Central

    Sugiman-Marangos, Seiji N.; Weiss, Yoni M.; Junop, Murray S.

    2016-01-01

    Accurate pairing of DNA strands is essential for repair of DNA double-strand breaks (DSBs). How cells achieve accurate annealing when large regions of single-strand DNA are unpaired has remained unclear despite many efforts focused on understanding proteins, which mediate this process. Here we report the crystal structure of a single-strand annealing protein [DdrB (DNA damage response B)] in complex with a partially annealed DNA intermediate to 2.2 Å. This structure and supporting biochemical data reveal a mechanism for accurate annealing involving DdrB-mediated proofreading of strand complementarity. DdrB promotes high-fidelity annealing by constraining specific bases from unauthorized association and only releases annealed duplex when bound strands are fully complementary. To our knowledge, this mechanism provides the first understanding for how cells achieve accurate, protein-assisted strand annealing under biological conditions that would otherwise favor misannealing. PMID:27044084

  14. STRengthening Analytical Thinking for Observational Studies: the STRATOS initiative

    PubMed Central

    Sauerbrei, Willi; Abrahamowicz, Michal; Altman, Douglas G; le Cessie, Saskia; Carpenter, James

    2014-01-01

    The validity and practical utility of observational medical research depends critically on good study design, excellent data quality, appropriate statistical methods and accurate interpretation of results. Statistical methodology has seen substantial development in recent times. Unfortunately, many of these methodological developments are ignored in practice. Consequently, design and analysis of observational studies often exhibit serious weaknesses. The lack of guidance on vital practical issues discourages many applied researchers from using more sophisticated and possibly more appropriate methods when analyzing observational studies. Furthermore, many analyses are conducted by researchers with a relatively weak statistical background and limited experience in using statistical methodology and software. Consequently, even ‘standard’ analyses reported in the medical literature are often flawed, casting doubt on their results and conclusions. An efficient way to help researchers to keep up with recent methodological developments is to develop guidance documents that are spread to the research community at large. These observations led to the initiation of the strengthening analytical thinking for observational studies (STRATOS) initiative, a large collaboration of experts in many different areas of biostatistical research. The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies. The guidance is intended for applied statisticians and other data analysts with varying levels of statistical education, experience and interests. In this article, we introduce the STRATOS initiative and its main aims, present the need for guidance documents and outline the planned approach and progress so far. We encourage other biostatisticians to become involved. PMID:25074480

  15. STRengthening analytical thinking for observational studies: the STRATOS initiative.

    PubMed

    Sauerbrei, Willi; Abrahamowicz, Michal; Altman, Douglas G; le Cessie, Saskia; Carpenter, James

    2014-12-30

    The validity and practical utility of observational medical research depends critically on good study design, excellent data quality, appropriate statistical methods and accurate interpretation of results. Statistical methodology has seen substantial development in recent times. Unfortunately, many of these methodological developments are ignored in practice. Consequently, design and analysis of observational studies often exhibit serious weaknesses. The lack of guidance on vital practical issues discourages many applied researchers from using more sophisticated and possibly more appropriate methods when analyzing observational studies. Furthermore, many analyses are conducted by researchers with a relatively weak statistical background and limited experience in using statistical methodology and software. Consequently, even 'standard' analyses reported in the medical literature are often flawed, casting doubt on their results and conclusions. An efficient way to help researchers to keep up with recent methodological developments is to develop guidance documents that are spread to the research community at large. These observations led to the initiation of the strengthening analytical thinking for observational studies (STRATOS) initiative, a large collaboration of experts in many different areas of biostatistical research. The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies. The guidance is intended for applied statisticians and other data analysts with varying levels of statistical education, experience and interests. In this article, we introduce the STRATOS initiative and its main aims, present the need for guidance documents and outline the planned approach and progress so far. We encourage other biostatisticians to become involved. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  16. Monitoring circuit accurately measures movement of solenoid valve

    NASA Technical Reports Server (NTRS)

    Gillett, J. D.

    1966-01-01

    A monitoring circuit accurately measures the travel of a solenoid-operated valve in a control system powered by direct current. This system is currently in operation with a 28-vdc power system used for control of fluids in liquid rocket motor test facilities.

  17. Symptoms of change in multi-scale observations of arctic ecosystem carbon cycling

    NASA Astrophysics Data System (ADS)

    Stoy, P. C.; Williams, M. D.; Hartley, I. P.; Street, L.; Hill, T. C.; Prieto-Blanco, A.; Wayolle, A.; Disney, M.; Evans, J.; Fletcher, B.; Poyatos, R.; Wookey, P.; Merbold, L.; Wade, T. J.; Moncrieff, J.

    2009-12-01

    Arctic ecosystems are responding rapidly to observed climate change. Quantifying the magnitude of these changes, and their implications for the climate system, requires observations of their current structure and function, as well as extrapolation and modelling (i.e. ‘upscaling’) across time and space. Here, we describe the major results of the International Polar Year (IPY) ABACUS project, a multi-scale investigation across arctic Fennoscandia that couples plant and soil process studies, isotope analyses, flux and micrometeorological measurements, process modelling, and aircraft and satellite observations to improve predictions of the response of the arctic terrestrial biosphere to global change. We begin with a synthesis of eddy covariance observations from the global FLUXNET database. We demonstrate that a simple model parameterized using pan-arctic chamber measurements explains over 80% of the variance of half-hourly CO2 fluxes during the growing season across most arctic and montane tundra ecosystems given accurate measurements of leaf area index (LAI), which agrees with the recently proposed ‘functional convergence’ paradigm for tundra vegetation. The ability of MODIS to deliver accurate LAI estimates is briefly discussed and an adjusted algorithm is presented and validated using direct observations. We argue for an Information Theory-based framework for upscaling in Earth science by conceptualizing multi-scale research as a transfer of information across scales. We then demonstrate how error in upscaled arctic C flux estimates can be reduced to less than 4% from their high-resolution counterpart by formally preserving the information content of high spatial and spectral resolution aircraft and satellite imagery. Jaynes’ classic Maximum Entropy (MaxEnt) principle is employed to incorporate logical, biological and physical constraints to reduce error in downscaled flux estimates. Errors are further reduced by assimilating flux, biological and remote

  18. ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.

    PubMed

    Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P

    2016-11-01

    ASTRAL, SEDAN and DRAGON scores are three well-validated scores for stroke outcome prediction. Whether these scores predict stroke outcome more accurately compared with physicians interested in stroke was investigated. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to scores' predictions in the same scenarios. An estimate was considered accurate if it was within 95% confidence intervals of actual outcome. In all, 244 participants from 32 different countries responded assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of 3-month modified Rankin score (mRS) > 2 were accurate compared with 609 (86.5%) of ASTRAL score estimates (P < 0.0001). 394 (61.2%) of physicians' estimates about the percentage probability of post-thrombolysis symptomatic intracranial haemorrhage were accurate compared with 583 (90.5%) of SEDAN score estimates (P < 0.0001). 160 (24.8%) of physicians' estimates about post-thrombolysis 3-month percentage probability of mRS 0-2 were accurate compared with 240 (37.3%) DRAGON score estimates (P < 0.0001). 260 (40.4%) of physicians' estimates about the percentage probability of post-thrombolysis mRS 5-6 were accurate compared with 518 (80.4%) DRAGON score estimates (P < 0.0001). ASTRAL, DRAGON and SEDAN scores predict outcome of acute ischaemic stroke patients with higher accuracy compared to physicians interested in stroke. © 2016 EAN.

  19. Improving the Transition of Earth Satellite Observations from Research to Operations

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Lapenta, William M.; Jedlovec, Gary J.

    2004-01-01

    There are significant gaps between the observations, models, and decision support tools that make use of new data. These challenges include: 1) Decreasing the time to incorporate new satellite data into operational forecast assimilation systems, 2) Blending in-situ and satellite observing systems to produce the most accurate and comprehensive data products and assessments, 3) Accelerating the transition from research to applications through national test beds, field campaigns, and pilot demonstrations, and 4) Developing the partnerships and organizational structures to effectively transition new technology into operations. At the Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, Alabama, a NASA-NOAA-University collaboration has been developed to accelerate the infusion of NASA Earth science observations, data assimilation and modeling research into NWS forecast operations and decision-making. The SPoRT Center research focus is to improve forecasts through new observation capability and the regional prediction objectives of the US Weather Research Program dealing with 0-1 day forecast issues such as convective initiation and 24-hr quantitative precipitation forecasting. The near real-time availability of high-resolution experimental products of the atmosphere, land, and ocean from the Moderate Resolution Imaging Spectroradiometer (MODIS), the Atmospheric Infrared Sounder (AIRS), and lightning mapping systems provides an opportunity for science and algorithm risk reduction, and for application assessment prior to planned observations from the next generation of operational low Earth orbiting and geostationary Earth orbiting satellites. This paper describes the process for the transition of experimental products into forecast operations, current products undergoing assessment by forecasters, and plans for the future. The SPoRT Web page is at (http://www.ghcc.msfc.nasa.gov/sport).

  20. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    NASA Astrophysics Data System (ADS)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard- Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makato Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Mikato Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. This will in turn lead to an

  1. Stimulating Interest in Natural Sciences and Training Observation Skills: The UAP Observations Reporting Scheme

    NASA Astrophysics Data System (ADS)

    Ailleris, P.

    2012-04-01

    how to record as accurately as possible a UAP event, in order to facilitate future identification and study. Lastly, one of the project's objectives is also to collect reports of trained observers (astronomers) of apparently inexplicable events for further analysis. Certainly, whenever there are unexplained observations there is the possibility that scientists could learn something new by studying these events. During this presentation, we will provide an overview of the project, present the website's extensive and well illustrated list of misidentifications, describe how people can further check details, develop their knowledge (e.g. satellite paths, stars/planets charts, characteristics of meteors, pictures of sprites, clouds classification) and enhance their observation skills. In order to show the relevance of the project, a short illustrated list of UAP cases received by the project will be featured, both explained and inexplicable. Finally, we will explore potential plans for strengthening the visibility and usefulness of the project, while requesting feedback from the community of atmospheric and natural sciences' researchers. (1) www.uapreporting.org (*): Disclaimer: Work undertaken as personal work; not endorsed as research activity by ESA.

  2. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to fix a complicated non-linear model function. However, it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using convolutional neural networks (CNNs), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
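    A hypothetical, minimal CNN regressor in the spirit described above: it maps a small layout/aerial-image clip to a single critical-dimension (CD) value. The PyTorch dependency, layer sizes, and input shape are assumptions for illustration, not the paper's model.

```python
# Minimal CNN regression sketch: layout clip -> predicted CD (illustrative only).
import torch
import torch.nn as nn

class ResistCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global average pooling
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = ResistCNN()
clips = torch.randn(8, 1, 64, 64)        # batch of hypothetical 64x64 clips
print(model(clips).shape)                # -> torch.Size([8, 1]) predicted CDs
```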

  3. Combination of Wavefunction and Density Functional Approximations for Describing Electronic Correlation

    NASA Astrophysics Data System (ADS)

    Garza, Alejandro J.

    history in quantum chemistry (practical implementations have appeared in the literature since the 1970s). However, this kind of technique has not achieved widespread use due to problems such as double counting of correlation and the symmetry dilemma--the fact that wavefunction methods respect the symmetries of the Hamiltonian, while modern functionals are designed to work with broken symmetry densities. Here, particular mathematical features of pCCD and CCD0 are exploited to avoid these problems in an efficient manner. The two resulting families of approximations, denoted as pCCD+DFT and CCD0+DFT, are shown to be able to describe static and dynamic correlation in standard benchmark calculations. Furthermore, it is also shown that CCD0+DFT lends itself to combination with correlation from the direct random phase approximation (dRPA). Inclusion of dRPA in the long range via the technique of range separation allows for the description of dispersion correlation, the remaining part of the correlation. Thus, when combined with the dRPA, CCD0+DFT can account for all three types of electron correlation that are necessary to accurately describe molecular systems. Lastly, applications of CCD0+DFT to actinide chemistry are considered in this work. The accuracy of CCD0+DFT for predicting equilibrium geometries and vibrational frequencies of actinide molecules and ions is assessed and compared to that of well-established quantum chemical methods. For this purpose, the f0 actinyl series (UO2^2+, NpO2^3+, PuO2^4+), the isoelectronic NUN, and the thorium (ThO, ThO2+) and nobelium (NoO, NoO2) oxides are studied. It is shown that the CCD0+DFT description of these species agrees with available experimental data and is comparable with the results given by the highest-level calculations that are possible for such heavy compounds while being, at least, an order of magnitude lower in computational cost.

  4. Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method

    NASA Astrophysics Data System (ADS)

    Habel, Branislav; Janak, Juraj

    2014-05-01

    Pre-processing of the gravity data measured by a superconducting gravimeter involves removing spikes, offsets and gaps. Their presence in observations can limit the data analysis and degrade the quality of the obtained results. Short data gaps are filled with a theoretical signal in order to obtain continuous records of gravity. This requires an accurate tidal model and possibly the atmospheric pressure at the observation site. The poster presents the design of an algorithm for the interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines adjustment of trend parameters, filtering of noise and prediction. It allows the interpolation of missing data up to a few hours without the need for any other information. Appropriate parameters for the covariance function are found using Bayes' theorem in a modified optimization process. The accuracy of the method is improved by the rejection of outliers before interpolation. For filling longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on the superconducting gravity observations at several selected stations of the Global Geodynamics Project. Testing demonstrates its reliability and offers results comparable with the standard approach implemented in the ETERNA software package, without the need for an accurate tidal model.
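    A minimal least-squares collocation sketch under an assumed Gaussian covariance model: a gap in a 1-min record is predicted from the surrounding observations via s_hat = C_sp (C_pp + D)^-1 l. The covariance parameters and the synthetic record are illustrative, not those selected by the Bayesian optimization described above.

```python
# Least-squares collocation prediction of a gap (illustrative parameters).
import numpy as np

def cov(t1, t2, variance=1.0, corr_len=10.0):
    """Gaussian covariance between epochs t1 and t2 (minutes)."""
    return variance * np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / corr_len) ** 2)

t_obs = np.concatenate([np.arange(0, 60.0), np.arange(90.0, 150.0)])  # gap from 60 to 90 min
signal = np.sin(2 * np.pi * t_obs / 45.0)
obs = signal + np.random.default_rng(3).normal(scale=0.05, size=t_obs.size)

t_gap = np.arange(60.0, 90.0)
C_pp = cov(t_obs, t_obs) + 0.05 ** 2 * np.eye(t_obs.size)   # signal covariance + noise
C_sp = cov(t_gap, t_obs)
prediction = C_sp @ np.linalg.solve(C_pp, obs)               # s_hat = C_sp (C_pp + D)^-1 l
print(prediction[:5])
```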

  5. 48 CFR 552.215-72 - Price Adjustment-Failure To Provide Accurate Information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Price Adjustment-Failure... Provisions and Clauses 552.215-72 Price Adjustment—Failure To Provide Accurate Information. As prescribed in 515.408(d), insert the following clause: Price Adjustment—Failure To Provide Accurate Information (AUG...

  6. How accurately do force fields represent protein side chain ensembles?

    PubMed

    Petrović, Dušan; Wang, Xue; Strodel, Birgit

    2018-05-23

    Although the protein backbone is the most fundamental part of the structure, the fine-tuning of side-chain conformations is important for protein function, for example, in protein-protein and protein-ligand interactions, and also in enzyme catalysis. While several benchmarks testing the performance of protein force fields for side chain properties have already been published, they often considered only a few force fields and were not tested against the same experimental observables; hence, they are not directly comparable. In this work, we explore the ability of twelve force fields, which are different flavors of AMBER, CHARMM, OPLS, or GROMOS, to reproduce average rotamer angles and rotamer populations obtained from extensive NMR studies of the 3 J and residual dipolar coupling constants for two small proteins: ubiquitin and GB3. Based on a total of 196 μs sampling time, our results reveal that all force fields identify the correct side chain angles, while the AMBER and CHARMM force fields clearly outperform the OPLS and GROMOS force fields in estimating rotamer populations. The three best force fields for representing the protein side chain dynamics are AMBER 14SB, AMBER 99SB*-ILDN, and CHARMM36. Furthermore, we observe that the side chain ensembles of buried amino acid residues are generally more accurately represented than those of the surface exposed residues. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.

  7. Generating and Describing Affective Eye Behaviors

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Zheng

    The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents based on parameters selected from the AU-coded facial expression database and real-time eye movement data (pupil size, blink rate and saccade). A rule-based approach to generate primary (joyful, sad, angry, afraid, disgusted and surprised) and intermediate emotions (emotions that can be represented as a mixture of two primary emotions) utilizing MPEG-4 FAPs (facial animation parameters) is introduced. Meanwhile, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language), which enables authors to describe and generate the emotional eye movements of virtual agents, is proposed.

  8. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    PubMed

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  9. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli--the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently-employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory.

  10. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.

    1995-01-01

    We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.

  11. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, S.A.; Killeen, K.P.; Lear, K.L.

    1995-03-14

    The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%. 4 figs.

  12. Quasistatic limit of the strong-field approximation describing atoms in intense laser fields: Circular polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Jaroslaw H.

    2011-03-15

    In the recent work of Vanne and Saenz [Phys. Rev. A 75, 063403 (2007)] the quasistatic limit of the velocity gauge strong-field approximation describing the ionization rate of atomic or molecular systems exposed to linearly polarized laser fields was derived. It was shown that in the low-frequency limit the ionization rate is proportional to the laser frequency {omega} (for a constant intensity of the laser field). In the present work I show that for circularly polarized laser fields the ionization rate is proportional to {omega}{sup 4} for H(1s) and H(2s) atoms, to {omega}{sup 6} for H(2p{sub x}) and H(2p{sub y}) atoms, and to {omega}{sup 8} for H(2p{sub z}) atoms. The analytical expressions for asymptotic ionization rates (which become nearly accurate in the limit {omega}{yields}0) contain no summations over multiphoton contributions. For very low laser frequencies (optical or infrared) these expressions usually remain within order-of-magnitude agreement with the velocity gauge strong-field approximation.

  13. Indicators of hearing protection use: self-report and researcher observation.

    PubMed

    Griffin, Stephanie C; Neitzel, Richard; Daniell, William E; Seixas, Noah S

    2009-10-01

    Hearing protection devices (HPD) are commonly used to prevent occupational noise-induced hearing loss. There is a large body of research on hearing protection use in industry, and much of it relies on workers' self-reported use of hearing protection. Based on previous studies in fixed industry, worker self-report has been accepted as an adequate and reliable tool to measure this behavior among workers in many industrial sectors. However, recent research indicates self-reported hearing protection use may not accurately reflect subject behavior in industries with variable noise exposure. This study compares workers' self-reported use of hearing protection with their observed use in three workplaces with two types of noise environments: one construction site and one fixed industry facility with a variable noise environment, and one fixed industry facility with a steady noise environment. Subjects reported their use of hearing protection on self-administered surveys and activity cards, which were validated using researcher observations. The primary outcome of interest in the study was the difference between the self-reported use of hearing protection in high noise on the activity card and survey: (1) over one workday, and (2) over a 2-week period. The primary hypotheses for the study were that subjects in workplaces with variable noise environments would report their use of HPDs less accurately than subjects in the stable noise environment, and that reporting would be less accurate over 2 weeks than over 1 day. In addition to noise variability, other personal and workplace factors thought to affect the accuracy of self-reported hearing protection use were also analyzed. This study found good agreement between subjects' self-reported HPD use and researcher observations. Workers in the steady noise environment self-reported hearing protection use more accurately on the surveys than workers in variable noise environments. The findings demonstrate the potential importance

  14. Helicopter flight dynamics simulation with a time-accurate free-vortex wake model

    NASA Astrophysics Data System (ADS)

    Ribera, Maria

    This dissertation describes the implementation and validation of a coupled rotor-fuselage simulation model with a time-accurate free-vortex wake model capable of capturing the response to maneuvers of arbitrary amplitude. The resulting model has been used to analyze different flight conditions, including both steady and transient maneuvers. The flight dynamics model is based on a system of coupled nonlinear rotor-fuselage differential equations in first-order, state-space form. The rotor model includes flexible blades, with coupled flap-lag-torsion dynamics and swept tips; the rigid body dynamics are modeled with the non-linear Euler equations. The free wake models the rotor flow field by tracking the vortices released at the blade tips. Their behavior is described by the equations of vorticity transport, which are approximated using finite differences and solved using a time-accurate numerical scheme. The flight dynamics model can be solved as a system of non-linear algebraic trim equations to determine the steady state solution, or integrated in time in response to pilot-applied controls. This study also implements new approaches to reduce the prohibitive computational costs associated with such complex models without losing accuracy. The mathematical model was validated for trim conditions in level flight, turns, climbs and descents. The results obtained correlate well with flight test data, both in level flight and in turning, climbing, and descending flight. The swept tip model was also found to improve the trim predictions, particularly at high speed. The behavior of the rigid body and the rotor blade dynamics were also studied and related to the aerodynamic load distributions obtained with the free wake induced velocities. The model was also validated in a lateral maneuver from hover. The results show improvements in the on-axis prediction, and indicate a possible relation between the off-axis prediction and the lack of rotor-body interaction

  15. Accurate Recovery of H i Velocity Dispersion from Radio Interferometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ianjamasimanana, R.; Blok, W. J. G. de; Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au

    2017-05-01

    Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratio. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.
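    A small sketch of estimating velocity dispersion from a line profile as an intensity-weighted second moment, using an idealized noiseless Gaussian line; real survey cubes require the noise, beam, and residual-scaling handling discussed above.

```python
# Second-moment velocity dispersion of an idealized H i line profile.
import numpy as np

v = np.linspace(-100.0, 100.0, 401)              # velocity axis, km/s
true_sigma = 12.0
profile = np.exp(-0.5 * (v / true_sigma) ** 2)   # idealized Gaussian line profile

v_mean = np.sum(v * profile) / np.sum(profile)                            # first moment
sigma = np.sqrt(np.sum((v - v_mean) ** 2 * profile) / np.sum(profile))    # second moment
print(f"recovered dispersion: {sigma:.1f} km/s (input {true_sigma} km/s)")
```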

  16. Incorporating Parallel Computing into the Goddard Earth Observing System Data Assimilation System (GEOS DAS)

    NASA Technical Reports Server (NTRS)

    Larson, Jay W.

    1998-01-01

    Atmospheric data assimilation is a method of combining actual observations with model forecasts to produce a more accurate description of the earth system than the observations or forecast alone can provide. The output of data assimilation, sometimes called the analysis, consists of regular, gridded datasets of observed and unobserved variables. Analysis plays a key role in numerical weather prediction and is becoming increasingly important for climate research. These applications, and the need for timely validation of scientific enhancements to the data assimilation system, pose computational demands that are best met by distributed parallel software. The mission of the NASA Data Assimilation Office (DAO) is to provide datasets for climate research and to support NASA satellite and aircraft missions. The system used to create these datasets is the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The core components of the GEOS DAS are: the GEOS General Circulation Model (GCM), the Physical-space Statistical Analysis System (PSAS), the Observer, the on-line Quality Control (QC) system, the Coupler (which feeds analysis increments back to the GCM), and an I/O package for processing the large amounts of data the system produces (which will be described in another presentation in this session). The discussion will center on the following issues: the computational complexity for the whole GEOS DAS, assessment of the performance of the individual elements of GEOS DAS, and parallelization strategy for some of the components of the system.
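    A generic, minimal sketch of the analysis step in data assimilation (textbook optimal-interpolation form), illustrating how observations and a model background are combined; this is not the PSAS solver itself, and all matrices are illustrative placeholders.

```python
# Textbook analysis step: x_a = x_b + K (y - H x_b), with K = B H^T (H B H^T + R)^-1.
import numpy as np

x_b = np.array([280.0, 282.0, 284.0])          # background (forecast) state
B = np.array([[1.0, 0.5, 0.2],                 # background error covariance
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
H = np.array([[1.0, 0.0, 0.0],                 # observation operator
              [0.0, 0.0, 1.0]])
R = 0.5 * np.eye(2)                            # observation error covariance
y = np.array([279.0, 285.5])                   # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman-type gain
x_a = x_b + K @ (y - H @ x_b)                  # analysis = background + increment
print("analysis:", x_a)
```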

  17. Accurate predictions of population-level changes in sequence and structural properties of HIV-1 Env using a volatility-controlled diffusion model

    PubMed Central

    DeLeon, Orlando; Hodis, Hagit; O’Malley, Yunxia; Johnson, Jacklyn; Salimi, Hamid; Zhai, Yinjie; Winter, Elizabeth; Remec, Claire; Eichelberger, Noah; Van Cleave, Brandon; Puliadi, Ramya; Harrington, Robert D.; Stapleton, Jack T.; Haim, Hillel

    2017-01-01

    The envelope glycoproteins (Envs) of HIV-1 continuously evolve in the host by random mutations and recombination events. The resulting diversity of Env variants circulating in the population and their continuing diversification process limit the efficacy of AIDS vaccines. We examined the historic changes in Env sequence and structural features (measured by integrity of epitopes on the Env trimer) in a geographically defined population in the United States. As expected, many Env features were relatively conserved during the 1980s. From this state, some features diversified whereas others remained conserved across the years. We sought to identify “clues” to predict the observed historic diversification patterns. Comparison of viruses that cocirculate in patients at any given time revealed that each feature of Env (sequence or structural) exists at a defined level of variance. The in-host variance of each feature is highly conserved among individuals but can vary between different HIV-1 clades. We designate this property “volatility” and apply it to model evolution of features as a linear diffusion process that progresses with increasing genetic distance. Volatilities of different features are highly correlated with their divergence in longitudinally monitored patients. Volatilities of features also correlate highly with their population-level diversification. Using volatility indices measured from a small number of patient samples, we accurately predict the population diversity that developed for each feature over the course of 30 years. Amino acid variants that evolved at key antigenic sites are also predicted well. Therefore, small “fluctuations” in feature values measured in isolated patient samples accurately describe their potential for population-level diversification. These tools will likely contribute to the design of population-targeted AIDS vaccines by effectively capturing the diversity of currently circulating strains and addressing properties

  18. Accurate determination of aldehydes in amine catalysts or amines by 2,4-dinitrophenylhydrazine derivatization.

    PubMed

    Barman, Bhajendra N

    2014-01-31

    Carbonyl compounds, specifically aldehydes, present in amine catalysts or amines are determined by reversed-phase liquid chromatography using ultraviolet detection of their corresponding 2,4-dinitrophenylhydrazones. The primary focus has been to establish optimum conditions for determining aldehydes accurately because these add exposure concerns when the amine catalysts are used to manufacture polyurethane products. Concentrations of aldehydes determined by this method are found to vary with the pH of the aqueous amine solution and the derivatization time, the latter being problematic when the derivatization reaction proceeds slowly and not to completion in neutral and basic media. Accurate determination of aldehydes in amines through derivatization can be carried out at an effective solution pH of about 2 and with derivatization time of 20min. Hydrochloric acid has been used for neutralization of an amine. For complete derivatization, it is essential to protonate all nitrogen atoms in the amine. An approach for the determination of an adequate amount of acid needed for complete derivatization has been described. Several 0.2M buffer solutions varying in pH from 4 to 8 have also been used to make amine solutions for carrying out derivatization of aldehydes. These solutions have effective pHs of 10 or higher and provide much lower aldehyde concentrations compared to their true values. Mechanisms for the formation of 2,4-dinitrophenylhydrazones in both acidic and basic media are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. New optical package and algorithms for accurate estimation and interactive recording of the cloud cover information over land and sea

    NASA Astrophysics Data System (ADS)

    Krinitskiy, Mikhail; Sinitsyn, Alexey; Gulev, Sergey

    2014-05-01

    Cloud fraction is a critical parameter for the accurate estimation of short-wave and long-wave radiation - one of the most important surface fluxes over sea and land. Massive estimates of the total cloud cover, as well as cloud amount for different layers of clouds, are available from visual observations, satellite measurements and reanalyses. However, these data are subject to different uncertainties and need continuous validation against highly accurate in-situ measurements. Sky imaging with a high-resolution fish-eye camera provides an excellent opportunity for collecting cloud cover data supplemented with additional characteristics hardly available from routine visual observations (e.g. structure of cloud cover under broken cloud conditions, parameters of the distribution of cloud dimensions). We present an operational automatic observational package based on a fish-eye camera taking sky images with high temporal resolution (up to 1 Hz) and a spatial resolution of 968x648 px. This spatial resolution has been justified as optimal by several sensitivity experiments. For use of the package on a research vessel, where horizontal positioning becomes critical, a special hardware and software extension to the package has been developed. These modules provide the explicit detection of the optimal moment for shooting. For the post-processing of sky images we developed software implementing an algorithm for filtering the sunburn effect in cases of small and moderate cloud cover and broken cloud conditions. The same algorithm accurately quantifies the cloud fraction by analyzing the color mixture at each point and introducing the so-called "grayness rate index" for every pixel. The accuracy of the algorithm has been tested using data collected during several campaigns in 2005-2011 in the North Atlantic Ocean. The collection of images included more than 3000 images for different cloud conditions supplied with observations of standard parameters. The system is
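    A hedged sketch of per-pixel cloud masking on a sky image. The paper's "grayness rate index" is not specified in the abstract, so a common red/blue ratio threshold stands in as a generic substitute, and the image array is synthetic.

```python
# Per-pixel cloud masking on a synthetic RGB sky image via a red/blue ratio threshold.
import numpy as np

# Synthetic sky image, values in [0, 1]: mostly blue sky plus a whitish "cloud" patch.
img = np.zeros((648, 968, 3))
img[..., 2] = 0.8                              # blue channel dominates (clear sky)
img[..., 0] = 0.2
img[200:400, 300:600, :] = 0.9                 # near-gray patch standing in for cloud

red, blue = img[..., 0], img[..., 2]
ratio = red / np.clip(blue, 1e-6, None)        # per-pixel red/blue ratio
cloud_mask = ratio > 0.6                       # near-gray pixels classified as cloud
print(f"cloud fraction: {cloud_mask.mean():.2f}")
```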

  20. Towards an accurate real-time locator of infrasonic sources

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source through media with stochastic and rapidly space-varying conditions. Hence, their travel time, the amplitude at sensor recordings, and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a reduced computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate, provided adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, although this leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histograms" (CRHs). Another is the outcome of previous findings of a linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to regional distances of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability
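    A compact sketch of the Bayesian idea summarized above, reduced to a 1-D range grid: posterior proportional to prior times a Gaussian travel-time likelihood. The celerity, uncertainties, station geometry, and arrival times are placeholders, not the BISL parameterization.

```python
# Bayesian grid-search localization on a 1-D range axis (illustrative only).
import numpy as np

stations = np.array([250.0, 600.0, 900.0])        # station ranges along the line, km
true_range = 400.0
celerity = 0.30                                   # assumed mean celerity, km/s
sigma_t = 20.0                                    # assumed travel-time uncertainty, s

# Synthetic arrival times (relative to a known origin time) with noise.
rng = np.random.default_rng(6)
arrivals = np.abs(stations - true_range) / celerity + rng.normal(scale=5.0, size=3)

grid = np.linspace(0.0, 1000.0, 2001)             # candidate source ranges
prior = np.ones_like(grid)                        # flat prior over the grid
log_like = np.zeros_like(grid)
for sta, t_obs in zip(stations, arrivals):
    t_pred = np.abs(sta - grid) / celerity
    log_like += -0.5 * ((t_obs - t_pred) / sigma_t) ** 2

posterior = prior * np.exp(log_like - log_like.max())
posterior /= posterior.sum()
print("MAP range estimate: %.0f km" % grid[np.argmax(posterior)])
```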

  1. Additional Evidence for the Accuracy of Biographical Data: Long-Term Retest and Observer Ratings.

    ERIC Educational Resources Information Center

    Shaffer, Garnett Stokes; And Others

    1986-01-01

    Investigated accuracy of responses to biodata questionnaire using a test-retest design and informed external observers for verification. Responses from 237 subjects and 200 observers provided evidence that many responses to biodata questionnaire were accurate. Assessed sources of inaccuracy, including social desirability effects, and noted…

  2. Mars Express Bistatic Radar Observations 2016

    NASA Astrophysics Data System (ADS)

    Andert, Tom; Simpson, Richard A.; Pätzold, Martin; Kahan, Daniel S.; Remus, Stefan; Oudrhiri, Kamal

    2017-04-01

    One objective of the Mars Express Radio Science Experiment (MaRS) is to address the dielectric properties and surface roughness of Mars, which can be determined by means of a surface scattering experiment, also known as bistatic radar (BSR). The radio subsystem transmitter located on board the Mars Express spacecraft beams right circularly polarized (RCP) radio signals at two wavelengths - 3.6 cm (X-Band) and 13 cm (S-Band) - toward Mars' surface. Part of the impinging radiation is then scattered toward a receiver at a ground station on Earth and both the right and left circularly polarized echo components (RCP and LCP, respectively) are recorded. The dielectric constant can be derived in this configuration from the RCP-to-LCP power ratio. This approach eliminates the need for absolute end-to-end calibration in favor of relative calibration of the RCP and LCP ground receiver channels. Nonetheless, accurate relative calibration of the two receiving channels remains challenging. The most favorable configuration for bistatic radar experiments is around Earth-Mars opposition, which occurs approximately every two years. In 2016 the minimum distance of about 0.5 AU was reached on May 30th; eleven BSR experiments were successfully conducted between the end of April and mid-June. The specular point tracks during two experiments over the Syrtis Major region were very similar on April 27th and June 2nd, and the data were collected using the same Earth-based antenna. The separation in time and the different observing angles provide an opportunity to check reproducibility of the calibrations and analysis methods. The paper will illustrate the general spacecraft-to-ground BSR observation technique and describe in detail the calibration procedures at the ground station needed to perform the relative calibration of the two receiving channels. Results from the calibrations and the surface observations will be shown for the two MaRS experiments over Syrtis Major.
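
    The abstract does not give the inversion formula used by MaRS. As a hedged illustration of how a dielectric constant can be tied to the ratio of the two circularly polarized echo components, the sketch below builds the circular components from the Fresnel reflection coefficients at the specular incidence angle for a smooth, non-magnetic surface and numerically inverts an observed power ratio; handedness and sign conventions differ between authors, so the numbers and the co/cross labeling are illustrative only.

    ```python
    # Sketch: dielectric constant from the ratio of circularly polarized echo powers.
    # Assumptions: smooth-surface specular reflection, non-magnetic surface, and the
    # textbook decomposition of the reflected field into (Rh +/- Rv)/2 circular
    # components; which sense is "co-polarized" depends on the chosen convention.
    import numpy as np
    from scipy.optimize import brentq

    def fresnel(eps, theta):
        """Fresnel amplitude coefficients at incidence angle theta (radians)."""
        s = np.sqrt(eps - np.sin(theta) ** 2)
        rh = (np.cos(theta) - s) / (np.cos(theta) + s)                # perpendicular (s) pol
        rv = (eps * np.cos(theta) - s) / (eps * np.cos(theta) + s)    # parallel (p) pol
        return rh, rv

    def circular_power_ratio(eps, theta):
        rh, rv = fresnel(eps, theta)
        p_plus = abs(rh + rv) ** 2 / 4.0
        p_minus = abs(rh - rv) ** 2 / 4.0
        return p_minus / p_plus        # power ratio of the two circular echo components

    def invert_dielectric(ratio_obs, theta, lo=1.5, hi=10.0):
        return brentq(lambda eps: circular_power_ratio(eps, theta) - ratio_obs, lo, hi)

    # Example with made-up numbers: incidence 55 deg, observed power ratio 2.0
    print(invert_dielectric(2.0, np.radians(55.0)))
    ```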

  3. Solution of the surface Euler equations for accurate three-dimensional boundary-layer analysis of aerodynamic configurations

    NASA Technical Reports Server (NTRS)

    Iyer, V.; Harris, J. E.

    1987-01-01

    The three-dimensional boundary-layer equations in the limit as the normal coordinate tends to infinity are called the surface Euler equations. The present paper describes an accurate method for generating edge conditions for three-dimensional boundary-layer codes using these equations. The inviscid pressure distribution is first interpolated to the boundary-layer grid. The surface Euler equations are then solved with this pressure field and a prescribed set of initial and boundary conditions to yield the velocities along the two surface coordinate directions. Results for typical wing and fuselage geometries are presented. The smoothness and accuracy of the edge conditions obtained are found to be superior to the conventional interpolation procedures.

  4. Retrieving the Molecular Composition of Planet-Forming Material: An Accurate Non-LTE Radiative Transfer Code for JWST

    NASA Astrophysics Data System (ADS)

    Pontoppidan, Klaus

    Based on the observed distributions of exoplanets and dynamical models of their evolution, the primary planet-forming regions of protoplanetary disks are thought to span distances of 1-20 AU from typical stars. A key observational challenge of the next decade will be to understand the links between the formation of planets in protoplanetary disks and the chemical composition of exoplanets. Potentially habitable planets in particular are likely formed by solids growing within radii of a few AU, augmented by unknown contributions from volatiles formed at larger radii of 10-50 AU. The basic chemical composition of these inner disk regions is characterized by near- to far-infrared (2-200 micron) emission lines from molecular gas at temperatures of 50-1500 K. A critical step toward measuring the chemical composition of planet-forming regions is therefore to convert observed infrared molecular line fluxes, profiles and images to gas temperatures, densities and molecular abundances. However, current techniques typically employ approximate radiative transfer methods and assumptions of local thermodynamic equilibrium (LTE) to retrieve abundances, leading to uncertainties of orders of magnitude and inconclusive comparisons to chemical models. Ultimately, the scientific impact of the high quality spectroscopic data expected from the James Webb Space Telescope (JWST) will be limited by the availability of radiative transfer tools for infrared molecular lines. We propose to develop a numerically accurate, non-LTE 3D line radiative transfer code, needed to interpret mid-infrared molecular line observations of protoplanetary and debris disks in preparation for the James Webb Space Telescope (JWST). This will be accomplished by adding critical functionality to the existing Monte Carlo code LIME, which was originally developed to support (sub)millimeter interferometric observations. In contrast to existing infrared codes, LIME calculates the exact statistical balance of arbitrary

  5. Filter accuracy for the Lorenz 96 model: Fixed versus adaptive observation operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuart, Andrew M.; Shukla, Abhishek; Sanz-Alonso, Daniel

    In the context of filtering chaotic dynamical systems it is well-known that partial observations, if sufficiently informative, can be used to control the inherent uncertainty due to chaos. The purpose of this paper is to investigate, both theoretically and numerically, conditions on the observations of chaotic systems under which they can be accurately filtered. In particular, we highlight the advantage of adaptive observation operators over fixed ones. The Lorenz ’96 model is used to exemplify our findings. Here, we consider discrete-time and continuous-time observations in our theoretical developments. We prove that, for a fixed observation operator, the 3DVAR filter can recover the system state within a neighbourhood determined by the size of the observational noise. It is required that a sufficiently large proportion of the state vector is observed, and an explicit form for such a sufficient fixed observation operator is given. Numerical experiments, where the data is incorporated by use of the 3DVAR and extended Kalman filters, suggest that less informative fixed operators than given by our theory can still lead to accurate signal reconstruction. Adaptive observation operators are then studied numerically; we show that, for carefully chosen adaptive observation operators, the proportion of the state vector that needs to be observed is drastically smaller than with a fixed observation operator. Indeed, we show that the number of state coordinates that need to be observed may even be significantly smaller than the total number of positive Lyapunov exponents of the underlying system.
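
    As a minimal sketch of the setting discussed in this record, the code below integrates the Lorenz '96 model with a fourth-order Runge-Kutta scheme and applies a 3DVAR analysis step with a fixed observation operator that observes every other coordinate; the background covariance, noise level, and observed indices are illustrative choices, not the configuration used in the paper.

    ```python
    # Lorenz '96 with a simple 3DVAR filter and a fixed observation operator.
    # Assumptions: 40 variables, forcing F=8, every other coordinate observed,
    # identity-scaled background covariance B; all tuning values are illustrative.
    import numpy as np

    N, F, dt = 40, 8.0, 0.05

    def l96(x):
        # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def rk4(x, dt):
        k1 = l96(x); k2 = l96(x + 0.5 * dt * k1)
        k3 = l96(x + 0.5 * dt * k2); k4 = l96(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    rng = np.random.default_rng(0)
    obs_idx = np.arange(0, N, 2)                    # fixed observation operator H
    H = np.eye(N)[obs_idx]
    sigma_obs = 0.5
    B = np.eye(N)                                   # fixed background covariance
    R = sigma_obs**2 * np.eye(len(obs_idx))
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # constant 3DVAR gain

    truth = rng.standard_normal(N)
    est = rng.standard_normal(N)
    for step in range(2000):
        truth = rk4(truth, dt)
        est = rk4(est, dt)                          # forecast
        y = H @ truth + sigma_obs * rng.standard_normal(len(obs_idx))
        est = est + K @ (y - H @ est)               # 3DVAR analysis
    print("final RMSE:", np.sqrt(np.mean((est - truth) ** 2)))
    ```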

  6. Filter accuracy for the Lorenz 96 model: Fixed versus adaptive observation operators

    DOE PAGES

    Stuart, Andrew M.; Shukla, Abhishek; Sanz-Alonso, Daniel; ...

    2016-02-23

    In the context of filtering chaotic dynamical systems it is well-known that partial observations, if sufficiently informative, can be used to control the inherent uncertainty due to chaos. The purpose of this paper is to investigate, both theoretically and numerically, conditions on the observations of chaotic systems under which they can be accurately filtered. In particular, we highlight the advantage of adaptive observation operators over fixed ones. The Lorenz ’96 model is used to exemplify our findings. Here, we consider discrete-time and continuous-time observations in our theoretical developments. We prove that, for a fixed observation operator, the 3DVAR filter can recover the system state within a neighbourhood determined by the size of the observational noise. It is required that a sufficiently large proportion of the state vector is observed, and an explicit form for such a sufficient fixed observation operator is given. Numerical experiments, where the data is incorporated by use of the 3DVAR and extended Kalman filters, suggest that less informative fixed operators than given by our theory can still lead to accurate signal reconstruction. Adaptive observation operators are then studied numerically; we show that, for carefully chosen adaptive observation operators, the proportion of the state vector that needs to be observed is drastically smaller than with a fixed observation operator. Indeed, we show that the number of state coordinates that need to be observed may even be significantly smaller than the total number of positive Lyapunov exponents of the underlying system.

  7. Accurate line intensities of methane from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Nikitin, Andrei V.; Rey, Michael; Tyuterev, Vladimir G.

    2017-10-01

    In this work, we report first-principles theoretical predictions of methane spectral line intensities that are competitive with (and complementary to) the best laboratory measurements. A detailed comparison with the most accurate data shows that discrepancies in integrated polyad intensities are in the range of 0.4%-2.3%. This corresponds to the estimated best accuracy available from laboratory Fourier transform spectroscopy measurements of this quantity. For relatively isolated strong lines the individual intensity deviations are in the same range. A comparison with the most precise laser measurements of the multiplet intensities in the 2ν3 band gives agreement within the experimental error margins (about 1%). This is achieved for the first time for five-atom molecules. In the Supplementary Material we provide lists of theoretical intensities at 269 K for over 5000 of the strongest transitions in the range below 6166 cm-1. The advantage of the described method is that it offers the possibility to generate fully assigned, exhaustive line lists at various temperature conditions. Extensive calculations up to 12,000 cm-1, including high-T predictions, will be made freely available through the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which contains ab initio-based line lists and provides a user-friendly graphical interface for fast simulation of absorption cross-sections and radiance.

  8. Recent meteor observing activities in Japan

    NASA Astrophysics Data System (ADS)

    Yamamoto, M.

    2005-02-01

    The meteor train observation (METRO) campaign is described as an example of recent meteor observing activity in Japan. Other topics of meteor observing activities in Japan, including Ham-band radio meteor observation, the ``Japan Fireball Network'', the automatic video-capture software ``UFOCapture'', and the Astro-classroom programme are also briefly introduced.

  9. Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies

    PubMed Central

    Zhang, Yu; Liu, Jun S.

    2011-01-01

    Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNPs) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in scope. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can easily be adapted to estimate the false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
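
    The Poisson de-clumping approximation itself is not specified in the abstract. The sketch below only illustrates the two baselines it is compared against, Bonferroni correction and a permutation-based family-wise adjustment built from the distribution of the minimum p-value, on simulated genotypes with LD-like correlation; all simulation settings are invented for illustration.

    ```python
    # Baseline multiple-testing adjustments for correlated SNPs (illustration only;
    # this is NOT the Poisson de-clumping method described in the record).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, m, rho = 500, 200, 0.8                     # samples, SNPs, LD-like correlation

    def simulate_genotype_scores():
        z = rng.standard_normal((n, m))
        for j in range(1, m):                     # AR(1)-style correlation mimicking LD
            z[:, j] = rho * z[:, j - 1] + np.sqrt(1 - rho**2) * z[:, j]
        return z

    def min_p(geno, pheno):
        pvals = [stats.pearsonr(geno[:, j], pheno)[1] for j in range(m)]
        return min(pvals)

    geno = simulate_genotype_scores()
    pheno = rng.standard_normal(n)                # null phenotype
    p_obs = min_p(geno, pheno)

    null_min_p = np.array([min_p(geno, rng.permutation(pheno)) for _ in range(200)])
    print("Bonferroni-adjusted:", min(1.0, p_obs * m))
    print("Permutation-adjusted:", (null_min_p <= p_obs).mean())
    ```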

  10. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.

  11. Can cancer researchers accurately judge whether preclinical reports will reproduce?

    PubMed Central

    Mandel, David R.; Kimmelman, Jonathan

    2017-01-01

    There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important to determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology (RP:CB) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75% probability of replicating the statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the RP:CB replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies. PMID:28662052

  12. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low-resolution (LR) images. Existing techniques usually rely on accurate detection of the end points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in an LR image, which leads to a poor estimate of the PSF of the lens that took the LR image. To estimate the PSF precisely, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least-squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then used to extract straight lines of all orientations. Estimating the 1-D PSF kernel from straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is evaluated on both natural and synthetic images. Experimental results show that it outperforms the state of the art and does not rely on accurate edge detection.
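
    A minimal sketch of the front end described above: Canny edge detection followed by Hough line extraction, then a rough 1-D blur-kernel estimate from the intensity profile sampled across one detected line. The profile sampling, normalization, and file name are illustrative assumptions, not the authors' exact 1-D kernel estimator or their RANSAC-based 2-D step.

    ```python
    # Edge detection + Hough lines, then a rough 1-D blur kernel estimate
    # from the intensity profile across one detected line (illustrative only).
    import cv2
    import numpy as np

    img = cv2.imread("low_res.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)

    if lines is not None:
        rho, theta = lines[0][0]                       # strongest detected line
        n = np.array([np.cos(theta), np.sin(theta)])   # unit normal to the line
        x0 = rho * n                                   # a point on the line
        offsets = np.arange(-10, 11)                   # sample across the edge
        pts = (x0[None, :] + offsets[:, None] * n[None, :]).round().astype(int)
        pts[:, 0] = np.clip(pts[:, 0], 0, img.shape[1] - 1)
        pts[:, 1] = np.clip(pts[:, 1], 0, img.shape[0] - 1)
        esf = img[pts[:, 1], pts[:, 0]].astype(float)  # edge spread function samples
        lsf = np.gradient(esf)                         # line spread ~ 1-D blur kernel
        lsf /= np.abs(lsf).sum() + 1e-12
        print("1-D kernel estimate:", np.round(lsf, 3))
    ```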

  13. Physically Accurate Soil Freeze-Thaw Processes in a Global Land Surface Scheme

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Haverd, Vanessa

    2018-01-01

    The model Soil-Litter-Iso (SLI) calculates coupled heat and water transport in soil. It was recently implemented into the Australian land surface model CABLE, which is the land component of the Australian Community Climate and Earth System Simulator (ACCESS). Here we extended SLI to include accurate freeze-thaw processes in the soil and snow. SLI thus provides an implicit solution of the energy and water balances of soil and snow, both as a standalone model and within CABLE. The enhanced SLI was tested extensively against theoretical formulations, laboratory experiments, field data, and satellite retrievals. The model performed well for all experiments across wide-ranging temporal and spatial scales. SLI does, however, melt snow faster at the end of the cold season than observed, because the implicit, coupled solution of energy and water provides no subgrid variability within SLI. The combined CABLE-SLI shows very realistic dynamics and extent of permafrost in the Northern Hemisphere. It also illustrated, however, the limits of possible comparisons between large-scale land surface models and local permafrost observations. CABLE-SLI exhibits the same patterns of snow depth and snow water equivalent in the Northern Hemisphere as satellite-derived observations, but quantitative comparisons depend largely on the given meteorological input fields. Further extension of CABLE-SLI with depth-dependent soil carbon will allow realistic projections of the development of permafrost and frozen carbon stocks in a changing climate.

  14. JWST NIRCam Time Series Observations

    NASA Technical Reports Server (NTRS)

    Greene, Tom; Schlawin, E.

    2017-01-01

    We explain how to make time-series observations with the Near-Infrared camera (NIRCam) science instrument of the James Webb Space Telescope. Both photometric and spectroscopic observations are described. We present the basic capabilities and performance of NIRCam and show examples of how to set its observing parameters using the Space Telescope Science Institute's Astronomer's Proposal Tool (APT).

  15. Do state-of-the-art CMIP5 ESMs accurately represent observed vegetation-rainfall feedbacks? Focus on the Sahel

    NASA Astrophysics Data System (ADS)

    Notaro, M.; Wang, F.; Yu, Y.; Mao, J.; Shi, X.; Wei, Y.

    2017-12-01

    The semi-arid Sahel ecoregion is an established hotspot of land-atmosphere coupling. Ocean-land-atmosphere interactions received considerable attention by modeling studies in response to the devastating 1970s-90s Sahel drought, which models suggest was driven by sea-surface temperature (SST) anomalies and amplified by local vegetation-atmosphere feedbacks. Vegetation affects the atmosphere through biophysical feedbacks by altering the albedo, roughness, and transpiration and thereby modifying exchanges of energy, momentum, and moisture with the atmosphere. The current understanding of these potentially competing processes is primarily based on modeling studies, with biophysical feedbacks serving as a key uncertainty source in regional climate change projections among Earth System Models (ESMs). In order to reduce this uncertainty, it is critical to rigorously evaluate the representation of vegetation feedbacks in ESMs against an observational benchmark in order to diagnose systematic biases and their sources. However, it is challenging to successfully isolate vegetation's feedbacks on the atmosphere, since the atmospheric control on vegetation growth dominates the atmospheric feedback response to vegetation anomalies and the atmosphere is simultaneously influenced by oceanic and terrestrial anomalies. In response to this challenge, a model-validated multivariate statistical method, Stepwise Generalized Equilibrium Feedback Assessment (SGEFA), is developed, which extracts the forcing of a slowly-evolving environmental variable [e.g. SST or leaf area index (LAI)] on the rapidly-evolving atmosphere. By applying SGEFA to observational and remotely-sensed data, an observational benchmark is established for Sahel vegetation feedbacks. In this work, the simulated responses in key atmospheric variables, including evapotranspiration, albedo, wind speed, vertical motion, temperature, stability, and rainfall, to Sahel LAI anomalies are statistically assessed in Coupled Model
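
    The stepwise SGEFA procedure is not detailed in the abstract; the sketch below shows only the basic lagged-covariance (GEFA-style) feedback estimator that such analyses build on, regressing the fast atmospheric field at a lag longer than the atmospheric memory onto the slow forcing field, here with synthetic LAI and rainfall series; the lag, persistence, and imposed feedback are illustrative assumptions.

    ```python
    # Basic lagged-covariance feedback estimate (GEFA-style), illustrative only.
    # The feedback B solves: Cov(y(t+tau), x(t)) = B * Var(x(t)),
    # with x the slow forcing (e.g. LAI anomalies) and y the fast response (rainfall).
    import numpy as np

    rng = np.random.default_rng(2)
    T, tau, true_B = 600, 1, 0.4                  # months, lag, imposed feedback

    lai = np.zeros(T)
    rain = np.zeros(T)
    for t in range(1, T):
        lai[t] = 0.9 * lai[t - 1] + rng.standard_normal()           # slow, persistent
        rain[t] = true_B * lai[t - 1] + 2.0 * rng.standard_normal() # fast + weather noise

    x, y = lai[:-tau], rain[tau:]
    B_hat = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    print("estimated feedback:", round(B_hat, 3), " imposed:", true_B)
    ```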

  16. An adaptive tracking observer for failure-detection systems

    NASA Technical Reports Server (NTRS)

    Sidar, M.

    1982-01-01

    The design of adaptive observers for linear multi-input, multi-output systems with constant or time-varying parameters is considered. It is shown that, in order to keep the false-alarm rate (FAR) of the observer (or Kalman filter) under a specified value, an acceptably close match between the observer (or KF) model and the system parameters is necessary. An adaptive observer algorithm is introduced to maintain the desired system-observer model matching, despite initial mismatch and/or system parameter variations. Only a properly designed adaptive observer is able to detect abrupt changes in the system (actuator or sensor failures, etc.) with adequate reliability and FAR. Conditions for convergence of the adaptive process were obtained, leading to a simple adaptive law (algorithm) that allows an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors and accurate, fast parameter identification, in both deterministic and stochastic cases.
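
    A minimal sketch of the underlying idea, assuming a scalar first-order plant with full output measurement: a Luenberger-type observer whose model parameter is adapted by a gradient law driven by the output error, so that the observer model matches the plant. This is an illustration of observer-model matching, not the multi-input, multi-output algorithm of the report.

    ```python
    # Gradient-type adaptive observer for a scalar system x' = a*x + b*u, y = x.
    # The observer adapts its estimate of 'a' so its model matches the plant.
    # All gains and plant parameters are illustrative assumptions.
    import numpy as np

    dt, steps = 0.001, 20000
    a_true, b = -2.0, 1.0          # plant ('a' unknown to the observer)
    L, gamma = 5.0, 50.0           # observer gain and adaptation gain

    x, x_hat, a_hat = 1.0, 0.0, 0.0
    for k in range(steps):
        u = np.sin(0.01 * k)                           # persistently exciting input
        e = x - x_hat                                  # output error (y = x)
        x_new = x + dt * (a_true * x + b * u)          # plant (forward Euler)
        x_hat_new = x_hat + dt * (a_hat * x_hat + b * u + L * e)   # observer
        a_hat += dt * (gamma * e * x_hat)              # gradient adaptive law
        x, x_hat = x_new, x_hat_new
    print("estimated a:", round(a_hat, 3), " true a:", a_true)
    ```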

  17. Inflatable bladder provides accurate calibration of pressure switch

    NASA Technical Reports Server (NTRS)

    Smith, N. J.

    1965-01-01

    Calibration of a pressure switch is accurately checked by a thin-walled circular bladder. It is placed in the pressure switch and applies force to the switch diaphragm when expanded by an external pressure source. The disturbance to the normal operation of the switch is minimal.

  18. A French observational study describing the use of human polyvalent immunoglobulins in hematological malignancy-associated secondary immunodeficiency.

    PubMed

    Benbrahim, Omar; Viallard, Jean-François; Choquet, Sylvain; Royer, Bruno; Bauduer, Frédéric; Decaux, Olivier; Crave, Jean-Charles; Fardini, Yann; Clerson, Pierre; Levy, Vincent

    2018-04-12

    The aim was to describe the characteristics of patients suffering from secondary immunodeficiency (SID) associated with hematological malignancies (HM) who started immunoglobulin replacement therapy (IgRT), physicians' expectations regarding IgRT, and IgRT modalities. This was a non-interventional, prospective French cross-sectional study. The analysis included 231 patients (66±12 years old) suffering from multiple myeloma (MM) (N=64), chronic lymphoid leukemia (CLL) (N=84), aggressive non-Hodgkin B-cell lymphoma (aNHL) (N=32), indolent NHL (N=39), acute leukemia (N=6), and Hodgkin disease (N=6). Of the HM, 47% were currently treated and 42% were relapsing or refractory; 23% of patients had received an autologous hematopoietic stem-cell transplant and 5% an allograft. Serum immunoglobulin trough levels, available in 195 individuals, were less than 5 g/L in 68.7% of cases. Most patients had a history of recurrent infections. The immunoglobulin dose was about 400 mg/kg/month. Half of the patients started with subcutaneous infusion. When starting IgRT, physicians mainly expected to prevent severe and moderate infections. They also anticipated improvements in quality of life and survival, which goes beyond evidence-based medicine. NHL is a frequent condition motivating IgRT besides the well-recognized indications. Physicians mainly based the decision to start IgRT on hypogammaglobulinemia and recurrence of infections but, irrespective of current recommendations, were also prepared to start IgRT prophylactically even in the absence of a history of infections. This article is protected by copyright. All rights reserved.

  19. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    PubMed

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  20. Standardizing a simpler, more sensitive and accurate tail bleeding assay in mice

    PubMed Central

    Liu, Yang; Jennings, Nicole L; Dart, Anthony M; Du, Xiao-Jun

    2012-01-01

    AIM: To optimize the experimental protocols for a simple, sensitive and accurate bleeding assay. METHODS: The bleeding assay was performed in mice by tail tip amputation, immersing the tail in saline at 37 °C, continuously monitoring bleeding patterns, and measuring bleeding volume from changes in body weight. The sensitivity and extent of variation of bleeding time and bleeding volume were compared in mice treated with the P2Y12 receptor inhibitor prasugrel at various doses or in mice deficient in FcRγ, a signaling protein of the glycoprotein VI receptor. RESULTS: We described details of the bleeding assay with the aim of standardizing this commonly used assay. The bleeding assay detailed here was simple to operate and permitted continuous monitoring of the bleeding pattern and detection of re-bleeding. We also reported a simple and accurate way of quantifying bleeding volume from changes in body weight, which correlated well with a chemical assay of hemoglobin levels (r2 = 0.990, P < 0.0001). We determined by tail bleeding assay the dose-effect relation of the anti-platelet drug prasugrel from 0.015 to 5 mg/kg. Our results showed that the correlation between bleeding time and volume was unsatisfactory and that, compared with bleeding time, bleeding volume was more sensitive in detecting a partial inhibition of platelet haemostatic activity (P < 0.01). Similarly, in mice with genetic disruption of FcRγ as a signaling molecule of the glycoprotein VI receptor leading to platelet dysfunction, both increased bleeding volume and a repeated bleeding pattern defined the phenotype of the knockout mice better than a prolonged bleeding time did. CONCLUSION: Determination of bleeding pattern and bleeding volume, in addition to bleeding time, improved the sensitivity and accuracy of this assay, particularly when platelet function is partially inhibited. PMID:24520531