Science.gov

Sample records for accurately describe observables

  1. Parameters Describing Earth Observing Remote Sensing Systems

    NASA Technical Reports Server (NTRS)

    Zanoni, Vicki; Ryan, Robert E.; Pagnutti, Mary; Davis, Bruce; Markham, Brian; Storey, Jim

    2003-01-01

The Earth science community needs to generate consistent and standard definitions for spatial, spectral, radiometric, and geometric properties describing passive electro-optical Earth observing sensors and their products. The parameters used to describe sensors and those used to describe their products are often confused. In some cases, parameters for a sensor and for its products are identical; in other cases, these parameters vary widely. Sensor parameters are bound by the fundamental performance of a system, while product parameters describe what is available to the end user. Products are often resampled, edge sharpened, pan-sharpened, or compressed, and can differ drastically from the intrinsic data acquired by the sensor. Because detailed sensor performance information may not be readily available to an international science community, standardization of product parameters is of primary importance. Spatial product parameters described include Modulation Transfer Function (MTF), point spread function, line spread function, edge response, stray light, edge sharpening, aliasing, ringing, and compression effects. Spectral product parameters discussed include full width half maximum, ripple, edge slope, and out-of-band rejection. Radiometric product properties discussed include relative and absolute radiometry, noise equivalent spectral radiance, noise equivalent temperature difference, and signal-to-noise ratio. Geometric product properties discussed include geopositional accuracy expressed as CE90, LE90, and root mean square error. Correlated properties discussed include such parameters as band-to-band registration, which is both a spectral and a spatial property. In addition, the proliferation of staring and pushbroom sensor architectures requires new parameters to describe artifacts that are different from traditional cross-track system artifacts. A better understanding of how various system parameters affect product performance is also needed to better ascertain the
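One relation among the spatial parameters listed above can be made concrete: for a linear, shift-invariant imaging system, the MTF is the magnitude of the Fourier transform of the normalized line spread function (LSF). A minimal sketch, using an assumed Gaussian LSF purely for illustration:

```python
import numpy as np

# Sketch: MTF as the magnitude of the Fourier transform of the normalized
# line spread function. The Gaussian LSF below is an assumption for
# illustration, not a real sensor measurement.

x = np.linspace(-10, 10, 201)          # detector-plane coordinate
sigma = 1.5                            # assumed LSF width
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum()                       # normalize to unit area

mtf = np.abs(np.fft.rfft(lsf))        # for a normalized LSF, MTF(0) = 1
print(round(mtf[0], 6))
```

A broader LSF (larger sigma) gives a faster MTF roll-off, i.e. poorer spatial resolution, which is why both quantities appear in the product-parameter list above.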

  2. The variance needed to accurately describe jump height from vertical ground reaction force data.

    PubMed

    Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran

    2014-12-01

In functional principal component analysis (fPCA) a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies, and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to accurately describe jump height using vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds. PMID:25010220
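The threshold-selection step described above can be sketched numerically: given a matrix of curves, find the smallest number of principal components whose cumulative explained variance reaches the threshold. A minimal sketch with synthetic data standing in for vGRF curves (the function name and data are illustrative, not the study's pipeline):

```python
import numpy as np

# Sketch of choosing the number of retained principal components for a
# given explained-variance threshold. Synthetic data only.

def n_components_for_threshold(X, threshold):
    """Smallest number of PCs whose cumulative explained variance >= threshold."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)   # singular values
    var = s**2 / np.sum(s**2)                 # per-component variance fraction
    return int(np.searchsorted(np.cumsum(var), threshold) + 1)

rng = np.random.default_rng(0)
# Columns with decreasing variance mimic a smooth functional data set.
X = rng.normal(size=(50, 20)) @ np.diag(np.linspace(2.0, 0.1, 20))
print(n_components_for_threshold(X, 0.99))
```

Raising the threshold (e.g. from 99% to 99.9%) can only keep the component count the same or increase it, which matches the 6-11 component range reported above.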

  3. A stochastic model of kinetochore–microtubule attachment accurately describes fission yeast chromosome segregation

    PubMed Central

    Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline

    2012-01-01

    In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B–like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B–like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy. PMID:22412019

  4. Generalized Stoner-Wohlfarth model accurately describing the switching processes in pseudo-single ferromagnetic particles

    SciTech Connect

Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru

    2013-12-14

We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
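For context, the classical Stoner-Wohlfarth switching field that this model generalizes follows the well-known astroid curve. In reduced units h = H/H_K, with psi the angle between applied field and easy axis, a minimal sketch of the textbook baseline (not the generalized model of the abstract):

```python
import math

# Classical Stoner-Wohlfarth switching field (astroid), reduced units:
# h_sw(psi) = (cos(psi)**(2/3) + sin(psi)**(2/3))**(-3/2).
# The paper's generalized model modifies the anisotropy energy; this is
# only the standard single-domain baseline.

def sw_switching_field(psi):
    c, s = abs(math.cos(psi)), abs(math.sin(psi))
    return (c**(2/3) + s**(2/3))**(-1.5)

print(round(sw_switching_field(math.pi / 4), 4))  # minimum of 0.5 at 45 degrees
```

The astroid's minimum at 45 degrees and value 1 along the easy and hard axes are the angular dependencies that real, non-single-domain particles deviate from, motivating the generalized anisotropy energy.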

  5. Accurate mathematical models to describe the lactation curve of Lacaune dairy sheep under intensive management.

    PubMed

    Elvira, L; Hernandez, F; Cuesta, P; Cano, S; Gonzalez-Martin, J-V; Astiz, S

    2013-06-01

Although the intensive production system of Lacaune dairy sheep is the only profitable method for producers outside of the French Roquefort area, little is known about this type of system. This study evaluated yield records of 3677 Lacaune sheep under intensive management between 2005 and 2010 in order to describe the lactation curve of this breed and to investigate the suitability of different mathematical functions for modeling this curve. A total of 7873 complete lactations during a 40-week lactation period, corresponding to 201,281 weekly yield records, were used. First, five mathematical functions were evaluated on the basis of the residual mean square, determination coefficient, Durbin-Watson and Runs Test values. The two best models were found to be Pollott Additive and fractional polynomial (FP). In the second part of the study, the milk yield, peak milk yield, day of peak and persistency of the lactations were calculated with the Pollott Additive and FP models and compared with the real data. The results indicate that both models gave an extremely accurate fit to Lacaune lactation curves for predicting milk yields (P = 0.871), with the FP model being the best choice to provide a good fit to an extensive amount of real data, and being applicable on farm without specific statistical software. On the other hand, the interpretation of the parameters of the Pollott Additive function helps in understanding the biology of the udder of the Lacaune sheep. The characteristics of the Lacaune lactation curve and milk yield are affected by lactation number and length. The lactation curves obtained in the present study allow the early identification of ewes with low milk yield potential, which will help to optimize farm profitability. PMID:23257242

  6. Towards a scalable and accurate quantum approach for describing vibrations of molecule–metal interfaces

    PubMed Central

    Madebene, Bruno; Ulusoy, Inga; Mancera, Luis; Scribano, Yohann; Chulkov, Sergey

    2011-01-01

We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate, and can be easily deployed on computational grids to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters. PMID:22003450

  7. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    SciTech Connect

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-28

The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1-, 2-, and 3-site CG models for heptane, as well as 1- and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.

  8. Describing Comprehension: Teachers' Observations of Students' Reading Comprehension

    ERIC Educational Resources Information Center

    Vander Does, Susan Lubow

    2012-01-01

    Teachers' observations of student performance in reading are abundant and insightful but often remain internal and unarticulated. As a result, such observations are an underutilized and undervalued source of data. Given the gaps in knowledge about students' reading comprehension that exist in formal assessments, the frequent calls for teachers'…

  9. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    SciTech Connect

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.

    2013-08-01

We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel^-1 resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10^6 individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees yields ridge characteristics, which do not correspond directly to the underlying mode characteristics. We used sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe this modeling and its validation in detail. The modeling has been extensively reviewed and refined by including an iterative process to improve its input parameters to better match the observations. The contribution of the leakage matrix to the accuracy of the procedure has also been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. We present and discuss
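The multi-taper estimation step can be illustrated with SciPy's Slepian (DPSS) tapers: averaging periodograms over several orthogonal tapers reduces realization noise at the cost of spectral resolution, exactly the trade-off the abstract invokes for blended ridges. A sketch on synthetic data (not MDI Dopplergrams; all parameter values are illustrative):

```python
import numpy as np
from scipy.signal.windows import dpss

# Sketch of a multi-taper power-spectrum estimate: average the
# periodograms obtained with several orthogonal DPSS (Slepian) tapers.

def multitaper_psd(x, nw=4, k=7):
    n = len(x)
    tapers = dpss(n, nw, k)                      # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1))**2
    return spectra.mean(axis=0)                  # average over tapers

rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(scale=0.5, size=t.size)
psd = multitaper_psd(x)
# The sinusoid at 0.1 cycles/sample appears as a broadened peak around
# rfft bin 102 (= 0.1 * 1024), smeared over roughly +/- nw bins.
print(int(np.argmax(psd)))
```

Using more tapers (larger k, with nw scaled accordingly) lowers the variance of each spectral estimate but broadens lines further, which is acceptable here precisely because high-degree modes already blend into ridges.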

  10. Can the Dupuit-Thiem equation accurately describe the flow pattern induced by injection in a laboratory scale aquifer-well system?

    NASA Astrophysics Data System (ADS)

    Bonilla, Jose; Kalwa, Fritz; Händel, Falk; Binder, Martin; Stefan, Catalin

    2016-04-01

The Dupuit-Thiem equation is normally used to assess flow towards a pumping well in unconfined aquifers under steady-state conditions. Its formulation assumes that flow towards the well is laminar, radial and horizontal. It is well known that these assumptions are not met in the vicinity of the well; some authors restrict the application of the equation to radii larger than 1.5 times the aquifer thickness. A laboratory scale aquifer-well system (LSAW) was implemented to study aquifer recharge through wells. The LSAW consists of a 1.0 m-diameter tank with a height of 1.1 m, filled with sand, with a screened well of 0.025 m diameter in the center. A regulated outflow system establishes a controlled water level at the tank wall to simulate various aquifer thicknesses. The pressure head at the bottom of the tank can be measured every 0.1 m along one axis between the well and the tank wall to assess the flow profile. In this study, the accuracy of the Dupuit-Thiem equation in predicting the pressure head was evaluated as a simple and quick analytical description of the flow pattern for different injection rates in the LSAW. To this end, combinations of different injection rates and aquifer thicknesses were simulated in the LSAW. Contrary to what was expected (significant differences between the measured and calculated pressure heads in the well), the absolute difference between calculated and measured pressure heads is less than 10%. Moreover, the largest differences are observed not in the well itself but in its near proximity, at a radius of 0.1 m. The results further show that the difference between calculated and measured pressure heads tends to decrease with higher flow rates. Despite its limitations (the assumption of laminar and horizontal flow throughout the whole aquifer), the Dupuit-Thiem equation is considered to accurately represent the flow system in the LSAW.
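For reference, the Dupuit-Thiem relation for steady radial flow in an unconfined aquifer is Q = πK(h2² − h1²)/ln(r2/r1); for injection the head rises toward the well, so the sign of the logarithmic term flips relative to pumping. A minimal sketch with hypothetical values loosely matching the tank dimensions (these are assumptions, not the study's measured data):

```python
import math

# Dupuit-Thiem head profile for injection in an unconfined aquifer:
# h(r)**2 = h_R**2 + (Q / (pi * K)) * ln(R / r).
# All numerical values below are illustrative assumptions.

def head_at_radius(Q, K, h_R, R, r):
    """Head h(r) between well and outer boundary for injection rate Q.

    Q   injection rate (m^3/s), K hydraulic conductivity (m/s),
    h_R head at the outer boundary radius R (m), r evaluation radius (m).
    """
    return math.sqrt(h_R**2 + (Q / (math.pi * K)) * math.log(R / r))

# Hypothetical case: well radius 0.0125 m, boundary at 0.5 m.
h = head_at_radius(Q=1e-4, K=1e-3, h_R=0.8, R=0.5, r=0.0125)
print(round(h, 3))  # head rises toward the well, here to about 0.87 m
```

Evaluating this profile at the sensor radii (every 0.1 m) is the kind of quick analytical prediction the study compares against the measured pressure heads.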

  11. A geometric sequence that accurately describes allowed multiple conductance levels of ion channels: the "three-halves (3/2) rule".

    PubMed Central

    Pollard, J R; Arispe, N; Rojas, E; Pollard, H B

    1994-01-01

Ion channels can express multiple conductance levels that are not integer multiples of some unitary conductance, and that interconvert among one another. We report here that for 26 different types of multiple conductance channels, all allowed conductance levels can be calculated accurately using the geometric sequence g_n = g_0 (3/2)^n, where g_n is a conductance level and n is an integer >= 0. We refer to this relationship as the "3/2 Rule," because the value of any term in the sequence of conductances (g_n) can be calculated as 3/2 times the value of the preceding term (g_(n-1)). The experimentally determined average value for "3/2" is 1.491 +/- 0.095 (sample size = 37, average +/- SD). We also verify the choice of a 3/2 ratio on the basis of error analysis over the range of ratio values between 1.1 and 2.0. In an independent analysis using Marquardt's algorithm, we further verified the 3/2 ratio and the assignment of specific conductances to specific terms in the geometric sequence. Thus, irrespective of the open time probability, the allowed conductance levels of these channels can be described accurately to within approximately 6%. We anticipate that the "3/2 Rule" will simplify description of multiple conductance channels in a wide variety of biological systems and provide an organizing principle for channel heterogeneity and differential effects of channel blockers. PMID:7524712
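The geometric sequence of the "3/2 Rule" is simple to compute directly. A minimal sketch, with a hypothetical unitary conductance (the value 8 pS is illustrative, not from the paper's data):

```python
# Sketch of the "3/2 Rule": allowed conductance levels form the geometric
# sequence g_n = g0 * (3/2)**n. The starting conductance is a hypothetical
# example value.

def conductance_levels(g0, n_levels, ratio=1.5):
    """Return the first n_levels allowed conductances g_n = g0 * ratio**n."""
    return [g0 * ratio**n for n in range(n_levels)]

levels = conductance_levels(8.0, 4)   # hypothetical unitary conductance 8 pS
print(levels)  # [8.0, 12.0, 18.0, 27.0]
```

Each term is 3/2 times the preceding one, matching the recurrence g_n = (3/2) g_(n-1) stated in the abstract; fitting the ratio to data (reported as 1.491 +/- 0.095) would replace the fixed 1.5 here.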

  12. BUFR2NetCDF - Converting Observational Data to a Self Describing Archive Format

    NASA Astrophysics Data System (ADS)

    Manross, K.; Caron, J. L.

    2013-12-01

The majority of observational data collected and distributed via the World Meteorological Organization (WMO) Global Telecommunication System (GTS) are transmitted in the BUFR data format. There are many good reasons for this, such as the ability to store nearly any observational data type, flexibility for missing/unused parameters, and file compressibility; in other words, BUFR is a very good transport container. BUFR is a table-driven data format, meaning that a separate table is maintained for encoding/decoding the data stored within. The WMO, as well as many other operational data centers such as the National Oceanic and Atmospheric Administration's (NOAA) National Center for Environmental Prediction (NCEP), maintain the metadata tables for storing and retrieving data within a BUFR file. Often the table data are not embedded with the BUFR files (though NCEP does embed the tables), and it can be challenging for the user to extract the table metadata, or to locate the proper version of the table to obtain the needed metadata for a BUFR file. More generally, non-expert users find BUFR a difficult format to parse, use, and understand correctly. This presentation introduces a tool for converting BUFR data files to the self-describing NetCDF format. Of note is that the resulting NetCDF file incorporates the new Discrete Sampling Geometries of the Climate and Forecast (CF) metadata convention. This will provide users of archived observational data greater ease of use, assurance of data and metadata integrity for their research, and improved provenance.

  13. Provenance of things - describing geochemistry observation workflows using PROV-O

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.; Car, N. J.

    2015-12-01

Geochemistry observations typically follow a complex preparation process after sample retrieval from the field. Descriptions of these processes are required to allow readers and other data users to assess the reliability of the data produced, and to ensure reproducibility. While laboratory notebooks are used for private record-keeping, and laboratory information systems (LIMS) on a facility basis, these data are not generally published, and there are no standard formats for transfer. And while there is some standardization of workflows, it is often scoped to a lab or an instrument. New procedures and workflows are being developed continually; in fact this is a key expectation in the development of the science. Thus formalization of the description of sample preparation and observations must be both rigorous and flexible. We have been exploring the use of the W3C Provenance model (PROV) to capture complete traces, including both the real-world things and the data generated. PROV has a core data model that distinguishes between the entities, agents and activities involved in producing a piece of data or thing in the world. While the design of PROV was primarily conditioned by use cases concerning information resources, its application is not restricted to the production of digital or information assets; PROV allows a comprehensive trace of predecessor entities and transformations at any level of detail. In this paper we demonstrate the use of PROV for describing specimens managed for scientific observations. Two examples are considered: a geological sample which undergoes a typical preparation process for measurements of the concentration of a particular chemical substance, and the collection, taxonomic classification and eventual publication of an insect specimen. PROV enables the material that goes into the instrument to be linked back to the sample retrieved in the field. This complements the IGSN system, which focuses on registration of field sample identity to support the

  14. The significance of some observations on African ocular onchocerciasis described by Jean Hissette (1888-1965).

    PubMed

    Kluxen, G; Hoerauf, A

    2008-01-01

One of the most significant contributions to tropical medicine and ophthalmology was made by Jean Hissette: African ocular onchocerciasis. During his extensive investigations in the Babindi country, he found numerous adults with river blindness. Their eye disease was caused by the filaria Onchocerca volvulus Leuckart. He noticed the signs of interstitial keratitis and band keratopathy, faint iritis or iridocyclitis, posterior synechiae and often a downward distortion of the pupil. He was the first to describe chorioretinal scarring of the fundus, which became known as the Hissette-Ridley fundus. People reported their entoptic phenomena to him, which he unequivocally interpreted as images of microfilariae in the patient's own eye. During his stay in Belgium in 1932, he elucidated the pathogenesis of blindness, since he was able to provide histological proof of the presence of microfilariae in various ocular tissues of an enucleated eye from a patient living near the Sankuru river. Like other serious health impairments, the severe inflammatory lesions in the eye occurred only after the microfilariae had died. Hence he realized that dying microfilariae play a key role in the mechanisms leading to blindness. Hissette's precise descriptions were the logical fruit of his outstanding observational abilities and enabled him, as a man of great intuition, to speculate about causal relationships. He evidently benefited from the fact that he took the native Africans seriously and asked them their opinion. As early as 1933, his friend and teacher Dr. De Mets in Antwerp wrote of Hissette's discovery in the Belgian Congo: "This study is of exceptional value to specialists, which is not only a tribute to its author, but to our common native country (Belgium)." PMID:18546927

  15. Incremental area under response curve more accurately describes the triglyceride response to an oral fat load in both healthy and type 2 diabetic subjects.

    PubMed

    Carstensen, Marius; Thomsen, Claus; Hermansen, Kjeld

    2003-08-01

    Elevation of postprandial triacylglycerol (TG)-rich plasma lipoproteins is considered potentially atherogenic. Type 2 diabetic patients have exaggerated postprandial TG compared with healthy subjects. Postprandial TG responses to oral fat loads are usually studied as the area under the TG curve. No consensus exists regarding the method of choice when calculating the TG response area. We evaluated the correlation between fasting TG and postprandial TG responses calculated by the trapezoid rule as total area under the curve (AUC) and incremental area under the curve (iAUC). Furthermore, we compared the AUC and iAUC to a 3-point calculation method. Ten healthy subjects and 47 type 2 diabetic patients ingested test meals consisting of an energy-free soup plus 80 g fat and 50 g carbohydrate. TG responses were measured in total plasma, in a chylomicron (CM)-rich fraction and in a CM-poor fraction. In healthy subjects the AUC, but not iAUC, correlated positively to fasting TG. In type 2 diabetic patients a strong correlation was found between fasting TG and AUC, whereas weak associations were found to the iAUCs. The iAUC was strongly correlated to the postprandial TG rise in both groups. The 3-point areas differed significantly from the trapezoid measurements in both healthy and type 2 diabetic subjects. In conclusion, in both healthy and type 2 diabetic subjects total AUC is highly correlated to fasting TG, whereas iAUC more accurately describes the TG response to an oral fat load. The 3-point test seems less suitable for the determination of postprandial response in both healthy and type 2 diabetic subjects. PMID:12898469
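The two area measures contrasted above differ only in baseline handling: iAUC subtracts the fasting value from each sample before applying the trapezoid rule. A minimal sketch with hypothetical TG values (illustrative units, not the study's measurements):

```python
# Sketch of total AUC vs incremental AUC (iAUC) by the trapezoid rule.
# iAUC integrates the curve after subtracting the fasting (baseline) value.

def trapezoid(t, y):
    """Trapezoid-rule area under y(t) sampled at times t."""
    return sum((y[i] + y[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(len(t) - 1))

def auc_and_iauc(t, tg):
    fasting = tg[0]
    total = trapezoid(t, tg)
    incremental = trapezoid(t, [v - fasting for v in tg])
    return total, incremental

t = [0, 2, 4, 6, 8]                    # hours after the fat load
tg = [1.2, 1.8, 2.4, 2.0, 1.5]         # hypothetical plasma TG (mmol/L)
print(auc_and_iauc(t, tg))
```

Because iAUC equals total AUC minus fasting TG times the observation window, a high fasting TG inflates total AUC without reflecting the postprandial rise, which is the abstract's argument for preferring iAUC.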

  16. Describing Profiles of Instructional Practice: A New Approach to Analyzing Classroom Observation Data

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Kieffer, Michael J.

    2015-01-01

    The authors outline the application of latent class analysis (LCA) to classroom observational instruments. LCA offers diagnostic information about teachers' instructional strengths and weaknesses, along with estimates of measurement error for individual teachers, while remaining relatively straightforward to implement and interpret. It is…

  17. Describing the observed cosmic neutrinos by interactions of nuclei with matter

    NASA Astrophysics Data System (ADS)

    Winter, Walter

    2014-11-01

    IceCube has observed neutrinos that are presumably of extra-Galactic origin. Since specific sources have not yet been identified, we discuss what could be learned from the conceptual point of view. We use a simple model for neutrino production from the interactions between nuclei and matter, and we focus on the description of the spectral shape and flavor composition observed by IceCube. Our main parameters are the spectral index, maximal energy, magnetic field, and composition of the accelerated nuclei. We show that a cutoff at PeV energies can be achieved by soft enough spectra, a cutoff of the primary energy, or strong enough magnetic fields. These options, however, are difficult to reconcile with the hypothesis that these neutrinos originate from the same sources as the ultrahigh-energy cosmic rays. We demonstrate that heavier nuclei accelerated in the sources may be a possible way out if the maximal energy scales appropriately with the mass number of the nuclei. In this scenario, neutrino observations can actually be used to test the ultrahigh-energy cosmic ray acceleration mechanism. We also emphasize the need for a volume upgrade of the IceCube detector for future precision physics, for which the flavor information becomes a statistically meaningful model discriminator as well as a qualitatively new ingredient.

  18. Applying an accurate spherical model to gamma-ray burst afterglow observations

    NASA Astrophysics Data System (ADS)

    Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.

    2013-05-01

    We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r-2. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.

  19. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    SciTech Connect

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-10-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
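A CHO statistic of the kind described can be sketched from sample images: project each image onto a small set of channels, then build the Hotelling template from the channel-space mean difference and covariance. Here the mean and covariance are estimated from synthetic data rather than derived from the MAP-reconstruction approximation used in the paper, and the channel matrix is a random stand-in:

```python
import numpy as np

# Sketch of a channelized Hotelling observer (CHO) on synthetic images.
# The channel matrix U and the images are illustrative stand-ins.

rng = np.random.default_rng(1)
n_pix, n_ch, n_img = 64, 4, 500
U = rng.normal(size=(n_pix, n_ch))           # stand-in channel matrix
signal = np.zeros(n_pix)
signal[30:34] = 1.0                          # known signal profile

bg = rng.normal(size=(n_img, n_pix))         # signal-absent images
sp = bg + signal                             # signal-present images

v0, v1 = bg @ U, sp @ U                      # channel outputs
dv = v1.mean(axis=0) - v0.mean(axis=0)       # mean channel difference
K = 0.5 * (np.cov(v0.T) + np.cov(v1.T))      # pooled channel covariance
w = np.linalg.solve(K, dv)                   # Hotelling template
snr2 = dv @ w                                # detectability SNR^2
print(snr2 > 0)
```

The paper's contribution is replacing the sample estimates of the mean and covariance with closed-form theoretical approximations for MAP reconstructions, which makes this computation cheap enough to sweep over reconstruction parameters.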

  20. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that delivers high-frequency stimulation to a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for a positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.

  1. Simultaneous auroral observations described in the historical records of China, Japan and Korea from ancient times to AD 1700

    NASA Astrophysics Data System (ADS)

    Willis, D. M.; Stephenson, F. R.

    2000-01-01

Early auroral observations recorded in various oriental histories are examined in order to search for examples of strictly simultaneous and indisputably independent observations of the aurora borealis from spatially separated sites in East Asia. In the period up to AD 1700, only five examples have been found of two or more oriental auroral observations from separate sites on the same night. These occurred during the nights of AD 1101 January 31, AD 1138 October 6, AD 1363 July 30, AD 1582 March 8 and AD 1653 March 2. The independent historical evidence describing observations of mid-latitude auroral displays at more than one site in East Asia on the same night provides virtually incontrovertible proof that auroral displays actually occurred on these five special occasions. This conclusion is corroborated by the good level of agreement between the detailed auroral descriptions recorded in the different oriental histories, which furnish essentially compatible information on both the colour (or colours) of each auroral display and its approximate position in the sky. In addition, the occurrence of auroral displays in Europe within two days of auroral displays in East Asia, on two (possibly three) out of these five special occasions, suggests that a substantial number of the mid-latitude auroral displays recorded in the oriental histories are associated with intense geomagnetic storms.

  2. How many of the observed neutrino events can be described by cosmic ray interactions in the Milky Way?

    NASA Astrophysics Data System (ADS)

    Joshi, Jagdish C.; Winter, Walter; Gupta, Nayantara

    2014-04-01

    Cosmic rays diffuse through the interstellar medium and interact with matter and radiation as long as they are trapped in the Galactic magnetic field. The IceCube experiment has detected some TeV-PeV neutrino events whose origin is as yet unknown. We study whether all or a fraction of these events can be described by the interactions of cosmic rays with matter. We consider the average target density needed to explain them for different halo sizes and shapes, the effect of the chemical composition of the cosmic rays, the impact of the directional information of the neutrino events, and the constraints from gamma-ray bounds and their direction. Our approach does not require knowledge of the cosmic ray escape time or injection. We find that, given all constraints, at most a fraction of about 0.1 of the observed neutrino events in IceCube can be described by cosmic ray interactions with matter. In addition, we demonstrate that the currently established chemical composition of the cosmic rays contradicts a peak of the neutrino spectrum at PeV energies.

  3. Can All Cosmological Observations Be Accurately Interpreted with a Unique Geometry?

    NASA Astrophysics Data System (ADS)

    Fleury, Pierre; Dupuy, Hélène; Uzan, Jean-Philippe

    2013-08-01

    The recent analysis of the Planck results reveals a tension between the best fits for (Ωm0, H0) derived from the cosmic microwave background or baryonic acoustic oscillations on the one hand, and the Hubble diagram on the other hand. These observations probe the Universe on very different scales since they involve light beams of very different angular sizes; hence, the tension between them may indicate that they should not be interpreted the same way. More precisely, this Letter questions the accuracy of using only the (perturbed) Friedmann-Lemaître geometry to interpret all the cosmological observations, regardless of their angular or spatial resolution. We show that using an inhomogeneous “Swiss-cheese” model to interpret the Hubble diagram allows us to reconcile the inferred value of Ωm0 with the Planck results. Such an approach does not require us to invoke new physics nor to violate the Copernican principle.

  4. Can all cosmological observations be accurately interpreted with a unique geometry?

    PubMed

    Fleury, Pierre; Dupuy, Hélène; Uzan, Jean-Philippe

    2013-08-30

    The recent analysis of the Planck results reveals a tension between the best fits for (Ω(m0), H(0)) derived from the cosmic microwave background or baryonic acoustic oscillations on the one hand, and the Hubble diagram on the other hand. These observations probe the Universe on very different scales since they involve light beams of very different angular sizes; hence, the tension between them may indicate that they should not be interpreted the same way. More precisely, this Letter questions the accuracy of using only the (perturbed) Friedmann-Lemaître geometry to interpret all the cosmological observations, regardless of their angular or spatial resolution. We show that using an inhomogeneous "Swiss-cheese" model to interpret the Hubble diagram allows us to reconcile the inferred value of Ω(m0) with the Planck results. Such an approach does not require us to invoke new physics nor to violate the Copernican principle. PMID:24033020

  5. Accurate stellar masses for SB2 components: Interferometric observations for Gaia validation

    NASA Astrophysics Data System (ADS)

    Halbwachs, J.-L.; Boffin, H. M. J.; Le Bouquin, J.-B.; Famaey, B.; Salomon, J.-B.; Arenou, F.; Pourbaix, D.; Anthonioz, F.; Grellmann, R.; Guieu, S.; Guillout, P.; Jorissen, A.; Kiefer, F.; Lebreton, Y.; Mazeh, T.; Nebot Gómez-Morán, A.; Sana, H.; Tal-Or, L.

    2015-12-01

    A sample of about 70 double-lined spectroscopic binaries (SB2) is being followed with radial velocity (RV) measurements in order to derive the masses of their components once the astrometric measurements of Gaia become available. A subset of 6 SB2 was observed interferometrically with VLTI/PIONIER, and the components were separated for each binary. The RV measurements already obtained were combined with the interferometric observations, and the masses of the components were derived. The accuracies of the 12 masses are presently between 0.4 and 7%, and they will be further improved. These masses will be used to validate the masses obtained from Gaia. In addition, the parallaxes derived from the combined visual+spectroscopic orbits are compared to those of Hipparcos, and a mass-luminosity relation is derived in the infrared H band.

  6. Extracting Accurate and Precise Topography from Lroc Narrow Angle Camera Stereo Observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Speyerer, E. J.; Robinson, M. S.; LROC Team

    2016-06-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that acquire meter-scale images. Stereo observations are acquired by imaging from two or more orbits, including at least one off-nadir slew. Digital terrain models (DTMs) generated from the stereo observations are controlled to Lunar Orbiter Laser Altimeter (LOLA) elevation profiles. With current processing methods, DTMs have absolute accuracies commensurate with the uncertainties of the LOLA profiles (~10 m horizontally and ~1 m vertically) and relative horizontal and vertical precisions better than the pixel scale of the DTMs (2 to 5 m). The NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. Overlapping DTMs of single stereo sets were also combined to form larger-area DTM mosaics, enabling detailed characterization of large geomorphic features and providing a key resource for future exploration planning. Currently, two percent of the lunar surface is imaged in NAC stereo, and continued acquisition of stereo observations will strengthen our knowledge of the Moon and of the geologic processes that occur on all the terrestrial planets.

  7. OBSERVING SIMULATED PROTOSTARS WITH OUTFLOWS: HOW ACCURATE ARE PROTOSTELLAR PROPERTIES INFERRED FROM SEDs?

    SciTech Connect

    Offner, Stella S. R.; Robitaille, Thomas P.; Hansen, Charles E.; Klein, Richard I.; McKee, Christopher F.

    2012-07-10

    The properties of unresolved protostars and their local environment are frequently inferred from spectral energy distributions (SEDs) using radiative transfer modeling. In this paper, we use synthetic observations of realistic star formation simulations to evaluate the accuracy of properties inferred from fitting model SEDs to observations. We use ORION, an adaptive mesh refinement (AMR) three-dimensional gravito-radiation-hydrodynamics code, to simulate low-mass star formation in a turbulent molecular cloud including the effects of protostellar outflows. To obtain the dust temperature distribution and SEDs of the forming protostars, we post-process the simulations using HYPERION, a state-of-the-art Monte Carlo radiative transfer code. We find that the ORION and HYPERION dust temperatures typically agree within a factor of two. We compare synthetic SEDs of embedded protostars for a range of evolutionary times, simulation resolutions, aperture sizes, and viewing angles. We demonstrate that complex, asymmetric gas morphology leads to a variety of classifications for individual objects as a function of viewing angle. We derive best-fit source parameters for each SED through comparison with a pre-computed grid of radiative transfer models. While the SED models correctly identify the evolutionary stage of the synthetic sources as embedded protostars, we show that the disk and stellar parameters can differ markedly from the simulated values, which is expected since the disk and central source are obscured by the protostellar envelope. Parameters such as the stellar accretion rate, stellar mass, and disk mass show better agreement, but can still deviate significantly, and the agreement may in some cases be artificially good due to the limited range of parameters in the set of model SEDs. Lack of correlation between the model and simulation properties in many individual instances cautions against overinterpreting properties inferred from SEDs for unresolved protostellar
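The grid-comparison step can be illustrated with a toy chi-square search over a pre-computed model grid (the actual fits use a large radiative transfer model grid; the three models, band fluxes, and labels below are invented for illustration):

```python
import numpy as np

def best_fit_from_grid(obs_flux, obs_err, grid_fluxes, grid_params):
    """Return the grid model minimizing chi^2 against an observed SED.
    grid_fluxes: (n_models, n_bands); obs_flux/obs_err: (n_bands,)."""
    chi2 = np.sum(((grid_fluxes - obs_flux) / obs_err) ** 2, axis=1)
    i = int(np.argmin(chi2))
    return grid_params[i], float(chi2[i])

# Hypothetical 3-model grid over 4 photometric bands (all values invented).
grid = np.array([[1.0, 2.0, 3.0, 2.0],
                 [0.5, 1.0, 1.5, 1.0],
                 [2.0, 4.0, 6.0, 4.0]])
labels = ["low-mass", "mid-mass", "high-mass"]
obs = np.array([0.55, 1.05, 1.45, 0.95])
err = np.full(4, 0.1)
best, chi2_min = best_fit_from_grid(obs, err, grid, labels)
```

The caveat in the abstract applies directly to this scheme: a low chi-square only means the observation resembles some grid member, not that the grid member's physical parameters match the source.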

  8. Observation-driven adaptive differential evolution and its application to accurate and smooth bronchoscope three-dimensional motion tracking.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian; Mori, Kensaku

    2015-08-01

    This paper proposes an observation-driven adaptive differential evolution algorithm that fuses bronchoscopic video sequences, electromagnetic sensor measurements, and computed tomography images for accurate and smooth bronchoscope three-dimensional motion tracking. Currently, an electromagnetic tracker with a position sensor fixed at the bronchoscope tip is commonly used to estimate bronchoscope movements. The large tracking error incurred by using sensor measurements directly, which may be heavily degraded by patient respiratory motion and the magnetic field distortion of the tracker, limits clinical applications. How to effectively use sensor measurements for precise and stable bronchoscope electromagnetic tracking remains challenging. We here exploit an observation-driven adaptive differential evolution framework to address this challenge and boost the tracking accuracy and smoothness. Two features distinguish our framework from other adaptive differential evolution methods: (1) the current observation, comprising sensor measurements and bronchoscopic video images, is used in the mutation equation and the fitness computation, respectively; and (2) the mutation factor and the crossover rate are determined adaptively on the basis of the current image observation. The experimental results demonstrate that our framework provides much more accurate and smooth bronchoscope tracking than the state-of-the-art methods. Our approach reduces the tracking error from 3.96 to 2.89 mm, improves the tracking smoothness from 4.08 to 1.62 mm, and increases the visual quality from 0.707 to 0.741. PMID:25660001
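The underlying differential evolution machinery can be sketched generically. The snippet below is a plain DE/rand/1/bin minimizer in which the mutation factor F and crossover rate CR are merely jittered per generation, whereas the paper derives them from the current sensor and image observations; the test function and all parameter values are illustrative:

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, iters=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (generic sketch, not the paper's
    observation-driven variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        F = rng.uniform(0.4, 0.9)    # mutation factor, jittered per generation
        CR = rng.uniform(0.5, 0.95)  # crossover rate
        for i in range(pop_size):
            a, b, c = rng.choice(pop_size, size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            trial_cost = fitness(trial)
            if trial_cost < cost[i]:          # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = int(np.argmin(cost))
    return pop[best], float(cost[best])

# Usage: recover the minimum of a shifted sphere function at x = 1.5.
sphere = lambda x: float(np.sum((x - 1.5) ** 2))
x_best, f_best = differential_evolution(sphere, np.array([[-5.0, 5.0]] * 3))
```

In the tracking setting, the fitness would instead score a candidate bronchoscope pose against the current video frame and sensor reading.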

  9. Describing the Sequence of Cognitive Decline in Alzheimer’s Disease Patients: Results from an Observational Study

    PubMed Central

    Henneges, Carsten; Reed, Catherine; Chen, Yun-Fei; Dell’Agnello, Grazia; Lebrec, Jeremie

    2016-01-01

    Background: Improved understanding of the pattern of cognitive decline in Alzheimer’s disease (AD) would be useful to assist primary care physicians in explaining AD progression to patients and caregivers. Objective: To identify the sequence in which cognitive abilities decline in community-dwelling patients with AD. Methods: Baseline data were analyzed from 1,495 patients diagnosed with probable AD and a Mini-Mental State Examination (MMSE) score ≤ 26 enrolled in the 18-month observational GERAS study. Proportional odds logistic regression models were applied to model MMSE subscores (orientation, registration, attention and concentration, recall, language, and drawing) and the corresponding subscores of the cognitive subscale of the Alzheimer’s Disease Assessment Scale (ADAS-cog), using MMSE total score as the index of disease progression. Probabilities of impairment start and full impairment were estimated at each MMSE total score level. Results: From the estimated probabilities for each MMSE subscore as a function of the MMSE total score, the first aspect of cognition to start being impaired was recall, followed by orientation in time, attention and concentration, orientation in place, language, drawing, and registration. For full impairment in subscores, the sequence was recall, drawing, attention and concentration, orientation in time, orientation in place, registration, and language. The sequence of cognitive decline for the corresponding ADAS-cog subscores was remarkably consistent with this pattern. Conclusion: The sequence of cognitive decline in AD can be visualized in an animation using probability estimates for key aspects of cognition. This might be useful for clinicians to set expectations on disease progression for patients and caregivers. PMID:27079700

  10. X-ray and microwave emissions from the July 19, 2012 solar flare: Highly accurate observations and kinetic models

    NASA Astrophysics Data System (ADS)

    Gritsyk, P. A.; Somov, B. V.

    2016-08-01

    The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is several times lower than the observed one. Allowing for the acceleration of fast electrons in a collapsing magnetic trap removes this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ~5 × 10^10 erg cm^-2 s^-1, which exceeds the values typical of the thick-target model without a reverse current by a factor of ~5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.

  11. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

    Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to a volcano's hazard assessment. Because these thermal anomalies occupy only a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7, MODIS), the accurate determination of the hotspot's size and temperature is, however, problematic. Conventionally, this is overcome by comparing observations in at least two separate infrared spectral wavebands (the Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment, in which the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data, ranging from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary below which most remotely sensed volcanic hotspots fall. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
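The Dual-Band idea itself can be sketched as follows: model each band's pixel radiance as a mixture of hotspot and background blackbody radiances, eliminate the hotspot fraction p using band 1, and solve the band-2 residual for the hotspot temperature. The wavelengths, temperatures, and bisection bracket below are illustrative assumptions, not the experiment's values:

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * K * T))

def dual_band(L1, L2, lam1, lam2, Tb):
    """Dual-Band un-mixing sketch: each band's pixel radiance is modelled as
    L_i = p*B(lam_i, Th) + (1 - p)*B(lam_i, Tb), with background Tb known.
    Band 1 eliminates the hotspot fraction p; the band-2 residual is then
    bisected for the hotspot temperature Th."""
    def p_of(Th):
        return (L1 - planck(lam1, Tb)) / (planck(lam1, Th) - planck(lam1, Tb))
    def resid(Th):
        p = p_of(Th)
        return p * planck(lam2, Th) + (1.0 - p) * planck(lam2, Tb) - L2
    lo, hi = Tb + 1.0, 2000.0     # assumed bracket for Th
    for _ in range(100):          # plain bisection
        mid = 0.5 * (lo + hi)
        if resid(lo) * resid(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    Th = 0.5 * (lo + hi)
    return p_of(Th), Th

# Forward-model a pixel that is 3% hotspot at 873 K on a 300 K background
# (wavelengths roughly MIR/TIR), then recover (p, Th) from the radiances.
lam_mir, lam_tir = 3.9e-6, 11.0e-6
p_true, Th_true, Tb = 0.03, 873.0, 300.0
L1 = p_true * planck(lam_mir, Th_true) + (1 - p_true) * planck(lam_mir, Tb)
L2 = p_true * planck(lam_tir, Th_true) + (1 - p_true) * planck(lam_tir, Tb)
p_est, Th_est = dual_band(L1, L2, lam_mir, lam_tir, Tb)
```

With noise-free synthetic radiances the inversion is exact; the experiment's point is that sensor noise and sub-percent pixel fractions make it ill-conditioned in practice.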

  12. Towards a standard framework to describe behaviours in the common-sloth (Bradypus variegatus Schinz, 1825): novel interactions data observed in distinct fragments of the Atlantic forest, Brazil.

    PubMed

    Silva, S M; Clozato, C L; Moraes-Barros, N; Morgante, J S

    2013-08-01

    The common three-toed sloth is a widespread species, but locating and observing individuals is greatly hindered by its biological features: camouflaged pelage, slow and quiet movements, and strictly arboreal habits have resulted in the publication of sparse, fragmented, and unstandardized information on common sloth behaviour. We therefore propose an updated, standardized framework of behavioural categories for the study of the species. Furthermore, we describe two never-reported interaction behaviours: a probable mating/courtship ritual between a male and a female, and apparent recognition behaviour between two males. Finally, we highlight the contribution of short-duration fieldwork to the ethological study of this elusive species. PMID:24212693

  13. Importance of Accurate Liquid Water Path for Estimation of Solar Radiation in Warm Boundary Layer Clouds: An Observational Study

    SciTech Connect

    Sengupta, Manajit; Clothiaux, Eugene E.; Ackerman, Thomas P.; Kato, Seiji; Min, Qilong

    2003-09-15

    A one-year observational study of overcast boundary layer stratus at the U.S. Department of Energy Atmospheric Radiation Measurement Program Southern Great Plains site illustrates that surface radiation is primarily sensitive to cloud liquid water path, with cloud drop effective radius having a secondary influence. The mean, median, and standard deviation of cloud liquid water path for the dataset are 0.120 mm, 0.101 mm, and 0.108 mm, and those of cloud drop effective radius are 7.38 µm, 7.13 µm, and 2.39 µm, respectively. Radiative transfer calculations demonstrate that cloud optical depth and cloud normalized forcing are respectively three and six times as sensitive to liquid water path variations as they are to effective radius variations, when the observed ranges of each of those variables are considered. Overall, there is a 79% correlation between observed and computed surface fluxes when using a fixed effective radius of 7.5 µm and observed liquid water paths in the calculations. One conclusion from this study is that measurement of the indirect aerosol effect will be problematic at the site, as variations in cloud liquid water path will most likely mask the effects of variations in particle size.
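The direction of these sensitivities follows from the standard warm-cloud relation between optical depth, liquid water path, and effective radius, tau ≈ 3·LWP/(2·ρw·re): tau is linear in LWP and inversely proportional to re. A minimal sketch (this textbook approximation stands in for the paper's full radiative transfer calculations):

```python
RHO_W = 1000.0  # density of liquid water, kg m^-3

def cloud_optical_depth(lwp_mm, r_eff_um):
    """tau = 3*LWP / (2*rho_w*r_eff), the standard warm-cloud
    approximation; 1 mm of liquid water path equals 1 kg m^-2."""
    lwp = lwp_mm * 1.0       # kg m^-2
    r_eff = r_eff_um * 1e-6  # m
    return 3.0 * lwp / (2.0 * RHO_W * r_eff)

# The dataset's mean LWP with the fixed 7.5 um effective radius:
tau = cloud_optical_depth(0.120, 7.5)  # ~24; doubles with LWP, halves with r_eff
```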

  14. A one-dimensional model describing aerosol formation and evolution in the stratosphere. I - Physical processes and mathematical analogs. II - Sensitivity studies and comparison with observations

    NASA Technical Reports Server (NTRS)

    Turco, R. P.; Hamill, P.; Toon, O. B.; Whitten, R. C.; Kiang, C. S.

    1979-01-01

    A new time-dependent one-dimensional model of the stratospheric sulfate aerosol layer is developed. The model treats atmospheric photochemistry and aerosol physics in detail and includes the interaction between gases and particles explicitly. It is shown that the numerical algorithms used in the model are quite precise. Sensitivity studies and comparison with observations are made. The simulated aerosol physics generates a particle layer with most of the observed properties. The sensitivity of the calculated properties to changes in a large number of aeronomic aerosol parameters is discussed in some detail. The sensitivity analysis reveals areas where the aerosol model is most uncertain. New observations are suggested that might help resolve important questions about the origin of the stratospheric aerosol layer.

  15. CC/DFT Route toward Accurate Structures and Spectroscopic Features for Observed and Elusive Conformers of Flexible Molecules: Pyruvic Acid as a Case Study.

    PubMed

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Cimino, Paola; Penocchio, Emanuele; Puzzarini, Cristina

    2015-09-01

    The structures and relative stabilities as well as the rotational and vibrational spectra of the three low-energy conformers of pyruvic acid (PA) have been characterized using a state-of-the-art quantum-mechanical approach designed for flexible molecules. By making use of the available experimental rotational constants for several isotopologues of the most stable PA conformer, Tc-PA, the semiexperimental equilibrium structure has been derived. The latter provides a reference for the pure theoretical determination of the equilibrium geometries for all conformers, thus confirming for these structures an accuracy of 0.001 Å and 0.1 deg for bond lengths and angles, respectively. Highly accurate relative energies of all conformers (Tc-, Tt-, and Ct-PA) and of the transition states connecting them are provided along with the thermodynamic properties at low and high temperatures, thus leading to conformational enthalpies accurate to 1 kJ mol(-1). Concerning microwave spectroscopy, rotational constants accurate to about 20 MHz are provided for the Tt- and Ct-PA conformers, together with the computed centrifugal-distortion constants and dipole moments required to simulate their rotational spectra. For Ct-PA, vibrational frequencies in the mid-infrared region accurate to 10 cm(-1) are reported along with theoretical estimates for the transitions in the near-infrared range, and the corresponding infrared spectrum including fundamental transitions, overtones, and combination bands has been simulated. In addition to the new data described above, theoretical results for the Tc- and Tt-PA conformers are compared with all available experimental data to further confirm the accuracy of the hybrid coupled-cluster/density functional theory (CC/DFT) protocol applied in the present study. Finally, we discuss in detail the accuracy of computational models fully based on double-hybrid DFT functionals (mainly at the B2PLYP/aug-cc-pVTZ level) that avoid the use of very expensive CC

  16. Learning to Describe, Describing to Understand

    ERIC Educational Resources Information Center

    Knoester, Matthew

    2008-01-01

    In this essay, the author describes his understanding and experience with descriptive review processes, as developed by Patricia Carini (Himley 2000) and members of the Prospect Center in North Bennington, VT. The author critically reviews the benefits and limitations of using descriptive review as a form of assessment of students, teaching…

  17. A new coarse-grained model for E. coli cytoplasm: accurate calculation of the diffusion coefficient of proteins and observation of anomalous diffusion.

    PubMed

    Hasnain, Sabeeha; McClendon, Christopher L; Hsu, Monica T; Jacobson, Matthew P; Bandyopadhyay, Pradipta

    2014-01-01

    A new coarse-grained model of the E. coli cytoplasm is developed by describing the proteins of the cytoplasm as flexible units consisting of one or more spheres that follow Brownian dynamics (BD), with hydrodynamic interactions (HI) accounted for by a mean-field approach. Extensive BD simulations were performed to calculate the diffusion coefficients of three different proteins in the cellular environment. The results are in close agreement with experimental or previously simulated values, where available. Control simulations without HI showed that use of HI is essential to obtain accurate diffusion coefficients. Anomalous diffusion inside the crowded cellular medium was investigated with Fractional Brownian motion analysis, and found to be present in this model. By running a series of control simulations in which various forces were removed systematically, it was found that repulsive interactions (volume exclusion) are the main cause for anomalous diffusion, with a secondary contribution from HI. PMID:25180859
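The simplest version of the BD scheme, free diffusion without forces or hydrodynamic interactions, shows how a diffusion coefficient is recovered from the mean-squared displacement; crowding and HI, as in the cytoplasm model, would lower the recovered D and make the MSD sub-linear. Parameters below are arbitrary reduced units, not the paper's:

```python
import numpy as np

def brownian_trajectories(D, dt, n_steps, n_particles, seed=0):
    """Free Brownian dynamics (no forces, no hydrodynamic interactions):
    each step adds Gaussian displacements of variance 2*D*dt per axis."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), (n_steps, n_particles, 3))
    return np.cumsum(steps, axis=0)

def estimate_D(traj, dt):
    """Recover D from the mean-squared displacement, MSD(t) = 6*D*t in 3-D."""
    t = dt * np.arange(1, traj.shape[0] + 1)
    msd = np.mean(np.sum(traj ** 2, axis=2), axis=1)
    return float(np.mean(msd / (6.0 * t)))

# Arbitrary reduced units; the input D = 1 should be recovered.
traj = brownian_trajectories(D=1.0, dt=1e-3, n_steps=2000, n_particles=500)
D_est = estimate_D(traj, dt=1e-3)
```

Anomalous diffusion would show up here as an MSD growing like t^alpha with alpha < 1, which is what the fractional Brownian motion analysis in the paper tests for.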

  18. CLARREO Cornerstone of the Earth Observing System: Measuring Decadal Change Through Accurate Emitted Infrared and Reflected Solar Spectra and Radio Occultation

    NASA Technical Reports Server (NTRS)

    Sandford, Stephen P.

    2010-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is one of four Tier 1 missions recommended by the recent NRC Decadal Survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to provide accurate, broadly acknowledged climate records that are used to enable validated long-term climate projections that become the foundation for informed decisions on mitigation and adaptation policies that address the effects of climate change on society. The CLARREO mission accomplishes this critical objective through rigorous SI traceable decadal change observations that are sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. These same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. For the first time CLARREO will make highly accurate, global, SI-traceable decadal change observations sensitive to the most critical, but least understood, climate forcings, responses, and feedbacks. The CLARREO breakthrough is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. The required accuracy levels are determined so that climate trend signals can be detected against a background of naturally occurring variability. Climate system natural variability therefore determines what level of accuracy is overkill, and what level is critical to obtain. In this sense, the CLARREO mission requirements are considered optimal from a science value perspective. The accuracy for decadal change traceability to SI standards includes uncertainties associated with instrument calibration, satellite orbit sampling, and analysis methods. Unlike most space missions, the CLARREO requirements are driven not by the instantaneous accuracy of the measurements, but by accuracy in

  19. Observed allocations of productivity and biomass, and turnover times in tropical forests are not accurately represented in CMIP5 Earth system models

    NASA Astrophysics Data System (ADS)

    Negrón-Juárez, Robinson I.; Koven, Charles D.; Riley, William J.; Knox, Ryan G.; Chambers, Jeffrey Q.

    2015-06-01

    A significant fraction of anthropogenic CO2 emissions is assimilated by tropical forests and stored as biomass, slowing the accumulation of CO2 in the atmosphere. Because different plant tissues have different functional roles and turnover times, predictions of carbon balance of tropical forests depend on how earth system models (ESMs) represent the dynamic allocation of productivity to different tree compartments. This study shows that observed allocation of productivity, biomass, and turnover times of main tree compartments (leaves, wood, and roots) are not accurately represented in Coupled Model Intercomparison Project Phase 5 ESMs. In particular, observations indicate that biomass saturates with increasing productivity. In contrast, most models predict continuous increases in biomass with increases in productivity. This bias may lead to an over-prediction of carbon uptake in response to CO2 or climate-driven changes in productivity. Compartment-specific productivity and biomass are useful benchmarks to assess terrestrial ecosystem model performance. Improvements in the predicted allocation patterns and turnover times by ESMs will reduce uncertainties in climate predictions.

  20. Describing Cognitive Structure.

    ERIC Educational Resources Information Center

    White, Richard T.

    This paper discusses questions pertinent to a definition of cognitive structure as the knowledge one possesses and the manner in which it is arranged, and considers how to select or devise methods of describing cognitive structure. The main purpose in describing cognitive structure is to see whether differences in memory (or cognitive structure)…

  1. Describe Your Favorite Teacher.

    ERIC Educational Resources Information Center

    Dill, Isaac; Dill, Vicky

    1993-01-01

    A third grader describes Ms. Gonzalez, his favorite teacher, who left to accept a more lucrative teaching assignment. Ms. Gonzalez's butterfly unit covered everything from songs about social butterflies to paintings of butterfly wings, anatomy studies, and student haiku poems and biographies. Students studied biology by growing popcorn plants…

  2. New described dermatological disorders.

    PubMed

    Gönül, Müzeyyen; Cevirgen Cemil, Bengu; Keseroglu, Havva Ozge; Kaya Akis, Havva

    2014-01-01

    Many advances in dermatology have been made in recent years. In the present review article, newly described disorders from the last six years are presented in detail. We divided these reports into different sections, including syndromes, autoinflammatory diseases, tumors, and unclassified disease. Syndromes included are "circumferential skin creases Kunze type" and "unusual type of pachyonychia congenita or a new syndrome"; autoinflammatory diseases include "chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature (CANDLE) syndrome," "pyoderma gangrenosum, acne, and hidradenitis suppurativa (PASH) syndrome," and "pyogenic arthritis, pyoderma gangrenosum, acne, and hidradenitis suppurativa (PAPASH) syndrome"; tumors include "acquired reactive digital fibroma," "onychocytic matricoma and onychocytic carcinoma," "infundibulocystic nail bed squamous cell carcinoma," and "acral histiocytic nodules"; unclassified disorders include "saurian papulosis," "symmetrical acrokeratoderma," "confetti-like macular atrophy," and "skin spicules," "erythema papulosa semicircularis recidivans." PMID:25243162

  3. New Described Dermatological Disorders

    PubMed Central

    Cevirgen Cemil, Bengu; Keseroglu, Havva Ozge; Kaya Akis, Havva

    2014-01-01

    Many advances in dermatology have been made in recent years. In the present review article, newly described disorders from the last six years are presented in detail. We divided these reports into different sections, including syndromes, autoinflammatory diseases, tumors, and unclassified disease. Syndromes included are “circumferential skin creases Kunze type” and “unusual type of pachyonychia congenita or a new syndrome”; autoinflammatory diseases include “chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature (CANDLE) syndrome,” “pyoderma gangrenosum, acne, and hidradenitis suppurativa (PASH) syndrome,” and “pyogenic arthritis, pyoderma gangrenosum, acne, and hidradenitis suppurativa (PAPASH) syndrome”; tumors include “acquired reactive digital fibroma,” “onychocytic matricoma and onychocytic carcinoma,” “infundibulocystic nail bed squamous cell carcinoma,” and “acral histiocytic nodules”; unclassified disorders include “saurian papulosis,” “symmetrical acrokeratoderma,” “confetti-like macular atrophy,” and “skin spicules,” “erythema papulosa semicircularis recidivans.” PMID:25243162

  4. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
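A network of the shape described (one hidden layer, eight nodes) can be sketched with plain full-batch backprop instead of Quickprop; a toy smooth curve stands in for the CH4-N2O relation, and all hyperparameters below are illustrative:

```python
import numpy as np

def train_mlp(x, y, hidden=8, lr=0.1, epochs=5000, seed=0):
    """One-hidden-layer tanh network fitted by full-batch gradient
    descent (the study used Quickprop; plain backprop keeps this short)."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 0.5, (x.shape[1], hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    n = len(x)
    for _ in range(epochs):
        h = np.tanh(x @ w1 + b1)            # hidden activations
        pred = h @ w2 + b2
        g = 2.0 * (pred - y) / n            # dMSE/dpred
        gh = (g @ w2.T) * (1.0 - h ** 2)    # backprop through tanh
        w2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
        w1 -= lr * (x.T @ gh); b1 -= lr * gh.sum(axis=0)
    return lambda xq: np.tanh(xq @ w1 + b1) @ w2 + b2

# Toy stand-in for a smooth tracer-tracer relation (e.g. CH4 vs N2O).
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = x ** 2
model = train_mlp(x, y)
mse = float(np.mean((model(x) - y) ** 2))
```

The real network adds latitude, pressure, and time of year as further input columns, which is a matter of widening `x`.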

  5. Some properties of negative cloud-to-ground flashes from observations of a local thunderstorm based on accurate-stroke-count studies

    NASA Astrophysics Data System (ADS)

    Zhu, Baoyou; Ma, Ming; Xu, Weiwei; Ma, Dong

    2015-12-01

    Properties of negative cloud-to-ground (CG) lightning flashes, in terms of the number of strokes per flash, inter-stroke intervals, and the relative intensity of subsequent and first strokes, were presented by accurate-stroke-count studies based on all 1085 negative flashes from a local thunderstorm. The percentage of single-stroke flashes and the stroke multiplicity evolved significantly during the whole life cycle of the study thunderstorm. The occurrence probability of negative CG flashes decreased exponentially with the increasing number of strokes per flash. About 30.5% of negative CG flashes contained only one stroke, and the number of strokes per flash averaged 3.3. In a subset of 753 negative multiple-stroke flashes, about 41.4% contained at least one subsequent stroke stronger than the corresponding first stroke. Subsequent strokes tended to decrease in strength with their order, and the ratio of subsequent to first stroke peaks had a geometric mean of 0.52. Interestingly, negative CG flashes of higher multiplicity tended to have stronger initial strokes. The 2525 inter-stroke intervals followed an approximately log-normal distribution with a geometric mean of 62 ms. For CG flashes of a given multiplicity, geometric mean inter-stroke intervals tended to decrease with the increasing number of strokes per flash, while intervals associated with higher-order strokes tended to be larger than those associated with lower-order strokes.
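The geometric-mean statistics reported above are the natural location measure for log-normally distributed intervals. A minimal sketch on synthetic data; only the 62 ms geometric mean and the count of 2525 are taken from the abstract, the spread is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic inter-stroke intervals (ms), drawn log-normally around the
# abstract's reported geometric mean of 62 ms (sigma is invented).
gm_true, sigma = 62.0, 0.6
intervals = rng.lognormal(mean=np.log(gm_true), sigma=sigma, size=2525)

# Geometric mean = exp(mean of the logs); geometric standard deviation
# likewise comes from the log-domain spread.
logs = np.log(intervals)
gm = np.exp(logs.mean())
gsd = np.exp(logs.std())

print(round(gm, 1), round(gsd, 2))
```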

  6. Masses of the components of SB2 binaries observed with Gaia - III. Accurate SB2 orbits for 10 binaries and masses of HIP 87895

    NASA Astrophysics Data System (ADS)

    Kiefer, F.; Halbwachs, J.-L.; Arenou, F.; Pourbaix, D.; Famaey, B.; Guillout, P.; Lebreton, Y.; Nebot Gómez-Morán, A.; Mazeh, T.; Salomon, J.-B.; Soubiran, C.; Tal-Or, L.

    2016-05-01

    In anticipation of the Gaia astrometric mission, a large sample of spectroscopic binaries has been observed since 2010 with the Spectrographe pour l'Observation des PHénomènes des Intérieurs Stellaires et des Exoplanètes spectrograph at the Haute-Provence Observatory. Our aim is to derive the orbital elements of double-lined spectroscopic binaries (SB2s) with an accuracy sufficient to finally obtain the masses of the components with relative errors as small as 1 per cent when the astrometric measurements of Gaia are taken into account. In this paper, we present the results from five years of observations of 10 SB2 systems with periods ranging from 37 to 881 d. Using the TODMOR algorithm, we computed radial velocities from the spectra, and then derived the orbital elements of these binary systems. The minimum masses of the components are then obtained with an accuracy better than 1.2 per cent for the 10 binaries. Combining the radial velocities with existing interferometric measurements, we derived the masses of the primary and secondary components of HIP 87895 with an accuracy of 0.98 and 1.2 per cent, respectively.

  7. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  8. Accomplishments of the MUSICA project to provide accurate, long-term, global and high-resolution observations of tropospheric {H2O,δD} pairs - a review

    NASA Astrophysics Data System (ADS)

    Schneider, Matthias; Wiegele, Andreas; Barthlott, Sabine; González, Yenny; Christner, Emanuel; Dyroff, Christoph; García, Omaira E.; Hase, Frank; Blumenstock, Thomas; Sepúlveda, Eliezer; Mengistu Tsidu, Gizaw; Takele Kenea, Samuel; Rodríguez, Sergio; Andrey, Javier

    2016-07-01

    In the lower/middle troposphere, {H2O,δD} pairs are good proxies for moisture pathways; however, their observation, in particular when using remote sensing techniques, is challenging. The project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water) addresses this challenge by integrating the remote sensing with in situ measurement techniques. The aim is to retrieve calibrated tropospheric {H2O,δD} pairs from the middle infrared spectra measured from ground by FTIR (Fourier transform infrared) spectrometers of the NDACC (Network for the Detection of Atmospheric Composition Change) and the thermal nadir spectra measured by IASI (Infrared Atmospheric Sounding Interferometer) aboard the MetOp satellites. In this paper, we present the final MUSICA products, and discuss the characteristics and potential of the NDACC/FTIR and MetOp/IASI {H2O,δD} data pairs. First, we briefly summarize the particularities of an {H2O,δD} pair retrieval. Second, we show that the remote sensing data of the final product version are absolutely calibrated with respect to H2O and δD in situ profile references measured in the subtropics, between 0 and 7 km. Third, we reveal that the {H2O,δD} pair distributions obtained from the different remote sensors are consistent and allow distinct lower/middle tropospheric moisture pathways to be identified in agreement with multi-year in situ references. Fourth, we document the possibilities of the NDACC/FTIR instruments for climatological studies (due to long-term monitoring) and of the MetOp/IASI sensors for observing diurnal signals on a quasi-global scale and with high horizontal resolution. Fifth, we discuss the risk of misinterpreting {H2O,δD} pair distributions due to incomplete processing of the remote sensing products.

  9. Model describes subsea control dynamics

    SciTech Connect

    Not Available

    1988-02-01

    A mathematical model of the hydraulic control systems for subsea completions and their umbilicals has been developed and applied successfully to Jabiru and Challis field production projects in the Timor Sea. The model overcomes the limitations of conventional linear steady state models and yields for the hydraulic system an accurate description of its dynamic response, including the valve shut-in times and the pressure transients. Results of numerical simulations based on the model are in good agreement with measurements of the dynamic response of the tree valves and umbilicals made during land testing.

  10. Utilizing prospective sequence analysis of SHH, ZIC2, SIX3 and TGIF in holoprosencephaly probands to describe the parameters limiting the observed frequency of mutant gene x gene interactions

    PubMed Central

    Roessler, Erich; Vélez, Jorge I.; Zhou, Nan; Muenke, Maximilian

    2012-01-01

    Clinical molecular diagnostic centers routinely screen SHH, ZIC2, SIX3 and TGIF for mutations that can help to explain holoprosencephaly and related brain malformations. Here we report a prospective Sanger sequence analysis of 189 unrelated probands referred to our diagnostic lab for genetic testing. We identified 28 novel unique mutations in this group (15%) and no instances of deleterious mutations in two genes in the same subject. Our result extends that of other diagnostic centers and suggests that among the aggregate 475 prospectively sequenced holoprosencephaly probands there is negligible evidence for direct gene-gene interactions among these tested genes. We model the predictions of the observed mutation frequency in the context of the hypothesis that gene x gene interactions are a prerequisite for forebrain malformations, i.e. the “multiple-hit” hypothesis. We conclude that such a direct interaction would be expected to be rare and that more subtle genetic and environmental interactions are a better explanation for the clinically observed inter- and intra-familial variability. PMID:22310223
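The expected frequency of double-gene hits under independence can be illustrated with a back-of-envelope binomial calculation. The 15% aggregate mutation rate and the 475-proband aggregate are from the abstract; the even split of that rate across the four genes and full independence are loudly simplified assumptions, not the authors' model:

```python
n_probands = 475
p_any = 0.15          # aggregate deleterious-mutation rate (from the paper)
q = p_any / 4         # per-gene rate, assuming an even split (assumption)

# Probability a proband carries hits in >= 2 of the 4 genes, assuming
# independent per-gene hits (a deliberately naive model).
p0 = (1 - q) ** 4               # no gene hit
p1 = 4 * q * (1 - q) ** 3       # exactly one gene hit
p_multi = 1 - p0 - p1

expected_doubles = n_probands * p_multi
print(round(p_multi, 4), round(expected_doubles, 2))
```

Even under this naive independence assumption, only a handful of digenic probands would be expected, consistent with the paper's conclusion that direct gene-gene hits should be rare.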

  11. A Fibre-Reinforced Poroviscoelastic Model Accurately Describes the Biomechanical Behaviour of the Rat Achilles Tendon

    PubMed Central

    Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna

    2015-01-01

    Background Computational models of Achilles tendons can help in understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon’s biomechanical response. However, available models of the Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the biomechanical behaviour of the rat Achilles tendon found experimentally. Materials and Methods We developed a new material model of the Achilles tendon, which considers the tendon’s main constituents, namely water, proteoglycan matrix, and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created for 9 rat Achilles tendons from an animal experiment, and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and softer tendon behaviour at reduced strain rates, and a stiffer, more brittle response at higher strain rates. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour, in which larger strains produced slower relaxation rates than smaller strains.
Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon’s viscoelastic response. In conclusion, this model can capture the repetitive loading and unloading behaviour of intact and healthy Achilles tendons, which is a critical first step towards understanding tendon homeostasis and function as this biomechanical response changes in diseased tendons. PMID:26030436
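The calibration step described above, minimizing the RMS error between model output and experimental force data, can be sketched with a toy exponential stress-relaxation model standing in for the specimen-specific finite element model. All parameter values, the model form, and the grid search are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stress-relaxation "experiment": F(t) = F_inf + dF * exp(-t / tau)
t = np.linspace(0.0, 10.0, 200)
F_inf_true, dF_true, tau_true = 5.0, 3.0, 1.5
data = F_inf_true + dF_true * np.exp(-t / tau_true) + 0.05 * rng.normal(size=t.size)

def rms(params):
    """Root mean squared error between model curve and 'experiment'."""
    F_inf, dF, tau = params
    model = F_inf + dF * np.exp(-t / tau)
    return np.sqrt(np.mean((model - data) ** 2))

# Coarse grid search minimizing RMS (the study used a proper optimizer)
best = min(
    ((F_inf, dF, tau)
     for F_inf in np.linspace(4.0, 6.0, 21)
     for dF in np.linspace(2.0, 4.0, 21)
     for tau in np.linspace(0.5, 3.0, 26)),
    key=rms,
)
print(best, round(rms(best), 3))
```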

  12. How accurately can the microcanonical ensemble describe small isolated quantum systems?

    NASA Astrophysics Data System (ADS)

    Ikeda, Tatsuhiko N.; Ueda, Masahito

    2015-08-01

    We numerically investigate quantum quenches of a nonintegrable hard-core Bose-Hubbard model to test the accuracy of the microcanonical ensemble in small isolated quantum systems. We show that, in a certain range of system size, the accuracy increases with the dimension of the Hilbert space D as 1/D. We ascribe this rapid improvement to the absence of correlations between many-body energy eigenstates. Outside of that range, the accuracy is found to scale either as 1/√D or algebraically with the system size.

  13. A Simple Three Pool Model Accurately Describes Patterns of Long Term Litter Decomposition in Diverse Climates

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The ability of ecosystems to sequester carbon (C) is largely dependent on how global changes in climate will alter the balance between rates of decomposition and net primary production. The response of primary production to changes in climate has been examined using reasonably well-validated mechan...

  14. Describing Control in Educational Organizations.

    ERIC Educational Resources Information Center

    Renihan, P. J.; Renihan, F. I.

    This paper describes the construction and application of a framework to investigate control at the policy-making level in education. The minutes of the regular meetings of 21 school boards in British Columbia were analyzed for the period January to December of 1975. Construction of the framework involved (1) definitions of control and…

  15. How to describe disordered structures

    NASA Astrophysics Data System (ADS)

    Nishio, Kengo; Miyazaki, Takehide

    2016-04-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way.

  16. How to describe disordered structures.

    PubMed

    Nishio, Kengo; Miyazaki, Takehide

    2016-01-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way. PMID:27064833

  17. How to describe disordered structures

    PubMed Central

    Nishio, Kengo; Miyazaki, Takehide

    2016-01-01

    Disordered structures such as liquids and glasses, grains and foams, galaxies, etc. are often represented as polyhedral tilings. Characterizing the associated polyhedral tiling is a promising strategy to understand the disordered structure. However, since a variety of polyhedra are arranged in complex ways, it is challenging to describe what polyhedra are tiled in what way. Here, to solve this problem, we create the theory of how the polyhedra are tiled. We first formulate an algorithm to convert a polyhedron into a codeword that instructs how to construct the polyhedron from its building-block polygons. By generalizing the method to polyhedral tilings, we describe the arrangements of polyhedra. Our theory allows us to characterize polyhedral tilings, and thereby paves the way to study from short- to long-range order of disordered structures in a systematic way. PMID:27064833

  18. Describing Young Children's Deductive Reasoning.

    ERIC Educational Resources Information Center

    Reid, David A.

    This paper reports results related to the development of a consistent descriptive language for research on mathematical reasoning. Ways of reasoning deductively are highlighted, using examples drawn from observations of young students. One-step deductions versus multi-step deductions, known versus hypothetical premises, and single versus multiple…

  19. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  20. Large robotized turning centers described

    NASA Astrophysics Data System (ADS)

    Kirsanov, V. V.; Tsarenko, V. I.

    1985-09-01

    The introduction of numerical control (NC) machine tools has made it possible to automate machining in series and small series production. The organization of automated production sections merged NC machine tools with automated transport systems. However, both the one and the other require the presence of an operative at the machine for low-skilled operations. Industrial robots perform a number of auxiliary operations, such as equipment loading-unloading and control, changing cutting and auxiliary tools, controlling workpieces and parts, and cleaning of location surfaces. When used with a group of equipment they perform transfer operations between the machine tools. Industrial robots eliminate the need for workers to perform auxiliary operations. This underscores the importance of developing robotized manufacturing centers providing for minimal human participation in production and creating conditions for two- and three-shift operation of equipment. Work carried out at several robotized manufacturing centers for series and small series production is described.

  1. Five Describing Factors of Dyslexia.

    PubMed

    Tamboer, Peter; Vorst, Harrie C M; Oort, Frans J

    2016-09-01

    Two subtypes of dyslexia (phonological, visual) have been under debate in various studies. However, the number of symptoms of dyslexia described in the literature exceeds the number of subtypes, and underlying relations remain unclear. We investigated underlying cognitive features of dyslexia with exploratory and confirmatory factor analyses. A sample of 446 students (63 with dyslexia) completed a large test battery and a large questionnaire. Five factors were found in the test battery and five in the questionnaire. These 10 factors loaded on 5 latent factors (spelling, phonology, short-term memory, rhyme/confusion, and whole-word processing/complexity), which explained 60% of total variance. Three analyses supported the validity of these factors. A confirmatory factor analysis fit with a solution of five factors (RMSEA = .03). Those with dyslexia differed from those without dyslexia on all factors. A combination of five factors provided reliable predictions of dyslexia and nondyslexia (accuracy >90%). We also looked for factorial deficits on an individual level to construct subtypes of dyslexia, but found varying profiles. We concluded that a multiple cognitive deficit model of dyslexia is supported, whereas the existence of subtypes remains unclear. We discussed the results in relation to advanced compensation strategies of students, measures of intelligence, and various correlations within groups of those with and without dyslexia. PMID:25398549
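The exploratory step above (extracting a few latent factors that explain most of the variance in many observed scores) can be sketched on synthetic data. Here a plain SVD/PCA stands in for the study's factor extraction, and the sample size, loadings, and two-factor structure are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "test battery": 6 observed scores driven by 2 latent factors
# (a stand-in for the study's spelling/phonology-style structure).
n = 446
latent = rng.normal(size=(n, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
X = latent @ loadings.T + 0.3 * rng.normal(size=(n, 6))

# Standardize, then use SVD/PCA as a simple factor-extraction proxy.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# Share of total variance captured by the first two components
print(round(float(explained[:2].sum()), 2))
```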

  2. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants which preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
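The baseline the abstract improves upon, a standard monotone piecewise cubic (the Fritsch-Carlson scheme, second-order accurate near extrema), is available as SciPy's PCHIP interpolator. A minimal sketch, not the paper's higher-order algorithm; the data are invented:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with a flat run; an unconstrained cubic spline would
# overshoot at the jump near x=3, while PCHIP stays monotone.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.1, 0.2, 2.0, 2.0, 2.1])

p = PchipInterpolator(x, y)
xf = np.linspace(0.0, 5.0, 501)
yf = p(xf)

print(float(yf.min()), float(yf.max()))
```

The interpolant reproduces the knots exactly and never decreases between them, which is the monotonicity property the abstract's algorithms preserve while restoring higher-order accuracy.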

  3. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  4. Quantum formalism to describe binocular rivalry.

    PubMed

    Manousakis, Efstratios

    2009-11-01

    On the basis of the general character and operation of the process of perception, a formalism is sought to mathematically describe the subjective or abstract/mental process of perception. It is shown that the formalism of orthodox quantum theory of measurement, where the observer plays a key role, is a broader mathematical foundation which can be adopted to describe the dynamics of the subjective experience. The mathematical formalism describes the psychophysical dynamics of the subjective or cognitive experience as communicated to us by the subject. Subsequently, the formalism is used to describe simple perception processes and, in particular, to describe the probability distribution of dominance duration obtained from the testimony of subjects experiencing binocular rivalry. Using this theory and parameters based on known values of neuronal oscillation frequencies and firing rates, the calculated probability distribution of dominance duration of rival states in binocular rivalry under various conditions is found to be in good agreement with available experimental data. This theory naturally explains an observed marked increase in dominance duration in binocular rivalry upon periodic interruption of stimulus and yields testable predictions for the distribution of perceptual alternation in time. PMID:19520143

  5. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a time device more stable than conventional atomic clocks. The areas of application of the ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.

  6. CRITICAL ELEMENTS IN DESCRIBING AND UNDERSTANDING OUR NATION'S AQUATIC RESOURCES

    EPA Science Inventory

    Despite spending $115 billion per year on environmental actions in the United States, we have only a limited ability to describe the effectiveness of these expenditures. Moreover, after decades of such investments, we cannot accurately describe status and trends in the nation's a...

  7. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot and field scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. Our approach was to use the range in local-scale model parameter results to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured to describe complex ecosystems and landscapes in which cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy driven land management practices and the value of sustainable resources to all shareholders.

  8. [Who really first described lesser blood circulation?].

    PubMed

    Masić, Izet; Dilić, Mirza

    2007-01-01

    More than 740 years ago, Ibn al-Nafis (1210-1288), professor and director of the Al Mansouri Hospital in Cairo, described the lesser (pulmonary) circulation in his treatise on the pulse. His name appears frequently in the most popular web search engines, especially in English. Most citations of Ibn al-Nafis are in Arabic or Turkish, although his discovery is of worldwide importance. Masić I. (1993) is among the few who highlighted this event in indexed journals, and it was also debated by authors from Great Britain and the USA in the respected journal Annals of Internal Medicine. Most citations instead credit two later "describers" or "discoverers" of the pulmonary circulation: Michael Servetus (1511-1553), physician and theologian, and William Harvey (1578-1657), who described the circulatory system in "Exercitatio anatomica de motu cordis et sanguinis in animalibus," published in 1628. For his scientific work, Ibn al-Nafis has been called the "Second Avicenna." Over the centuries, some of his papers were translated into Latin, and some were published as reprints in Arabic. Professor Fuat Sezgin of Frankfurt published a compendium of Ibn al-Nafis's papers in 1997, and Masić I. (1997) published a monograph on Ibn al-Nafis. The epochal nature of Ibn al-Nafis's discovery lies in the fact that it was based solely on deduction, since his description of the lesser circulation could not have arisen from observation of corpses during dissection. It is known that he did not defer to Galen's theories of the circulation. His prophetic sentence reads: "If I did not know that my works will last up to ten thousand years after me, I would not have written them." Sapienti sat. PMID:21553447

  9. Accurate and occlusion-robust multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.

  10. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  11. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  12. Describing functions for nonlinear optical systems.

    PubMed

    Ghosh, A K

    1997-10-10

    The concept of describing functions is useful for analyzing and designing nonlinear systems. A proposal for using the idea of describing functions for studying the behavior of a nonlinear optical processing system is given. The describing function can be used in the same way that a coherent transfer function or optical transfer function is used to characterize linear, shift-invariant optical processors. Two coherent optical systems for measuring the magnitude of the describing function of nonlinear optical processors are suggested. PMID:18264243
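The describing function mentioned above is the ratio of the fundamental harmonic of a nonlinearity's output to the amplitude of a sinusoidal input. A minimal numerical sketch for an ideal saturation nonlinearity (a classic textbook case, not the optical processors of the paper; the saturation limit and amplitudes are invented), checked against the closed-form result:

```python
import numpy as np

def describing_function_saturation(A, a=1.0, n=20000):
    """Fundamental-harmonic gain of an ideal saturation with limit a,
    for a sinusoidal input of amplitude A, computed numerically."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    out = np.clip(A * np.sin(theta), -a, a)
    b1 = (2.0 / n) * np.sum(out * np.sin(theta))  # fundamental sine coeff
    return b1 / A

def analytic(A, a=1.0):
    """Closed-form describing function of the same saturation."""
    if A <= a:
        return 1.0                     # no clipping: unit gain
    r = a / A
    return (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r * r))

for A in (0.5, 1.5, 3.0):
    print(A, round(describing_function_saturation(A), 4), round(analytic(A), 4))
```

The same numerical recipe (drive with a sinusoid, project the output onto the fundamental) is what an optical measurement of the describing function would implement physically.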

  13. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
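The combination described, a second-order finite-difference discretization plus Richardson extrapolation, can be illustrated on the harmonic oscillator, whose exact ground-state energy is 1/2 in dimensionless units. This sketch follows the general recipe rather than the authors' exact scheme:

```python
import numpy as np

def ground_state_energy(n, L=10.0):
    """Lowest eigenvalue of H = -(1/2) d^2/dx^2 + x^2/2 discretized with
    second-order central differences on n points spanning [-L, L]."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    H = (np.diag(1.0 / h**2 + 0.5 * x**2)
         + np.diag(-0.5 / h**2 * np.ones(n - 1), 1)
         + np.diag(-0.5 / h**2 * np.ones(n - 1), -1))
    return np.linalg.eigvalsh(H)[0]

E_h = ground_state_energy(201)       # mesh spacing h = 0.1
E_h2 = ground_state_energy(401)      # mesh spacing h/2
E_rich = (4.0 * E_h2 - E_h) / 3.0    # Richardson step: cancels the O(h^2) error
```

Because the discretization error is O(h^2), combining the two mesh results as (4*E(h/2) - E(h))/3 removes the leading error term and gains several digits over either crude-mesh value.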

  14. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
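The underlying idea, recovering a displacement from the peak of a cross-correlation, can be sketched with FFTs. This is a generic integer-pixel version, not the IUE pipeline's fixed-pattern technique:

```python
import numpy as np

def register_shift(a, b):
    """Integer-pixel shift (dy, dx) such that rolling image a by (dy, dx)
    best matches b, found at the peak of the circular cross-correlation
    computed via the FFT."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peaks in the upper half of each axis back to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Subpixel refinement (e.g. fitting a parabola around the correlation peak) would be needed for registration to the precision the IUE application demands.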

  15. Cellular automata to describe seismicity: A review

    NASA Astrophysics Data System (ADS)

    Jiménez, Abigail

    2013-12-01

    Cellular Automata have been used in the literature to describe seismicity. We first historically introduce Cellular Automata and provide some important definitions. Then we proceed to review the most important models, most of them being variations of the spring-block model proposed by Burridge and Knopoff, and describe the most important results obtained from them. We discuss the relation with criticality and also describe some models that try to reproduce real data.
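Most of the reviewed models descend from the Burridge-Knopoff spring-block picture; a minimal cellular-automaton version is the Olami-Feder-Christensen (OFC) rule sketched below. Parameter values are illustrative, not taken from any specific study in the review:

```python
import numpy as np

def ofc_avalanche_sizes(n=20, alpha=0.2, steps=200, rng=None):
    """Olami-Feder-Christensen spring-block cellular automaton: drive the
    lattice uniformly until one site reaches the threshold, then topple,
    passing a fraction alpha of the toppling site's stress to each of its
    4 neighbours (open boundaries dissipate). Returns avalanche sizes."""
    rng = rng or np.random.default_rng(1)
    f = rng.uniform(0.0, 1.0, size=(n, n))
    sizes = []
    for _ in range(steps):
        f += 1.0 - f.max()  # uniform drive: bring the maximal site to threshold
        size = 0
        while (over := np.argwhere(f >= 1.0)).size:
            for i, j in over:
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n:
                        f[ni, nj] += alpha * f[i, j]
                f[i, j] = 0.0
        sizes.append(size)
    return sizes
```

For alpha < 0.25 the rule is dissipative, every avalanche terminates, and the avalanche-size statistics are what such models compare against Gutenberg-Richter-type frequency-magnitude laws.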

  16. A two-parameter kinetic model based on a time-dependent activity coefficient accurately describes enzymatic cellulose digestion

    PubMed Central

    Kostylev, Maxim; Wilson, David

    2014-01-01

    Lignocellulosic biomass is a potential source of renewable, low-carbon-footprint liquid fuels. Biomass recalcitrance and enzyme cost are key challenges associated with the large-scale production of cellulosic fuel. Kinetic modeling of enzymatic cellulose digestion has been complicated by the heterogeneous nature of the substrate and by the fact that a true steady state cannot be attained. We present a two-parameter kinetic model based on the Michaelis-Menten scheme (Michaelis L and Menten ML. (1913) Biochem Z 49:333–369), but with a time-dependent activity coefficient analogous to fractal-like kinetics formulated by Kopelman (Kopelman R. (1988) Science 241:1620–1626). We provide a mathematical derivation and experimental support to show that one of the parameters is a total activity coefficient and the other is an intrinsic constant that reflects the ability of the cellulases to overcome substrate recalcitrance. The model is applicable to individual cellulases and their mixtures at low-to-medium enzyme loads. Using biomass degrading enzymes from a cellulolytic bacterium Thermobifida fusca we show that the model can be used for mechanistic studies of enzymatic cellulose digestion. We also demonstrate that it applies to the crude supernatant of the widely studied cellulolytic fungus Trichoderma reesei and can thus be used to compare cellulases from different organisms. The two parameters may serve a similar role to Vmax, KM, and kcat in classical kinetics. A similar approach may be applicable to other enzymes with heterogeneous substrates and where a steady state is not achievable. PMID:23837567
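The flavour of such a model can be sketched by inserting Kopelman's fractal-like time dependence, k(t) = k * t^(-h), into a Michaelis-Menten-type rate law and integrating numerically. This is an illustrative caricature with made-up parameter values, not the authors' two-parameter formulation or fitted constants:

```python
def digest(h, k=1.0, K=1.0, E=0.1, S0=10.0, t_end=50.0, n=5000):
    """Forward-Euler integration of Michaelis-Menten-type digestion with a
    fractal-like, time-dependent rate coefficient k(t) = k * t**(-h)
    (Kopelman). h = 0 recovers the classical constant-coefficient scheme.
    Returns total product formed by t_end."""
    dt = t_end / n
    S, P = S0, 0.0
    for i in range(1, n + 1):
        t = i * dt
        v = k * t**(-h) * E * S / (K + S)  # instantaneous digestion rate
        S = max(S - v * dt, 0.0)
        P += v * dt
    return P
```

A positive h slows the long-time kinetics relative to the classical case (the effective reaction time grows sublinearly), which is the qualitative signature of substrate recalcitrance the paper's intrinsic constant is meant to capture.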

  17. Venus general atmosphere circulation described by Pioneer

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The predominant weather pattern for Venus is described. Wind directions and wind velocities are given. Possible driving forces of the winds are presented and include solar heating, planetary rotation, and the greenhouse effect.

  18. 78 FR 34604 - Submitting Complete and Accurate Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  19. Describing content in middle school science curricula

    NASA Astrophysics Data System (ADS)

    Schwarz-Ballard, Jennifer A.

As researchers and designers, we intuitively recognize differences between curricula and describe them in terms of design strategy: project-based, laboratory-based, modular, traditional, and textbook, among others. We assume that practitioners recognize the differences in how each requires that students use knowledge; however, these intuitive differences have not been captured or systematically described by the existing languages for describing learning goals. In this dissertation I argue that we need new ways of capturing relationships among elements of content, and propose a theory that describes some of the important differences in how students reason in differently designed curricula and activities. Educational researchers and curriculum designers have taken a variety of approaches to laying out learning goals for science. Through an analysis of existing descriptions of learning goals I argue that to describe differences in the understanding students come away with, they need to (1) be specific about the form of knowledge, (2) incorporate both the processes through which knowledge is used and its form, and (3) capture content development across a curriculum. To show the value of inquiry curricula, learning goals need to incorporate distinctions among the variety of ways we ask students to use knowledge. Here I propose the Epistemic Structures Framework as one way to describe differences in students' reasoning that are not captured by existing descriptions of learning goals. The usefulness of the Epistemic Structures Framework is demonstrated in the four curriculum case study examples in Part II of this work. The curricula in the case studies represent a range of content coverage, curriculum structure, and design rationale. They serve both to illustrate the Epistemic Structures analysis process and to make the case that it does in fact describe learning goals in a way that captures important differences in students' reasoning in differently designed curricula.

  20. Audio-Described Educational Materials: Ugandan Teachers' Experiences

    ERIC Educational Resources Information Center

    Wormnaes, Siri; Sellaeg, Nina

    2013-01-01

    This article describes and discusses a qualitative, descriptive, and exploratory study of how 12 visually impaired teachers in Uganda experienced audio-described educational video material for teachers and student teachers. The study is based upon interviews with these teachers and observations while they were using the material either…

  1. Sensorimotor Interference When Reasoning About Described Environments

    NASA Astrophysics Data System (ADS)

    Avraamides, Marios N.; Kyranidou, Melina-Nicole

The influence of sensorimotor interference was examined in two experiments that compared pointing with iconic arrows and verbal responding in a task that entailed locating target objects from imagined perspectives. Participants studied text narratives describing objects at locations around them in a remote environment and then responded to targets from memory. Results revealed only minor differences between the two response modes, suggesting that bodily cues do not exert severe detrimental interference on spatial reasoning from imagined perspectives when non-immediate described environments are used. The implications of the findings are discussed.

  2. Recently described neoplasms of the sinonasal tract.

    PubMed

    Bishop, Justin A

    2016-03-01

    Surgical pathology of the sinonasal region (i.e., nasal cavity and the paranasal sinuses) is notoriously difficult, due in part to the remarkable diversity of neoplasms that may be encountered in this area. In addition, a number of neoplasms have been only recently described in the sinonasal tract, further compounding the difficulty for pathologists who are not yet familiar with them. This manuscript will review the clinicopathologic features of some of the recently described sinonasal tumor types: NUT midline carcinoma, HPV-related carcinoma with adenoid cystic-like features, SMARCB1 (INI-1) deficient sinonasal carcinoma, biphenotypic sinonasal sarcoma, and adamantinoma-like Ewing family tumor. PMID:26776744

  3. USING TRACERS TO DESCRIBE NAPL HETEROGENEITY

    EPA Science Inventory

Tracers are frequently used to estimate both the average travel time for water flow through the tracer swept volume and NAPL saturation. The same data can be used to develop a statistical distribution describing the hydraulic conductivity in the swept volume and a possible distri...

  4. Describing Technological Paradigm Transitions: A Methodological Exploration.

    ERIC Educational Resources Information Center

    Wallace, Danny P.; Van Fleet, Connie

    1997-01-01

    Presents a humorous treatment of the "sessio taurino" (or humanistic inquiry) technique for describing changes in technological models. The fundamental tool of "sessio taurino" is a loosely-structured event known as the session, which is of indeterminate length, involves a flexible number of participants, and utilizes a preundetermined set of…

  5. Is the Water Heating Curve as Described?

    ERIC Educational Resources Information Center

    Riveros, H. G.; Oliva, A. I.

    2008-01-01

    We analysed the heating curve of water which is described in textbooks. An experiment combined with some simple heat transfer calculations is discussed. The theoretical behaviour can be altered by changing the conditions under which the experiment is modelled. By identifying and controlling the different parameters involved during the heating…

  6. How Digital Native Learners Describe Themselves

    ERIC Educational Resources Information Center

    Thompson, Penny

    2015-01-01

    Eight university students from the "digital native" generation were interviewed about the connections they saw between technology use and learning, and also their reactions to the popular press claims about their generation. Themes that emerged from the interviews were coded to show patterns in how digital natives describe themselves.…

  7. CANDLE syndrome: a recently described autoinflammatory syndrome.

    PubMed

Tüfekçi, Özlem; Bengoa, Şebnem Yilmaz; Karapinar, Tuba Hilkay; Ataseven, Eda Büke; İrken, Gülersu; Ören, Hale

    2015-05-01

CANDLE syndrome (chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature) is a recently described autoinflammatory syndrome characterized by early onset, recurrent fever, skin lesions, and multisystemic inflammatory manifestations. Most of the patients have been shown to have a mutation in the PSMB8 gene. Herein, we report a 2-year-old patient with early-onset recurrent fever, atypical facies, widespread skin lesions, generalized lymphadenopathy, hepatosplenomegaly, joint contractures, hypertriglyceridemia, lipodystrophy, and autoimmune hemolytic anemia. Clinical features together with the skin biopsy findings were consistent with CANDLE syndrome. The pathogenesis and treatment of this syndrome are not yet fully understood. Increased awareness of this recently described syndrome may lead to recognition of new cases and a better understanding of its pathogenesis, which in turn may help in the development of an effective treatment. PMID:25036278

  8. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer then provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which at some wavelengths contributes substantially to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
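The last step of such a system, turning per-layer absorption into a total opacity and a sky brightness contribution, can be sketched as simple plane-parallel, non-scattering radiative transfer. The layer values below are illustrative, not forecast-model output:

```python
import numpy as np

def zenith_opacity_and_tb(layer_taus, layer_temps):
    """Total zenith opacity and atmospheric brightness temperature for a
    ground-based receiver. Layers are ordered nearest-first; each layer
    emits T * (1 - exp(-tau)), attenuated by the opacity of all layers
    between it and the receiver (Rayleigh-Jeans, no scattering)."""
    taus = np.asarray(layer_taus, dtype=float)
    temps = np.asarray(layer_temps, dtype=float)
    below = np.concatenate(([0.0], np.cumsum(taus)[:-1]))  # opacity in front of each layer
    tb = np.sum(temps * (1.0 - np.exp(-taus)) * np.exp(-below))
    return taus.sum(), tb
```

In the optically thin limit this reduces to Tb ~ sum(T_i * tau_i), while an opaque atmosphere radiates at its physical temperature; both limits are why opacity errors feed directly into Tsys and calibration.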

  9. LiveDescribe: Can Amateur Describers Create High-Quality Audio Description?

    ERIC Educational Resources Information Center

    Branje, Carmen J.; Fels, Deborah I.

    2012-01-01

    Introduction: The study presented here evaluated the usability of the audio description software LiveDescribe and explored the acceptance rates of audio description created by amateur describers who used LiveDescribe to facilitate the creation of their descriptions. Methods: Twelve amateur describers with little or no previous experience with…

  10. A Method for Describing Preschoolers' Activity Preferences

    ERIC Educational Resources Information Center

    Hanley, Gregory P.; Cammilleri, Anthony P.; Tiger, Jeffrey H.; Ingvarsson, Einar T.

    2007-01-01

    We designed a series of analyses to develop a measurement system capable of simultaneously recording the free-play patterns of 20 children in a preschool classroom. Study 1 determined the intermittency with which the location and engagement of each child could be momentarily observed before the accuracy of the measurement was compromised. Results…

  11. Is an eclipse described in the Odyssey?

    PubMed Central

    Baikouzis, Constantino; Magnasco, Marcelo O.

    2008-01-01

    Plutarch and Heraclitus believed a certain passage in the 20th book of the Odyssey (“Theoclymenus's prophecy”) to be a poetic description of a total solar eclipse. In the late 1920s, Schoch and Neugebauer computed that the solar eclipse of 16 April 1178 B.C.E. was total over the Ionian Islands and was the only suitable eclipse in more than a century to agree with classical estimates of the decade-earlier sack of Troy around 1192–1184 B.C.E. However, much skepticism remains about whether the verses refer to this, or any, eclipse. To contribute to the issue independently of the disputed eclipse reference, we analyze other astronomical references in the Epic, without assuming the existence of an eclipse, and search for dates matching the astronomical phenomena we believe they describe. We use three overt astronomical references in the epic: to Boötes and the Pleiades, Venus, and the New Moon; we supplement them with a conjectural identification of Hermes's trip to Ogygia as relating to the motion of planet Mercury. Performing an exhaustive search of all possible dates in the span 1250–1115 B.C., we looked to match these phenomena in the order and manner that the text describes. In that period, a single date closely matches our references: 16 April 1178 B.C.E. We speculate that these references, plus the disputed eclipse reference, may refer to that specific eclipse. PMID:18577587
  13. Describing Story Evolution from Dynamic Information Streams

    SciTech Connect

    Rose, Stuart J.; Butner, R. Scott; Cowley, Wendy E.; Gregory, Michelle L.; Walker, Julia

    2009-10-12

Sources of streaming information, such as news syndicates, publish information continuously. Information portals and news aggregators list the latest information from around the world, enabling information consumers to easily identify events in the past 24 hours. The volume and velocity of these streams cause information from prior days to quickly vanish despite its utility in providing an informative context for interpreting new information. Few capabilities exist to support an individual attempting to identify or understand trends and changes in streaming information over time. The burden of retaining prior information and integrating it with the new is left to the skills, determination, and discipline of each individual. In this paper we present a visual analytics system for linking essential content from information streams over time into dynamic stories that develop and change over multiple days. We describe particular challenges to the analysis of streaming information and explore visual representations for showing story change and evolution over time.

  14. Does Guru Granth Sahib describe depression?

    PubMed Central

    Kalra, Gurvinder; Bhui, Kamaldeep; Bhugra, Dinesh

    2013-01-01

Sikhism is a relatively young religion, with Guru Granth Sahib as its key religious text. This text describes emotions in everyday life, such as happiness, sadness, anger, and hatred, and also more serious mental health issues such as depression and psychosis. There are references to the causation of these emotional disturbances and also ways to get out of them. We studied both the Gurumukhi version and the English translation of the Guru Granth Sahib to understand what it had to say about depression, its phenomenology, and religious prescriptions for recovery. We discuss these descriptions in this paper and interpret their meaning within the context of clinical depression. Such knowledge is important, as explicit descriptions of depression and sadness can help encourage culturally appropriate assessment and treatment, as well as promote public health through education. PMID:23858254

  15. Stimulated recall interviews for describing pragmatic epistemology

    NASA Astrophysics Data System (ADS)

    Shubert, Christopher W.; Meredith, Dawn C.

    2015-12-01

    Students' epistemologies affect how and what they learn: do they believe physics is a list of equations, or a coherent and sensible description of the physical world? In order to study these epistemologies as part of curricular assessment, we adopt the resources framework, which posits that students have many productive epistemological resources that can be brought to bear as they learn physics. In previous studies, these epistemologies have been either inferred from behavior in learning contexts or probed through surveys or interviews outside of the learning context. We argue that stimulated recall interviews provide a contextually and interpretively valid method to access students' epistemologies that complement existing methods. We develop a stimulated recall interview methodology to assess a curricular intervention and find evidence that epistemological resources aptly describe student epistemologies.

  16. Simplified stock markets described by number operators

    NASA Astrophysics Data System (ADS)

    Bagarello, F.

    2009-06-01

In this paper we continue our systematic analysis of the operatorial approach previously proposed in an economic context and discuss a mixed toy model of a simplified stock market, i.e. a model in which the price of the shares is given as an input. We deduce the time evolution of the portfolio of the various traders of the market, as well as of other observable quantities. As in a previous paper, we solve the equations of motion by means of a fixed-point-like approximation.

  17. Thermodynamic model to describe miscibility in complex fluid systems

    SciTech Connect

    Guerrero, M.I.

    1982-01-01

    In the basic studies of tertiary oil recovery, it is necessary to describe the phase diagrams of mixtures of hydrocarbons, surfactants and brine. It has been observed that certain features of those phase diagrams, such as the appearance of 3-phase regions, can be correlated to ultra-low interfacial tensions. In this work, a simple thermodynamic model is described. The phase diagram obtained is qualitatively identical to that of real, more complex systems. 13 references.

  18. Describing the chemical character of a magma

    NASA Astrophysics Data System (ADS)

    Duley, Soma; Vigneresse, Jean-Louis; Chattaraj, Pratim K.

    2010-05-01

We introduce the concepts of hard-soft acid-base (HSAB) theory and derive parameters to characterize a magma that consists of a solid rock, a melt, or its exsolved gaseous phase. Those parameters are the electronegativity, hardness, electrophilicity, polarisability and optical basicity. They determine the chemical reactivity of each component individually, or its equivalent in the case of a complex system of elements or oxides. This results from equalization methods or from direct computation through density functional theory (DFT). Those global parameters help in characterizing magma, and provide insights into the reactivity of the melt or its fluid phase when in contact with another magma, or when considering the affinity of each component for metals. In particular, the description leads to a better understanding of the mechanisms that control metal segregation and transportation during igneous activity. The trends observed during magma evolution, whether along a mafic or a felsic trend, are also captured by these parameters and can be interpreted as approaching greater stability. Nevertheless, the trend for felsic magma occurs at constant electrophilicity toward a silica pole of great hardness. Conversely, mafic magmas evolve at a constant hardness and decreasing electrophilicity.
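One of the equalization methods alluded to, Sanderson's geometric-mean principle, is easy to state in code: the equalized electronegativity of a compound is the geometric mean of its constituents' values. The numbers in the test are generic illustrations on a Mulliken-type scale, not the magma-specific parameters derived in the paper:

```python
import numpy as np

def equalized_electronegativity(chis, counts):
    """Sanderson-style geometric-mean electronegativity equalization:
    chi_eq = (prod chi_i ** n_i) ** (1 / sum n_i), computed in log space
    for numerical stability."""
    chis = np.asarray(chis, dtype=float)
    counts = np.asarray(counts, dtype=float)
    return float(np.exp(np.sum(counts * np.log(chis)) / counts.sum()))
```

The equalized value always lies between the smallest and largest constituent electronegativities, which is what makes it usable as a single reactivity descriptor for a complex oxide system.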

  19. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, that form is not convenient for computer-based calculations. The equations developed here provide improved correlations of physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.

  20. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  1. Plans should abstractly describe intended behavior

    SciTech Connect

    Pfleger, K.; Hayes-Roth, B.

    1996-12-31

    Planning is the process of formulating a potential course of action. How courses of action (plans) produced by a planning module are represented and how they are used by execution-oriented modules of a complex agent to influence or dictate behavior are critical architectural issues. In contrast to the traditional model of plans as executable programs that dictate precise behaviors, we claim that autonomous agents inhabiting dynamic, unpredictable environments can make better use of plans that only abstractly describe their intended behavior. Such plans only influence or constrain behavior, rather than dictating it. This idea has been discussed in a variety of contexts, but it is seldom incorporated into working complex agents. Experiments involving instantiations of our Adaptive Intelligent Systems architecture in a variety of domains have demonstrated the generality and usefulness of the approach, even with our currently simple plan representation and mechanisms for plan following. The behavioral benefits include (1) robust improvisation of goal-directed behavior in response to dynamic situations, (2) ready exploitation of dynamically acquired knowledge or behavioral capabilities, and (3) adaptation based on dynamic aspects of coordinating diverse behaviors to achieve multiple goals. In addition to these run-time advantages, the approach has useful implications for the design and configuration of agents. Indeed, the core ideas of the approach are natural extensions of fundamental ideas in software engineering.

  2. Canada issues booklet describing acid rain

    NASA Astrophysics Data System (ADS)

A booklet recently released by Environment Canada describes acid rain in terms easily understood by the general public. Although Acid Rain — The Facts tends somewhat to give the Canadian side of this intercountry controversy, it nevertheless presents some very interesting, simple statistics of interest to people in either the U.S. or Canada. Copies of the booklet can be obtained from Inquiry Environment Canada, Ottawa, Ontario K1A OH3, Canada, tel. 613-997-2800. The booklet points out that acid rain is caused by emissions of sulfur dioxide (SO2) and nitrogen oxides (NOx). Once released into the atmosphere, these substances can be carried long distances by prevailing winds and return to Earth as acidic rain, snow, fog, or dust. The main sources of SO2 emissions in North America are coal-fired power generating stations and nonferrous ore smelters; the main sources of NOx emissions are vehicles and fuel combustion. From economic and environmental viewpoints, Canada considers acid rain one of the most serious problems presently facing the country: it has raised the acidity of more than 20% of Canada's 300,000 lakes to the point that aquatic life is depleted; it is increasing the acidity of soil water and shallow groundwater, causing declines in forest growth and waterfowl populations; and it is eating away at buildings and monuments. Acid rain is endangering fisheries, tourism, agriculture, and forest resources across 2.6 million km2 (one million square miles) of eastern Canada, resources that account for about 8% of Canada's gross national product.

  3. Using Metaphorical Models for Describing Glaciers

    NASA Astrophysics Data System (ADS)

    Felzmann, Dirk

    2014-11-01

To date, there has been little conceptual-change research regarding conceptions about glaciers. This study used the theoretical background of embodied cognition to reconstruct different metaphorical concepts with respect to the structure of a glacier. Applying the Model of Educational Reconstruction, the conceptions of students and scientists regarding glaciers were analysed. Students' conceptions were the result of teaching experiments whereby students received instruction about glaciers and ice ages and were then interviewed about their understandings. Scientists' conceptions were based on analyses of textbooks. Accordingly, four conceptual metaphors regarding the concept of a glacier were reconstructed: a glacier is a body of ice; a glacier is a container; a glacier is a reflexive body; and a glacier is a flow. Students and scientists differ with respect to the contexts in which they apply each conceptual metaphor. It was observed, however, that students vacillate among the various conceptual metaphors as they solve tasks. While the subject context of a task activates a specific conceptual metaphor, within the discussion about the solution the students were able to adapt their conception by changing the conceptual metaphor. Educational strategies for teaching students about glaciers therefore require specific language to activate the appropriate conceptual metaphors, and explicit reflection regarding the various conceptual metaphors.

  4. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope-steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical in smooth regions, and yield high resolution at discontinuities.
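
The role the median function plays in simplifying limiter coding can be illustrated with a small sketch. The limiter below is a generic MUSCL-style slope constraint written via the identity minmod(x, y) = median(0, x, y); it is an illustration of the idea, not Huynh's exact constraint.

```python
def median3(a, b, c):
    """Median of three numbers."""
    return max(min(a, b), min(c, max(a, b)))

def minmod(x, y):
    # minmod via the median identity: median(0, x, y) returns the
    # smaller-magnitude argument when x and y share a sign, else 0.
    return median3(0.0, x, y)

def limited_slope(u_left, u_center, u_right):
    """Monotonicity-constrained slope for a piecewise-linear
    reconstruction from three neighboring cell averages."""
    return minmod(u_center - u_left, u_right - u_center)
```

The median form avoids explicit sign tests: the zero argument automatically clips the slope to zero at extrema, where the one-sided differences disagree in sign.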

  5. Estimating Percent of Time and Rate Via Direct Observation: A Suggested Observational Procedure and Format.

    ERIC Educational Resources Information Center

    Saudargas, Richard A.; Lentz, Frances E., Jr.

    1986-01-01

    Using the development of a State Event Observation System as an example, the decision rules and procedures for constructing standardized multiple-behavior observational systems that provide accurate, reliable data for school-based assessment, intervention, and research are described. Reliability and validity data from the SECOS are provided.…

  6. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex, which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740]. PMID:22707921

  7. Accurate wavelength calibration method for flat-field grating spectrometers.

    PubMed

    Du, Xuewei; Li, Chaoyang; Xu, Zhe; Wang, Qiuping

    2011-09-01

    A portable spectrometer prototype is built to study wavelength calibration for flat-field grating spectrometers. An accurate calibration method called parameter fitting is presented. Both optical and structural parameters of the spectrometer are included in the wavelength calibration model, which accurately describes the relationship between wavelength and pixel position. Along with higher calibration accuracy, the proposed calibration method can provide information about errors in the installation of the optical components, which will be helpful for spectrometer alignment. PMID:21929865
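
The conventional baseline that such parameter-fitting methods improve upon is a purely empirical polynomial map from pixel position to wavelength. The sketch below uses a synthetic cubic map with hypothetical coefficients (not the paper's physical optical/structural model) and recovers it by least squares.

```python
import numpy as np

# Hypothetical "true" pixel-to-wavelength map in a normalized pixel
# coordinate u = pixel / 2048 (normalization keeps the fit well conditioned).
true_map = np.poly1d([5.0, -8.0, 900.0, 250.0])   # wavelength(u), nm

pixels = np.linspace(50, 2000, 12)   # observed centroids of reference lines
u = pixels / 2048.0
wavelengths = true_map(u)            # the matching known line wavelengths

# Least-squares cubic fit: the empirical calibration a physical
# parameter-fitting model would replace.
coeffs = np.polyfit(u, wavelengths, deg=3)
recovered = np.poly1d(coeffs)
```

Unlike the physical model, a polynomial fit gives no diagnostic information about component misalignment; it only interpolates.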

  8. Describing dengue epidemics: Insights from simple mechanistic models

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2012-09-01

    We present a set of nested models to be applied to dengue fever epidemiology. We perform a qualitative study in order to show how much complexity we really need to add to epidemiological models to be able to describe the fluctuations observed in empirical dengue hemorrhagic fever incidence data, offering a promising perspective on the inference of parameter values from dengue case notifications.
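
The simplest building block from which such nested models are assembled is the SIR system. The sketch below (generic textbook form with assumed parameter values, not the authors' model hierarchy) integrates it with forward-Euler steps.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the normalized SIR equations:
    ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i, dr/dt = gamma*i."""
    new_inf = beta * s * i * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate(beta=0.5, gamma=0.1, days=200.0, dt=0.1):
    # Start with 1% infected in a normalized population.
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r
```

Nested dengue models extend this core with extra compartments (serotypes, temporary cross-immunity, seasonality); the qualitative question in the paper is how many of those extensions the incidence data actually require.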

  9. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca. 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
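
For the small-separation regime where the Fourier-domain evaluation is accurate, the propagation can be sketched with the standard angular-spectrum transfer function. This is the generic textbook method, not the authors' Gaussian-sum algorithm for the intermediate regime.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled scalar field u0 (N x N, sample spacing dx)
    a distance z between parallel planes via the Fourier domain."""
    n = u0.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)                  # cycles per unit length
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = k**2 - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
    kz = np.sqrt(arg.astype(complex))             # evanescent parts decay
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))
```

A uniform input field is a plane wave, so it should emerge with unit magnitude and only a phase factor exp(ikz); that invariant makes a convenient sanity check.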

  10. The remarkable ability of turbulence model equations to describe transition

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper demonstrates how well the k-omega turbulence model describes the nonlinear growth of flow instabilities from laminar flow into the turbulent flow regime. Viscous modifications are proposed for the k-omega model that yield close agreement with measurements and with Direct Numerical Simulation results for channel and pipe flow. These modifications permit prediction of subtle sublayer details such as maximum dissipation at the surface, k varying as y^2 as y approaches 0, and the sharp peak value of k near the surface. With two transition-specific closure coefficients, the model equations accurately predict transition for an incompressible flat-plate boundary layer. The analysis also shows why the k-epsilon model is so difficult to use for predicting transition.

  11. Nomenclature proposal to describe vocal fold motion impairment.

    PubMed

    Rosen, Clark A; Mau, Ted; Remacle, Marc; Hess, Markus; Eckel, Hans E; Young, VyVy N; Hantzakos, Anastasios; Yung, Katherine C; Dikkers, Frederik G

    2016-08-01

    The terms used to describe vocal fold motion impairment are confusing and not standardized. This results in a failure to communicate accurately and in major limitations in interpreting research studies involving vocal fold impairment. We propose standard nomenclature for reporting vocal fold impairment. The overarching terms of vocal fold immobility and hypomobility are rigorously defined, including assessment techniques and inclusion and exclusion criteria for determining vocal fold immobility and hypomobility. In addition, criteria for use of the following terms are outlined in detail: vocal fold paralysis, vocal fold paresis, vocal fold immobility/hypomobility associated with mechanical impairment of the crico-arytenoid joint, and vocal fold immobility/hypomobility related to laryngeal malignant disease. This represents the first rigorously defined vocal fold motion impairment nomenclature system and provides detailed definitions for the terms vocal fold paralysis and vocal fold paresis. PMID:26036851

  12. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented so as to form part of a telescope control system.
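
The basic relation that lets a static MEMS accelerometer report mount attitude is the textbook recovery of pitch and roll from the measured gravity vector; the sketch below shows that relation, not the authors' calibration pipeline.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (radians) from a static 3-axis accelerometer
    reading of the gravity vector, in units of g. This is the
    feedback-free attitude information a MEMS sensor provides."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

Raw MEMS output is only good to a few degrees; reaching the subarcminute range requires the calibration of offsets, gains and axis misalignments discussed in the paper.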

  13. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
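
The Einstein-timescale argument can be made concrete: the lensing cross-section scales directly with the lens mass, and the timescale is the Einstein radius divided by the relative velocity. The lens-source geometry, velocity and mass below are illustrative assumptions, not survey parameters.

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
C = 299_792_458.0        # m/s
M_SUN = 1.989e30         # kg
PC = 3.0857e16           # m

d_s = 770e3 * PC         # source star in M31 (~770 kpc)
d_l = d_s - 10e3 * PC    # halo lens assumed ~10 kpc in front of it
mass = 0.5 * M_SUN       # assumed lens mass
v_rel = 200e3            # assumed relative velocity, m/s

# Einstein radius: R_E^2 = (4 G M / c^2) * D_l (D_s - D_l) / D_s,
# so the cross-section pi * R_E^2 is directly proportional to the mass.
r_e = math.sqrt(4 * G * mass / C**2 * d_l * (d_s - d_l) / d_s)
t_e_days = r_e / v_rel / 86400.0   # Einstein timescale
```

With these assumed values the timescale comes out at tens of days, which is why measuring it (rather than an effective FWHM timescale) constrains the lens mass.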

  14. TURTLE IN SPACE DESCRIBES NEW HUBBLE IMAGE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    NASA's Hubble Space Telescope has shown us that the shrouds of gas surrounding dying, sunlike stars (called planetary nebulae) come in a variety of strange shapes, from an 'hourglass' to a 'butterfly' to a 'stingray.' With this image of NGC 6210, the Hubble telescope has added another bizarre form to the rogues' gallery of planetary nebulae: a turtle swallowing a seashell. Giving this dying star such a weird name is less of a challenge than trying to figure out how dying stars create these unusual shapes. The larger image shows the entire nebula; the inset picture captures the complicated structure surrounding the dying star. The remarkable features of this nebula are the numerous holes in the inner shells with jets of material streaming from them. These jets produce column-shaped features that are mirrored in the opposite direction. The multiple shells of material ejected by the dying star give this planetary nebula its odd form. In the 'full nebula' image, the brighter central region looks like a 'nautilus shell'; the fainter outer structure (colored red) a 'tortoise.' The dying star is the white dot in the center. Both pictures are composite images based on observations taken Aug. 6, 1997 with the telescope's Wide Field and Planetary Camera 2. Material flung off by this central star is streaming out of holes it punched in the nautilus shell. At least four jets of material can be seen in the 'full nebula' image: a pair near 6 and 12 o'clock and another near 2 and 8 o'clock. In each pair, the jets are directly opposite each other, exemplifying their 'bipolar' nature. The jets are thought to be driven by a 'fast wind' - material propelled by radiation from the hot central star. In the inner 'nautilus' shell, bright rims outline the escape holes created by this 'wind,' such as the one at 2 o'clock. This same 'wind' appears to give rise to the prominent outer jet in the same direction. The hole in the inner shell acts like a hose nozzle, directing the flow of

  16. Spatial-filter models to describe IC lithographic behavior

    NASA Astrophysics Data System (ADS)

    Stirniman, John P.; Rieger, Michael L.

    1997-07-01

    Proximity correction systems require an accurate, fast way to predict how a pattern configuration will transfer to the wafer. In this paper we present an efficient method for modeling the pattern transfer process based on Dennis Gabor's 'theory of communication'. This method is based on a 'convolution form' in which any 2D transfer process can be modeled with a set of linear, 2D spatial filters, even when the transfer process is non-linear. We will show that this form is a general case from which other well-known process simulation models can be derived. Furthermore, we will demonstrate that the convolution form can be used to model observed phenomena even when the physical mechanisms involved are unknown.
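
A minimal instance of such a convolution form is the familiar sum-of-coherent-systems expansion, in which the transferred intensity is a weighted sum of squared convolutions of the mask pattern with 2D kernels. The kernels and weights below are placeholders; in practice they are derived from the process model or fitted to measured data.

```python
import numpy as np

def convolution_form(pattern, kernels, weights):
    """Image model as a weighted sum of |pattern (*) kernel|^2 terms,
    evaluated with FFT-based circular convolution."""
    n = pattern.shape[0]
    spectrum = np.fft.fft2(pattern)
    image = np.zeros((n, n))
    for kern, w in zip(kernels, weights):
        kern_f = np.fft.fft2(kern, s=(n, n))   # zero-pad kernel to grid
        image += w * np.abs(np.fft.ifft2(spectrum * kern_f)) ** 2
    return image
```

The squaring makes each term nonlinear in the pattern, which is how a fixed set of linear spatial filters can still represent a non-linear transfer process.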

  17. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization that should lead to high-accuracy horizon sensing, minimally degraded by spatial or temporal variations in sensing attitude, from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent plant temperatures.
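
The quoted figures are mutually consistent: from 500 km altitude the slant range to the horizon is d = sqrt(2Rh + h^2), and a 0.7 km location error over that range subtends roughly 0.015 deg. A quick check (mean Earth radius assumed):

```python
import math

R_EARTH = 6371.0                        # mean Earth radius, km (assumed)
h = 500.0                               # orbit altitude, km
d = math.sqrt(2 * R_EARTH * h + h**2)   # slant range to horizon, ~2573 km
angle_deg = math.degrees(0.7 / d)       # angular size of a 0.7 km error
```

The result is about 0.0156 deg, matching the 0.015 deg quoted in the abstract.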

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous discrepant distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand-design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal, we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high-resolution images of nearby galaxies.
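
The quoted 0.1 mag accuracy in distance modulus corresponds to roughly 5% in distance, since d = 10^((mu + 5)/5) pc:

```python
import math

def distance_mpc(mu):
    """Distance in Mpc from a distance modulus mu = m - M."""
    return 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6

# Fractional distance error from a modulus error dmu:
# d ~ 10**(mu/5), so delta_d / d = ln(10) / 5 * dmu ~ 4.6% for dmu = 0.1.
frac_err = math.log(10.0) / 5.0 * 0.1
```

For example, mu = 30 corresponds to 10 Mpc, the outer edge of the proposed sample.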

  4. Towards an accurate bioimpedance identification

    NASA Astrophysics Data System (ADS)

    Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.

    2013-04-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the superior accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data, coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table of the relative standard errors on the estimated parameters is provided to show which system identification framework should be used.
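
The single-dispersion Cole model the spectra are fitted to can be written compactly; the parameter values in the check below are placeholders, not fitted tissue values.

```python
import numpy as np

def cole_impedance(freq_hz, r_inf, r0, tau, alpha):
    """Single-dispersion Cole impedance model:
    Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha)."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return r_inf + (r0 - r_inf) / (1.0 + (1j * w * tau) ** alpha)
```

The limits provide an easy sanity check: Z tends to R0 at low frequency and to R_inf at high frequency, with the exponent alpha (0 < alpha <= 1) controlling the depression of the impedance arc.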

  5. Describing variations of the Fisher-matrix across parameter space

    NASA Astrophysics Data System (ADS)

    Schäfer, Björn Malte; Reischke, Robert

    2016-08-01

    Forecasts in cosmology, both with Monte Carlo Markov-chain methods and with the Fisher-matrix formalism, depend on the choice of the fiducial model because both the signal strength of any observable and the model non-linearities linking observables to cosmological parameters vary in the general case. In this paper we propose a method for extrapolating Fisher forecasts across the space of cosmological parameters by constructing a suitable basis. We demonstrate the validity of our method with constraints on a standard dark energy model extrapolated from a ΛCDM-model, as can be expected from two-bin weak lensing tomography with a Euclid-like survey, in the parameter pairs (Ωm, σ8), (Ωm, w0) and (w0, wa). Our numerical results include very accurate extrapolations across a wide range of cosmological parameters in terms of shape, size and orientation of the parameter likelihood, and a decomposition of the change of the likelihood contours into modes, which are straightforward to interpret in a geometrical way. We find that in particular the variation of the dark energy figure of merit is well captured by our formalism.
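
The geometry of a likelihood contour follows directly from the Fisher matrix: the parameter covariance is its inverse, and the 1-sigma ellipse's semi-axes and orientation come from the eigendecomposition of that covariance (the figure of merit scales as sqrt(det F)). A minimal 2D sketch:

```python
import numpy as np

def fisher_ellipse(fisher):
    """Semi-axes (major, minor) and orientation of the 1-sigma ellipse
    for a 2x2 Fisher matrix: covariance = inverse Fisher, and the axes
    are the square roots of its eigenvalues."""
    cov = np.linalg.inv(fisher)
    evals, evecs = np.linalg.eigh(cov)               # ascending order
    major, minor = np.sqrt(evals[::-1])
    angle = np.arctan2(evecs[1, -1], evecs[0, -1])   # major-axis direction
    return major, minor, angle
```

Extrapolating a forecast to a new fiducial model amounts to predicting how these three quantities change across parameter space, which is what the proposed basis decomposition captures.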

  6. A Self-Instructional Device for Conditioning Accurate Prosody.

    ERIC Educational Resources Information Center

    Buiten, Roger; Lane, Harlan

    1965-01-01

    A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…

  7. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    SciTech Connect

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient 'dry-bucket' in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: 'The KFM, A Homemade Yet Accurate and Dependable Fallout Meter' was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify the

  8. Development and testing of conceptual models describing plutonium subsurface transport (Invited)

    NASA Astrophysics Data System (ADS)

    Powell, B. A.

    2009-12-01

    A conceptual model describing the effects of plutonium redox cycling on subsurface transport was developed; this model accurately described the downward movement of plutonium in a series of field lysimeters. Research related to these lysimeters has continued using a combination of long-term field observations, laboratory measurements, and computer modeling, which together have provided a unique and detailed conceptual and quantitative model describing plutonium subsurface transport. Field and laboratory experiments indicate that biogeochemical processes such as ligand complexation and redox cycling profoundly influence plutonium subsurface transport. This presentation will focus on laboratory efforts to develop these conceptual models and provide a quantitative framework for reactive transport modeling efforts.

  9. Diffusion model to describe osteogenesis within a porous titanium scaffold.

    PubMed

    Schmitt, M; Allena, R; Schouman, T; Frasca, S; Collombet, J M; Holy, X; Rouch, P

    2016-01-01

    In this study, we develop a two-dimensional finite element model, derived from an animal experiment, that simulates osteogenesis within a porous titanium scaffold implanted in a ewe's hemi-mandible over 12 weeks. The cell activity is described through diffusion equations and regulated by the stress state of the structure. We compare our model to (i) histological observations and (ii) experimental data obtained from a mechanical test performed on the sacrificed animal. We show that our mechano-biological approach provides consistent numerical results and constitutes a useful tool to predict osteogenesis patterns. PMID:25573031
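
The diffusion component of such a model can be sketched with an explicit finite-difference update. This is a generic 2D diffusion step with zero-flux boundaries, not the authors' coupled, stress-regulated formulation.

```python
import numpy as np

def diffuse_step(conc, d_coef, dx, dt):
    """One explicit step of dc/dt = D * laplacian(c) on a 2-D grid,
    with zero-flux (Neumann) boundaries imposed via edge padding."""
    padded = np.pad(conc, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1]
           + padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * conc) / dx**2
    return conc + d_coef * dt * lap
```

Zero-flux boundaries conserve the total amount of the diffusing quantity exactly, and the explicit scheme is stable for D*dt/dx^2 <= 1/4.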

  10. Quark and lepton mass matrices described by charged lepton masses

    NASA Astrophysics Data System (ADS)

    Koide, Yoshio; Nishiura, Hiroyuki

    2016-06-01

    Recently, we proposed a unified mass matrix model for quarks and leptons in which the mass ratios and mixings of the quarks and neutrinos are described using only the observed charged lepton mass values as family-number-dependent parameters together with six family-number-independent free parameters. In spite of having so few parameters, the model gives remarkable agreement with observed data (i.e. the Cabibbo-Kobayashi-Maskawa (CKM) mixing, the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing, and the mass ratios). Taking this phenomenological success seriously, we give a detailed formulation of the so-called Yukawaon model from a theoretical aspect, especially for the construction of superpotentials and the R charge assignments of fields. The model is considerably modified from the previous one, while its phenomenological success is kept unchanged.

  11. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
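
The link between the lineshape and the Boltzmann constant is Doppler-broadening thermometry: the fractional Doppler width of a line satisfies sigma_nu / nu_0 = sqrt(kB*T / (m*c^2)). The forward check below uses the exact SI value of kB and an assumed vapour temperature, purely to illustrate the scale of the effect.

```python
import math

C = 299_792_458.0                 # speed of light, m/s
KB = 1.380649e-23                 # Boltzmann constant, J/K (exact SI value)
M_CS = 132.905 * 1.660539e-27     # Cs atomic mass, kg
T = 296.0                         # assumed vapour temperature, K

# Fractional Doppler width of an absorption line:
frac_width = math.sqrt(KB * T / (M_CS * C**2))

# Inverting the relation: a measured width plus a known temperature
# determines kB.
kb_recovered = frac_width**2 * M_CS * C**2 / T
```

The fractional width is a few parts in 10^7, which is why a shot-noise-limited lineshape measurement can constrain kB at the p.p.m. level.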

  12. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback on the mount position is provided by electronic, optical or magneto-mechanical systems, or via real-time astrometric solutions based on the acquired images. MEMS-based systems, in contrast, are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, i.e., well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, whose raw output is accurate only to a few degrees. We show what kinds of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve this accuracy level. We also demonstrate how our implementation can be inserted in a telescope control system. Although the attainable precision is coarser than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of this attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
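
    As an illustrative sketch (not the authors' calibration pipeline), static tilt angles follow directly from the direction of the measured gravity vector, and the spherical constraint mentioned above, |a| = g at rest, provides a quick self-check on calibrated output:

```python
import math

G = 9.81  # local gravitational acceleration, m/s^2 (assumed)

def tilt_angles(ax, ay, az):
    # With the mount at rest the accelerometer senses only gravity,
    # so pitch and roll follow from the measured acceleration direction.
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

def spherical_residual(ax, ay, az):
    # Calibrated static output must lie on a sphere of radius g; the
    # residual is a consistency check on bias/scale calibration.
    return math.sqrt(ax**2 + ay**2 + az**2) - G
```

    Fitting bias and scale factors so that this residual vanishes over many mount orientations is one way such spherical constraints can be exploited.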

  14. Accurate Measurements of the Local Deuterium Abundance from HST Spectra

    NASA Technical Reports Server (NTRS)

    Linsky, Jeffrey L.

    1996-01-01

    An accurate measurement of the primordial value of D/H would provide a critical test of nucleosynthesis models for the early universe and the baryon density. I briefly summarize the ongoing HST observations of the interstellar H and D Lyman-alpha absorption for lines of sight to nearby stars and comment on recent reports of extragalactic D/H measurements.

  15. Can the Non-linear Ballooning Model describe ELMs?

    NASA Astrophysics Data System (ADS)

    Henneberg, S. A.; Cowley, S. C.; Wilson, H. R.

    2015-11-01

    The explosive, filamentary plasma eruptions described by the non-linear ideal MHD ballooning model are tested quantitatively against experimental observations of ELMs in MAST. The equations describing this model were derived by Wilson and Cowley for tokamak-like geometry and comprise two differential equations: the linear ballooning equation, which describes the spatial distribution along the field lines, and the non-linear ballooning mode envelope equation, a two-dimensional, non-linear differential equation that can involve fractional temporal derivatives but is often second-order in time and space. To employ the second equation for a specific geometry one has to evaluate its coefficients, which is non-trivial as it involves field-line averaging of slowly converging functions. We have solved this system for MAST, superimposing the solutions of both differential equations and mapping them onto a MAST plasma. Comparisons with the evolution of ELM filaments in MAST will be reported in order to test the model. The support of the EPSRC for the FCDT (Grant EP/K504178/1), of the Euratom research and training programme 2014-2018 (No 633053) and of the RCUK Energy Programme [grant number EP/I501045] is gratefully acknowledged.

  16. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes, and static displacements. The DEB approximation method was applied to a cantilever beam and the results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate and, in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables, and the approximations may be used to calculate other system response quantities. For example, the approximations for displacements can be used to approximate bending stresses.
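
    A toy illustration of the idea, using a hypothetical power-law response rather than the authors' beam model: if the sensitivity is dR/dx = nR/x, integrating that differential equation yields a closed form that a linear Taylor expansion only approximates:

```python
def deb_approx(r0, x0, n, x):
    # Treat the sensitivity dR/dx = n*R/x as an ODE and integrate it:
    # R(x) = R0 * (x/x0)**n, exact whenever the response is a power law.
    return r0 * (x / x0) ** n

def taylor_approx(r0, x0, n, x):
    # Linear Taylor expansion about x0 built from the same sensitivity.
    return r0 + (n * r0 / x0) * (x - x0)

# 50% perturbation of a cubic response: the DEB form is exact here,
# while the linear Taylor approximation undershoots.
exact = 1.0 * 1.5**3
```

    Structural responses such as beam frequencies and displacements often vary with design variables in just this power-law fashion, which is why the DEB forms outperform linear extrapolation for large perturbations.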

  17. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  18. Observer Use of Standardized Observation Protocols in Consequential Observation Systems

    ERIC Educational Resources Information Center

    Bell, Courtney A.; Yi, Qi; Jones, Nathan D.; Lewis, Jennifer M.; McLeod, Monica; Liu, Shuangshuang

    2014-01-01

    Evidence from a handful of large-scale studies suggests that although observers can be trained to score reliably using observation protocols, there are concerns related to initial training and calibration activities designed to keep observers scoring accurately over time (e.g., Bell et al., 2012; BMGF, 2012). Studies offer little insight into how…

  19. Specific Heat Anomalies in Solids Described by a Multilevel Model

    NASA Astrophysics Data System (ADS)

    Souza, Mariano de; Paupitz, Ricardo; Seridonio, Antonio; Lagos, Roberto E.

    2016-04-01

    In the field of condensed matter physics, specific heat measurements can be considered a pivotal experimental technique for characterizing the fundamental excitations involved in a certain phase transition. Indeed, phase transitions involving spin (de Souza et al. Phys. B Condens. Matter 404, 494 (2009) and Manna et al. Phys. Rev. Lett. 104, 016403 (2010)), charge (Pregelj et al. Phys. Rev. B 82, 144438 (2010)), lattice (phonon) (Jesche et al. Phys. Rev. B 81, 134525 (2010)) and orbital degrees of freedom, the interplay between ferromagnetism and superconductivity (Jesche et al. Phys. Rev. B 86, 020501 (2012)), Schottky-like anomalies in doped compounds (Lagos et al. Phys. C Supercond. 309, 170 (1998)), and electronic levels in finite correlated systems (Macedo and Lagos J. Magn. Magn. Mater. 226, 105 (2001)), among other features, can be captured by means of high-resolution calorimetry. Furthermore, the entropy change associated with a first-order phase transition, no matter its nature, can be obtained directly by integrating C(T)/T over the temperature range of interest. Here, we report on a detailed analysis of the two-peak specific heat anomalies observed in several materials. Employing a simple multilevel model, varying the spacing between the energy levels Δ_i = (E_i - E_0) and the degeneracy g_i of each energy level, we derive the required conditions for the appearance of such anomalies. Our findings indicate that a ratio of Δ_2/Δ_1 ≈ 10 between the energy levels and a high degeneracy of one of the energy levels define the two-peak regime in the specific heat. Our approach accurately matches recent experimental results. Furthermore, using a mean-field approach, we calculate the specific heat of a degenerate Schottky-like system undergoing a ferromagnetic (FM) phase transition. Our results reveal that as the degeneracy is increased the Schottky maximum in the specific heat becomes narrow while the peak
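
    The two-peak condition can be reproduced with a short numerical sketch (the level values below are illustrative, not the paper's fitted parameters): compute C(T) from the canonical partition function for three levels with a spacing ratio near 10 and a highly degenerate upper level:

```python
import numpy as np

def specific_heat(T, energies, degeneracies, kB=1.0):
    # Canonical specific heat of a discrete multilevel system:
    # C = (<E^2> - <E>^2) / (kB*T^2), with weights g_i * exp(-E_i / kB*T).
    T = np.asarray(T, float)
    E = np.asarray(energies, float)
    g = np.asarray(degeneracies, float)
    w = g * np.exp(-np.outer(1.0 / (kB * T), E))   # shape (nT, nlevels)
    Z = w.sum(axis=1)
    e1 = (w * E).sum(axis=1) / Z
    e2 = (w * E**2).sum(axis=1) / Z
    return (e2 - e1**2) / (kB * T**2)

T = np.linspace(0.05, 12.0, 4000)
# Spacings Delta_1 = 1 and Delta_2 = 10, highly degenerate upper level.
C = specific_heat(T, energies=[0.0, 1.0, 10.0], degeneracies=[1, 1, 50])
# Interior local maxima of C(T) -> two distinct Schottky-like peaks.
peaks = np.flatnonzero((C[1:-1] > C[:-2]) & (C[1:-1] > C[2:])) + 1
```

    With these parameters the low-temperature peak is the ordinary Schottky anomaly of the first gap, while the degenerate upper level produces a second, taller anomaly at higher temperature.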

  20. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam and phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight-line work.

  1. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  2. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  3. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than with the single-grid algorithm are possible.

  4. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    NASA Technical Reports Server (NTRS)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements, while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  5. Judgements about the relation between force and trajectory variables in verbally described ballistic projectile motion.

    PubMed

    White, Peter A

    2013-01-01

    How accurate are explicit judgements about familiar forms of object motion, and how are they made? Participants judged the relations between force exerted in kicking a soccer ball and variables that define the trajectory of the ball: launch angle, maximum height attained, and maximum distance reached. Judgements tended to conform to a simple heuristic that judged force tends to increase as maximum height and maximum distance increase, with launch angle not being influential. Support was also found for the converse prediction, that judged maximum height and distance tend to increase as the amount of force described in the kick increases. The observed judgemental tendencies did not resemble the objective relations, in which force is a function of interactions between the trajectory variables. This adds to a body of research indicating that practical knowledge based on experiences of actions on objects is not available to the processes that generate judgements in higher cognition and that such judgements are generated by simple rules that do not capture the objective interactions between the physical variables. PMID:23075337
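
    The objective interaction that the judgements failed to track can be made concrete with the ideal, drag-free projectile formulas; this is a textbook simplification, not the study's materials:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed_for_range(distance, angle_deg):
    # Ideal projectile range: R = v^2 * sin(2*theta) / g, so the speed
    # (a proxy for kick force) needed for a given range depends on the
    # launch angle as well, contrary to the angle-blind heuristic.
    theta = math.radians(angle_deg)
    return math.sqrt(distance * G / math.sin(2.0 * theta))

def max_height(speed, angle_deg):
    # Maximum height: H = (v * sin(theta))^2 / (2g)
    theta = math.radians(angle_deg)
    return (speed * math.sin(theta)) ** 2 / (2.0 * G)
```

    For example, the same 10 m range requires a faster (harder) kick at 30 degrees than at 45 degrees, so "more force" does not map one-to-one onto "more distance" once angle varies.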

  6. Quantization method for describing the motion of celestial systems

    NASA Astrophysics Data System (ADS)

    Christianto, Victor; Smarandache, Florentin

    2015-11-01

    Criticism arises concerning the use of the quantization method for describing the motion of celestial systems, arguing that the method oversimplifies the problem and cannot explain other phenomena, for instance planetary migration. Using a quantization method as Nottale and Schumacher did, one can expect to predict new exoplanets with remarkable results. The ``conventional'' theories explaining planetary migration normally use fluid theory involving diffusion processes. Gibson has shown that these migration phenomena could be described via a Navier-Stokes approach. Kiehn's argument was based on an exact mapping between the Schrodinger equation and the Navier-Stokes equations, while our method may be interpreted as an oversimplification of the real planetary migration process which took place sometime in the past, providing a useful tool for prediction (e.g. other planetoids, which are likely to be observed in the near future, around 113.8 AU and 137.7 AU). Therefore, the quantization method could be seen as merely a ``plausible'' theory. We would like to emphasize that the quantization method does not have to be the true description of reality with regard to celestial phenomena. It may explain some phenomena while lacking an explanation for others.

  7. Asphere, O asphere, how shall we describe thee?

    NASA Astrophysics Data System (ADS)

    Forbes, G. W.; Brophy, C. P.

    2008-09-01

    Two key criteria govern the characterization of nominal shapes for aspheric optical surfaces. An efficient representation describes the spectrum of relevant shapes to the required accuracy by using the fewest decimal digits in the associated coefficients. Also, a representation is more effective if it can, in some way, facilitate other processes - such as optical design, tolerancing, or direct human interpretation. With the development of better tools for their design, metrology, and fabrication, aspheric optics are becoming ever more pervasive. As part of this trend, aspheric departures of up to a thousand microns or more must be characterized at almost nanometre precision. For all but the simplest of shapes, this is not as easy as it might sound. Efficiency is therefore increasingly important. Further, metrology tools continue to be one of the weaker links in the cost-effective production of aspheric optics. Interferometry particularly struggles to deal with steep slopes in aspheric departure. Such observations motivated the ideas described in what follows for modifying the conventional description of rotationally symmetric aspheres to use orthogonal bases that boost efficiency. The new representations can facilitate surface tolerancing as well as the design of aspheres with cost-effective metrology options. These ideas enable the description of aspheric shapes in terms of decompositions that not only deliver improved efficiency and effectiveness, but that are also shown to admit direct interpretations. While it's neither poetry nor a cure-all, an old blight can be relieved.

  8. Accurate energy levels for singly ionized platinum (Pt II)

    NASA Technical Reports Server (NTRS)

    Reader, Joseph; Acquista, Nicolo; Sansonetti, Craig J.; Engleman, Rolf, Jr.

    1988-01-01

    New observations of the spectrum of Pt II have been made with hollow-cathode lamps. The region from 1032 to 4101 A was observed photographically with a 10.7-m normal-incidence spectrograph. The region from 2245 to 5223 A was observed with a Fourier-transform spectrometer. Wavelength measurements were made for 558 lines. The uncertainties vary from 0.0005 to 0.004 A. From these measurements and three parity-forbidden transitions in the infrared, accurate values were determined for 28 even and 72 odd energy levels of Pt II.

  9. On the importance of having accurate data for astrophysical modelling

    NASA Astrophysics Data System (ADS)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation at wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data, and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for modelling molecular lines beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star-forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  10. Heuristics for scheduling Earth observing satellites

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Sorensen, Stephen E.

    1999-09-01

    This paper describes several methods for assigning tasks to Earth Observing System (EOS) satellites. We present empirical results for three heuristics: Priority Dispatch (PD), Look Ahead (LA), and a Genetic Algorithm (GA). These heuristics progress from simple to complex, from less accurate to more accurate, and from fast to slow. The empirical results are for the Window-Constrained Packing (WCP) problem, a simplified version of the EOS scheduling problem. We discuss the problem of having more than one optimization criterion, and we also discuss the relationship between the WCP and the more traditional Knapsack and Weighted Early/Tardy problems.
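
    A minimal sketch of the simplest of the three heuristics, priority dispatch, on a toy window-constrained packing instance (unit-length tasks and integer time slots are assumptions made for illustration, not the paper's formulation):

```python
def priority_dispatch(tasks):
    """Greedy priority dispatch: tasks is a list of dicts with 'id',
    'priority', and 'window' = (start, end); one unit task per slot."""
    occupied = set()
    schedule = {}
    for task in sorted(tasks, key=lambda t: -t['priority']):
        # Take the first free slot inside the task's time window, if any.
        for slot in range(*task['window']):
            if slot not in occupied:
                occupied.add(slot)
                schedule[task['id']] = slot
                break
    return schedule

tasks = [
    {'id': 'A', 'priority': 3, 'window': (0, 2)},
    {'id': 'B', 'priority': 2, 'window': (0, 1)},
    {'id': 'C', 'priority': 1, 'window': (0, 3)},
]
schedule = priority_dispatch(tasks)
```

    Here A greedily takes slot 0 and strands B, even though placing A in slot 1 would have fit all three tasks; avoiding such blocking is exactly what the slower look-ahead and genetic-algorithm heuristics buy.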

  11. Photoacoustic computed tomography without accurate ultrasonic transducer responses

    NASA Astrophysics Data System (ADS)

    Sheng, Qiwei; Wang, Kun; Xia, Jun; Zhu, Liren; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Consequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems into one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  12. An accurate geometric distance to the compact binary SS Cygni vindicates accretion disc theory.

    PubMed

    Miller-Jones, J C A; Sivakoff, G R; Knigge, C; Körding, E G; Templeton, M; Waagen, E O

    2013-05-24

    Dwarf novae are white dwarfs accreting matter from a nearby red dwarf companion. Their regular outbursts are explained by a thermal-viscous instability in the accretion disc, described by the disc instability model that has since been successfully extended to other accreting systems. However, the prototypical dwarf nova, SS Cygni, presents a major challenge to our understanding of accretion disc theory. At the distance of 159 ± 12 parsecs measured by the Hubble Space Telescope, it is too luminous to be undergoing the observed regular outbursts. Using very long baseline interferometric radio observations, we report an accurate, model-independent distance to SS Cygni that places the source substantially closer at 114 ± 2 parsecs. This reconciles the source behavior with our understanding of accretion disc theory in accreting compact objects. PMID:23704566
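
    The geometry behind such a measurement is the elementary parallax-distance relation; a sketch using the paper's distances (the luminosity scaling is a generic inverse-square argument, not the authors' detailed modelling):

```python
def parallax_to_distance_pc(parallax_arcsec):
    # Annual parallax: d [pc] = 1 / p [arcsec].
    return 1.0 / parallax_arcsec

# Moving SS Cygni from 159 pc to 114 pc lowers the inferred luminosity
# (L is proportional to flux * d^2) by (114/159)^2, roughly a factor of
# two, which is what reconciles the source with disc instability theory.
luminosity_ratio = (114.0 / 159.0) ** 2
```
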

  13. Babylonian observations

    NASA Astrophysics Data System (ADS)

    Brown, D.

    Very few cuneiform records survive from Mesopotamia of datable astronomical observations made prior to the mid-eighth century BC. Those that do record occasional eclipses and, in one isolated case, the dates of the heliacal rising and setting of Venus over a few years sometime in the first half of the second millennium BC. After the mid-eighth century BC the situation changes dramatically. Incomplete records of daily observations of astronomical and meteorological events are preserved from c. 747 BC until the Christian Period. These records are without accompanying ominous interpretation, although it is highly probable that they were compiled by diviners for astrological purposes. They include numerous observations of use to historical astronomers, such as the times of eclipses and occultations, and the dates of comet appearances and meteor showers. The question arises as to why such records do not survive from earlier times; celestial divination was employed as far back as the third millennium BC. It is surely not without importance that the earliest known accurate astronomical predictions accompany the later records, and that the mid-eighth century BC ushered in a period of centralised Assyrian control of Mesopotamia and the concomitant employment by the Assyrian ruler of large numbers of professional celestial diviners. The programme of daily observations evidently began when a high premium was first set on the accurate astronomical prediction of ominous events. It is in this light that we must approach this valuable source material for historical astronomy.

  14. Observation Station

    ERIC Educational Resources Information Center

    Rutherford, Heather

    2011-01-01

    This article describes how a teacher integrates science observations into the writing center. At the observation station, students explore new items with a science theme and use their notes and questions for class writings every day. Students are exposed to a variety of different topics and motivated to write in different styles all while…

  15. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  16. Issues with Describing the Uncertainties in Atmospheric Remote Sensing Measurements

    NASA Astrophysics Data System (ADS)

    Haffner, D. P.; Bhartia, P. K.; Kramarova, N. A.

    2014-12-01

    Uncertainty in atmospheric measurements from satellites and other remote sensing platforms comes from several sources. Users are familiar with concepts of accuracy and precision for physical measurements made using instrumentation, but retrieval algorithms also frequently require statistical information, since measurements alone may not completely determine the parameter of interest. This statistical information has uncertainty associated with it as well, and it often contributes a sizeable fraction of the total uncertainty. The precise combination of physical and statistical information in remotely sensed data can vary with season, latitude, altitude, and conditions of measurement. While this picture is complex, it is important to clearly define the overall uncertainty for users, without oversimplifying, so they can interpret the data correctly. Assessment of trends, quantification of radiative forcing and chemical budgets, and comparisons of models with satellite observations all benefit from having adequate uncertainty information. But even today, terminology and the interpretation of these uncertainties are a hot topic of discussion among experts. Based on our experience producing a 44-year-long dataset of total ozone and ozone profiles, we discuss our ideas for describing uncertainty in atmospheric datasets for global change research. Assumptions about the atmosphere used in retrievals can also be provided with exact information detailing how the final product depends on these assumptions. As a practical example, we discuss our modifications to the Total Ozone Mapping Spectrometer (TOMS) algorithm in Version 9 to provide robust uncertainties for each measurement and supply as much useful information to users as possible. Finally, we describe how uncertainties in individual measurements combine when the data are aggregated in time and space.
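
    One concrete consequence for aggregated data can be shown with a generic error-propagation rule (an illustrative sketch, not the TOMS Version 9 algorithm): uncorrelated noise averages down with the number of measurements, while a shared systematic component does not:

```python
import math

def aggregated_sigma(sigma_random, sigma_systematic, n):
    # Standard error of the mean of n measurements that share a fully
    # correlated systematic term and carry independent random noise:
    # the random part shrinks as 1/sqrt(n); the systematic part does not.
    return math.sqrt(sigma_random**2 / n + sigma_systematic**2)
```

    As n grows the total approaches the systematic floor, which is why per-measurement uncertainties must be split into components before users average the data over time or space.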

  17. A six-parameter space to describe galaxy diversification

    NASA Astrophysics Data System (ADS)

    Fraix-Burnet, D.; Chattopadhyay, T.; Chattopadhyay, A. K.; Davoust, E.; Thuillard, M.

    2012-09-01

    Context. The diversification of galaxies is caused by transforming events such as accretion, interaction, or mergers. These explain the formation and evolution of galaxies, which can now be described by many observables. Multivariate analyses are the obvious tools to tackle the available datasets and understand the differences between different kinds of objects. However, depending on the method used, redundancies, incompatibilities, or subjective choices of the parameters can diminish the usefulness of these analyses. The behaviour of the available parameters should be analysed before any objective reduction in the dimensionality and any subsequent clustering analyses can be undertaken, especially in an evolutionary context. Aims: We study a sample of 424 early-type galaxies described by 25 parameters, 10 of which are Lick indices, to identify the most discriminant parameters and construct an evolutionary classification of these objects. Methods: Four independent statistical methods are used to investigate the discriminant properties of the observables and the partitioning of the 424 galaxies: principal component analysis, K-means cluster analysis, minimum contradiction analysis, and cladistics. Results: The methods agree in terms of six parameters: central velocity dispersion, disc-to-bulge ratio, effective surface brightness, metallicity, and the line indices NaD and OIII. The partitioning found using these six parameters, when projected onto the fundamental plane, looks very similar to the partitioning obtained previously for a totally different sample and based only on the parameters of the fundamental plane. Two additional groups are identified here, and we are able to provide some more constraints on the assembly history of galaxies within each group thanks to the larger number of parameters. We also identify another "fundamental plane" with the absolute K magnitude, the linear diameter, and the Lick index Hβ. We confirm that the Mg b vs. velocity dispersion…

  18. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  19. Inference of random walk models to describe leukocyte migration

    NASA Astrophysics Data System (ADS)

    Jones, Phoebe J. M.; Sim, Aaron; Taylor, Harriet B.; Bugeon, Laurence; Dallman, Magaret J.; Pereira, Bernard; Stumpf, Michael P. H.; Liepe, Juliane

    2015-12-01

    While the majority of cells in an organism are static and remain relatively immobile in their tissue, migrating cells occur commonly during developmental processes and are crucial for a functioning immune response. The mode of migration has been described in terms of various types of random walks. To understand the details of the migratory behaviour we rely on mathematical models and their calibration to experimental data. Here we propose an approximate Bayesian inference scheme to calibrate a class of random walk models characterized by a specific, parametric particle re-orientation mechanism to observed trajectory data. We elaborate the concept of transition matrices (TMs) to detect random walk patterns and determine a statistic to quantify these TMs to make them applicable for inference schemes. We apply the developed pipeline to in vivo trajectory data of macrophages and neutrophils, extracted from zebrafish that had undergone tail transection. We find that macrophages and neutrophils exhibit very distinct biased persistent random walk patterns, where the strengths of the persistence and bias are spatio-temporally regulated. Furthermore, the movement of macrophages is far less persistent than that of neutrophils in response to wounding.
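    The transition-matrix idea can be sketched as follows: discretize successive turning angles into states and count state-to-state transitions. The bin count, binning scheme, and synthetic "persistent walker" angles below are assumptions for illustration; the paper's actual TM statistic is more elaborate.

```python
import numpy as np

def transition_matrix(angles, n_bins=8):
    """Row-normalized transition matrix of successive turning-angle states.

    angles : turning angles in radians, wrapped to [-pi, pi).
    """
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    states = np.clip(np.digitize(angles, bins) - 1, 0, n_bins - 1)
    tm = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        tm[a, b] += 1.0
    rows = tm.sum(axis=1, keepdims=True)
    return np.divide(tm, rows, out=np.zeros_like(tm), where=rows > 0)

# A persistent walker keeps its turning angles near zero, so probability
# mass concentrates in the central bins of the matrix.
rng = np.random.default_rng(1)
angles = rng.normal(0.0, 0.3, size=1000)              # small turns
angles = (angles + np.pi) % (2 * np.pi) - np.pi        # wrap to [-pi, pi)
tm = transition_matrix(angles)
```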

  20. Inference of random walk models to describe leukocyte migration.

    PubMed

    Jones, Phoebe J M; Sim, Aaron; Taylor, Harriet B; Bugeon, Laurence; Dallman, Magaret J; Pereira, Bernard; Stumpf, Michael P H; Liepe, Juliane

    2015-12-01

    While the majority of cells in an organism are static and remain relatively immobile in their tissue, migrating cells occur commonly during developmental processes and are crucial for a functioning immune response. The mode of migration has been described in terms of various types of random walks. To understand the details of the migratory behaviour we rely on mathematical models and their calibration to experimental data. Here we propose an approximate Bayesian inference scheme to calibrate a class of random walk models characterized by a specific, parametric particle re-orientation mechanism to observed trajectory data. We elaborate the concept of transition matrices (TMs) to detect random walk patterns and determine a statistic to quantify these TMs to make them applicable for inference schemes. We apply the developed pipeline to in vivo trajectory data of macrophages and neutrophils, extracted from zebrafish that had undergone tail transection. We find that macrophages and neutrophils exhibit very distinct biased persistent random walk patterns, where the strengths of the persistence and bias are spatio-temporally regulated. Furthermore, the movement of macrophages is far less persistent than that of neutrophils in response to wounding. PMID:26403334

  1. A Method for Accurate in silico modeling of Ultrasound Transducer Arrays

    PubMed Central

    Guenther, Drake A.; Walker, William F.

    2009-01-01

    This paper presents a new approach to improve the in silico modeling of ultrasound transducer arrays. While current simulation tools accurately predict the theoretical element spatio-temporal pressure response, transducers do not always behave as theorized. In practice, using the probe's physical dimensions and published specifications in silico often results in unsatisfactory agreement between simulation and experiment. We describe a general optimization procedure used to maximize the correlation between the observed and simulated spatio-temporal response of a pulsed single element in a commercial ultrasound probe. A linear systems approach is employed to model element angular sensitivity, lens effects, and diffraction phenomena. A numerical deconvolution method is described to characterize the intrinsic electro-mechanical impulse response of the element. Once the response of the element and optimal element characteristics are known, prediction of the pressure response for arbitrary apertures and excitation signals is performed through direct convolution using available tools. We achieve a correlation of 0.846 between the experimental emitted waveform and simulated waveform when using the probe's physical specifications in silico. A far superior correlation of 0.988 is achieved when using the optimized in silico model. Electronic noise appears to be the main effect preventing the realization of higher correlation coefficients. More accurate in silico modeling will improve the evaluation and design of ultrasound transducers as well as aid in the development of sophisticated beamforming strategies. PMID:19041997

  2. Accurate nuclear radii and binding energies from a chiral interaction

    DOE PAGESBeta

    Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold

    2015-05-01

    With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  3. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
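    A minimal sketch of the comparison step, assuming an idealized channelled spectrum 0.5*(1 + cos(2*pi*OPD/lambda)) from the unbalanced Michelson and a purely constant wavelength-assignment offset; the OPD value and the brute-force offset scan are illustrative assumptions, not the article's implementation.

```python
import numpy as np

OPD = 20e-6  # assumed optical path difference of the Michelson, 20 um

def fringe_pattern(wavelengths):
    """Idealized channelled spectrum: 0.5 * (1 + cos(2*pi*OPD/lambda))."""
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * OPD / wavelengths))

true_wl = np.linspace(500e-9, 600e-9, 2000)   # true pixel wavelengths
measured = fringe_pattern(true_wl)            # what the spectrometer records

# Suppose the factory calibration is off by a constant 0.4 nm.
factory_wl = true_wl + 0.4e-9

# Scan candidate corrections and keep the one whose predicted pattern
# best matches the measurement (least-squares comparison).
offsets = np.linspace(-1e-9, 1e-9, 401)
errors = [np.sum((fringe_pattern(factory_wl + d) - measured) ** 2)
          for d in offsets]
best = offsets[int(np.argmin(errors))]
print(f"recovered correction: {best * 1e9:.2f} nm")
```

    Because the predicted pattern matches the measurement exactly at the right correction, the scan recovers the -0.4 nm shift, illustrating how fringe comparison resolves errors far below the fringe period.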

  4. Accurate nuclear radii and binding energies from a chiral interaction

    SciTech Connect

    Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold

    2015-05-01

    With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  5. Optical Chopper Assembly for the Mars Observer

    NASA Technical Reports Server (NTRS)

    Allen, Terry

    1993-01-01

    This paper describes the Honeywell-developed Optical Chopper Assembly (OCA), a component of Mars Observer spacecraft's Pressure Modulator Infrared Radiometer (PMIRR) science experiment, which will map the Martian atmosphere during 1993 to 1995. The OCA is unique because of its constant accurate rotational speed, low electrical power consumption, and long-life requirements. These strict and demanding requirements were achieved by use of a number of novel approaches.

  6. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  7. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  8. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, has led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  9. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
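    The first- or second-order temperature equations mentioned above can be sketched with a quadratic fit. The chromaticity values below are synthetic (generated from an assumed quadratic drift of the u' coordinate with junction temperature), not the paper's laboratory data.

```python
import numpy as np

# Hypothetical drift of the u' chromaticity coordinate of a red AlInGaP
# LED with junction temperature (synthetic quadratic data, illustrative).
T = np.linspace(25.0, 70.0, 10)                          # junction temp, deg C
u_prime = 0.5535 + 3e-5 * (T - 25) + 2e-7 * (T - 25) ** 2

# Second-order model of chromaticity versus temperature, as the abstract
# suggests first- or second-order equations are sufficient.
coeffs = np.polyfit(T, u_prime, deg=2)
predict = np.poly1d(coeffs)

residual = np.max(np.abs(predict(T) - u_prime))
print(f"max fit residual: {residual:.2e}")
```

    In a feedback controller, `predict` would map a measured junction temperature to an expected chromaticity shift, which the RGB drive currents then compensate.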

  10. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates the optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  11. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  12. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMM) and CNC-controlled gear measuring machines as a variant of a CMM are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the Physikalisch-Technische Bundesanstalt (PTB), the German national metrology institute, have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device and the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
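    An idealized sketch of the closure principle: remeasuring the artifact in every rotated position lets its pitch deviations average out, separating them from the machine's systematic errors. The error model (purely additive, noise-free, zero-mean deviations) is a simplifying assumption for illustration, not either institute's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12  # number of teeth on the pitch artifact

gear = rng.normal(0.0, 1.0, N)     # unknown pitch deviations of the artifact
gear -= gear.mean()                # deviations defined relative to their mean
device = rng.normal(0.0, 0.5, N)   # unknown systematic errors of the machine
device -= device.mean()

# Closure measurements: the artifact is measured in N rotated positions.
# In position r, tooth (k + r) mod N lands on machine location k.
m = np.array([[device[k] + gear[(k + r) % N] for k in range(N)]
              for r in range(N)])

# Averaging over all rotations cancels the artifact term (zero mean),
# leaving the machine's systematic error; subtracting that from any
# single run recovers the artifact's deviations.
device_est = m.mean(axis=0)
gear_est = m[0] - device_est       # position r = 0: tooth k at location k
```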

  13. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
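    The kind of analytic model described can be sketched as a per-processor cost with a compute term and a communication term, the step time being set by the slowest processor. The constants and partitions below are illustrative assumptions, not the paper's calibrated model.

```python
# Illustrative cost constants (not calibrated to any real machine).
A_COMPUTE = 2.0e-6   # seconds per grid cell of computation
B_COMM = 8.0e-6      # seconds per boundary cell exchanged

def step_time(partition):
    """Predicted time of one step: the slowest processor dominates.

    partition : list of (cells, boundary_cells) tuples, one per processor.
    """
    return max(A_COMPUTE * cells + B_COMM * boundary
               for cells, boundary in partition)

# Balanced vs. imbalanced partition of a 40000-cell 1-D grid on 4 processors.
balanced = [(10000, 2)] * 4
imbalanced = [(16000, 2), (8000, 2), (8000, 2), (8000, 2)]

t_bal = step_time(balanced)
t_imb = step_time(imbalanced)
print(f"balanced: {t_bal:.6f} s, imbalanced: {t_imb:.6f} s")
```

    A remapping scheduler would compare such predictions against the cost of moving data to decide whether repartitioning pays off.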

  14. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  15. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  16. Describing phase coexistence in systems with small phases

    NASA Astrophysics Data System (ADS)

    Lovett, R.

    2007-02-01

    Clusters of atoms can be studied in molecular beams and by computer simulation; 'liquid drops' provide elementary models for atomic nuclei and for the critical nuclei of nucleation theory. These clusters are often described in thermodynamic terms, but the behaviour of small clusters near a phase boundary is qualitatively different from the behaviour at a first order phase transition in idealized thermodynamics. In the idealized case the density and entropy show mathematically sharp discontinuities when the phase boundary is crossed. In large, but finite, systems, the phase boundaries become regions of state space wherein these properties vary rapidly but continuously. In small clusters with a large surface/volume ratio, however, the positive interfacial free energy makes it unlikely, even in states on phase boundaries, that a cluster will have a heterogeneous structure. What is actually seen in these states is a structure that fluctuates in time between homogeneous structures characteristic of the two sides of the phase boundary. That is, structural fluctuations are observed. Thermodynamics only predicts average properties; statistical mechanics is required to understand these fluctuations. Failure to distinguish thermodynamic properties and characterizations of fluctuations, particularly in the context of first order phase transitions, has led to suggestions that the classical rules for thermodynamic stability are violated in small systems and that classical thermodynamics provides an inconsistent description of these systems. Much of the confusion stems from taking statistical mechanical identifications of thermodynamic properties, explicitly developed for large systems, and applying them uncritically to small systems. There are no inconsistencies if thermodynamic properties are correctly identified and the distinction between thermodynamic properties and fluctuations is made clear.

  17. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  18. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned error in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach lead to a R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  19. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  20. New Techniques and Metrics for Describing Rivers Using High Resolution Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Bailey, P.; McKean, J. A.; Poulsen, F.; Ochoski, N.; Wheaton, J. M.

    2013-12-01

    Techniques for collecting high resolution digital elevation models (DEMs) of fluvial environments are cheaper and more widely accessible than ever before. These DEMs improve over traditional transect-based approaches because they represent the channel bed as a continuous surface. Advantages beyond the obvious more accurate representations of channel area and volume include the three dimensional representation of geomorphic features that directly influence the behavior of river organisms. It is possible to identify many of these habitats using topography alone, but when combined with the spatial arrangement of these areas within the channel, a more holistic view of biologic existence can be gleaned from the three dimensional representation of the channel. We present a new approach for measuring and describing channels that leverages the continuous nature of digital elevation model surfaces. Delivered via the River Bathymetry Toolkit (RBT), this approach is capable of not only reproducing the traditional transect-based metrics, but also includes novel techniques for generating stage-independent channel measurements, regardless of the flow that occurred at the time of data capture. The RBT also possesses the capability of measuring changes over time, accounting for uncertainty using approaches adopted from the Geomorphic Change Detection (GCD) literature and producing maps and metrics for erosion and deposition. This new approach is available via the River Bathymetry Toolkit, which is structured to enable repeat systematic measurements over an unlimited number of sites. We present how this approach has been applied to over 500 sites in the Pacific Northwest as part of the Columbia Habitat Mapping Program (CHaMP). We demonstrate the new channel metrics for a range of these sites, both at the observed and simulated flows as well as examples of changes in channel morphology over time. We present an analysis comparing these new metrics against traditional transect-based

  1. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.

  2. Towards a Density Functional Theory Exchange-Correlation Functional able to describe localization/delocalization

    NASA Astrophysics Data System (ADS)

    Mattsson, Ann E.; Wills, John M.

    2013-03-01

    The inability to computationally describe the physics governing the properties of actinides and their alloys is the poster child of failure of existing Density Functional Theory exchange-correlation functionals. The intricate competition between localization and delocalization of the electrons, present in these materials, exposes the limitations of functionals designed only to properly describe one or the other situation. We will discuss the manifestation of this competition in real materials and propositions on how to construct a functional able to accurately describe properties of these materials. In addition, we will discuss both the importance of using the Dirac equation to describe the relativistic effects in these materials, and the connection to the physics of transition metal oxides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  3. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.l%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  4. Damage and fatigue described by a fractional derivative model

    NASA Astrophysics Data System (ADS)

    Caputo, Michele; Fabrizio, Mauro

    2015-07-01

    As in [1], damage is associated with the fatigue that a material undergoes. In this paper, because we work with viscoelastic solids represented by a fractional model, damage is described by the order of the fractional derivative, which plays the role of a phase field satisfying a Ginzburg-Landau equation that governs the evolution of damage. Finally, in our model, damage is caused not only by fatigue but also directly by a source related to environmental factors, described by a positive time function.
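    For background, the Caputo fractional derivative of order α ∈ (0, 1), the operator most commonly used in viscoelastic models of this type (stated here as general context, not as this paper's specific constitutive law), is

```latex
D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(s)}{(t-s)^{\alpha}}\, ds ,
\qquad 0 < \alpha < 1 .
```

    In such models a stress law of the form σ = E D^α ε interpolates continuously between elastic behavior (α → 0, σ = Eε) and viscous behavior (α → 1, σ = E dε/dt), which is why letting α evolve is a natural way to encode progressive damage.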

  5. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  6. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
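    As a hedged illustration of the DEB idea (not one of the paper's actual test cases), take the tip deflection of a cantilever with a rectangular cross-section, which scales as 1/h^3 in the section height h. Interpreting the sensitivity equation d(delta)/dh = -3*delta/h as a differential equation and solving it in closed form recovers the full nonlinear dependence, while the linear Taylor series keeps only the first term:

```python
def taylor_tip_deflection(delta0, h0, h):
    """Linear Taylor series about h0: delta ~ delta0 * (1 - 3*(h - h0)/h0)."""
    return delta0 * (1.0 - 3.0 * (h - h0) / h0)

def deb_tip_deflection(delta0, h0, h):
    """Closed-form solution of the sensitivity ODE d(delta)/dh = -3*delta/h."""
    return delta0 * (h0 / h) ** 3

delta0, h0 = 1.0, 1.0              # normalized baseline deflection and height
for h in (1.1, 1.3, 1.5):          # perturbed section heights
    exact = delta0 * (h0 / h) ** 3  # tip deflection scales as 1/h^3
    print(h, exact,
          deb_tip_deflection(delta0, h0, h),
          taylor_tip_deflection(delta0, h0, h))
```

    For this particular quantity the DEB closed form happens to be exact, while the linear Taylor series degrades badly for large height changes (it even goes negative at h = 1.5 h0), mirroring the accuracy advantage reported in the abstract.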

  7. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  8. Describing Communicative Functions in a First Grade Classroom.

    ERIC Educational Resources Information Center

    Wrather, Nancy

    The purpose of this study was to synthesize a category system for observation of communicative functions in children's speech and to test that category system by recording observations of interactions within a first-grade classroom. The observation system which was designed attempts to account for all factors of a communication situation and to…

  9. Describing function theory as applied to thermal and neutronic problems

    SciTech Connect

    Nassersharif, B.

    1983-01-01

    Describing functions have traditionally been used to obtain the solutions of systems of ordinary differential equations. In this work the describing function concept has been extended to include nonlinear, distributed parameter partial differential equations. A three-stage solution algorithm is presented which can be applied to any nonlinear partial differential equation. Two generalized integral transforms were developed as the T-transform for the time domain and the B-transform for the spatial domain. The thermal diffusion describing function (TDDF) is developed for conduction of heat in solids and a general iterative solution along with convergence criteria is presented. The proposed solution method is used to solve the problem of heat transfer in nuclear fuel rods with annular fuel pellets. As a special instance the solid cylindrical fuel pellet is examined. A computer program is written which uses the describing function concept for computing fuel pin temperatures in the radial direction during reactor transients. The second problem investigated was the neutron diffusion equation which is intrinsically different from the first case. Although, for most situations, it can be treated as a linear differential equation, the describing function method is still applicable. A describing function solution is derived for two possible cases: constant diffusion coefficient and variable diffusion coefficient. Two classes of describing functions are defined for each case which portray the leakage and absorption phenomena. For the specific case of a slab reactor criticality problem the comparison between analytical and describing function solutions revealed an excellent agreement.

  10. Some computational techniques for estimating human operator describing functions

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1986-01-01

    Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
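    A minimal sketch of the ensemble statistics described here, assuming each experimental trial yields one complex describing-function estimate at a given sum-of-sines frequency (the trial values below are synthetic, not measured data):

```python
import cmath
import math

def gain_phase_stats(estimates):
    """Mean gain (dB) and phase (deg) with standard errors across trials.

    Each element of `estimates` is one trial's complex describing-function
    estimate at a single forcing frequency.
    """
    n = len(estimates)
    gains = [20.0 * math.log10(abs(z)) for z in estimates]
    phases = [math.degrees(cmath.phase(z)) for z in estimates]

    def mean_se(xs):
        m = sum(xs) / n
        var = sum((x - m) ** 2 for x in xs) / (n - 1)  # sample variance
        return m, math.sqrt(var / n)                   # standard error of mean
    return mean_se(gains), mean_se(phases)

# Synthetic per-trial estimates at one frequency (illustration only)
trials = [1.9 + 0.4j, 2.1 + 0.5j, 2.0 + 0.45j, 1.95 + 0.5j]
(gain, gain_se), (phase, phase_se) = gain_phase_stats(trials)
print(gain, gain_se, phase, phase_se)
```

    Averaging gain and phase separately, rather than averaging the complex estimates first, is one of several defensible conventions; the abstract does not specify which the author used.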

  11. Continuous and discrete describing function analysis of the LST system

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.; Yackel, R. A.

    1973-01-01

    A describing function of the control moment gyros (CMG) frictional nonlinearity is derived using the analytic torque equation. Computer simulation of the simplified Large Space Telescope (LST) system with the analytic torque expression is discussed along with the transfer functions of the sampled-data LST system, and the discrete describing function of the CMG frictional nonlinearity.

  12. A Fully Implicit Time Accurate Method for Hypersonic Combustion: Application to Shock-induced Combustion Instability

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Radhakrishnan, Krishnan

    1994-01-01

    A new fully implicit, time accurate algorithm suitable for chemically reacting, viscous flows in the transonic-to-hypersonic regime is described. The method is based on a class of Total Variation Diminishing (TVD) schemes and uses successive Gauss-Seidel relaxation sweeps. The inversion of large matrices is avoided by partitioning the system into reacting and nonreacting parts, but still maintaining a fully coupled interaction. As a result, the matrices that have to be inverted are of the same size as those obtained with the commonly used point implicit methods. In this paper we illustrate the applicability of the new algorithm to hypervelocity unsteady combustion applications. We present a series of numerical simulations of the periodic combustion instabilities observed in ballistic-range experiments of blunt projectiles flying at subdetonative speeds through hydrogen-air mixtures. The computed frequencies of oscillation are in excellent agreement with experimental data.

  13. Sinusoidal input describing function for hysteresis followed by elementary backlash

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.

    1976-01-01

    The author proposes a new sinusoidal input describing function which accounts for the serial combination of hysteresis followed by elementary backlash in a single nonlinear element. The output of the hysteresis element drives the elementary backlash element. Various analytical forms of the describing function are given, depending on the a/A ratio, where a is the half width of the hysteresis band or backlash gap, and A is the amplitude of the assumed input sinusoid, and on the value of the parameter representing the fraction of a attributed to the backlash characteristic. The negative inverse describing function is plotted on a gain-phase plot, and it is seen that a relatively small amount of backlash leads to domination of the backlash character in the describing function. The extent of the region of the gain-phase plane covered by the describing function is such as to guarantee some form of limit cycle behavior in most closed-loop systems.
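    The describing-function construction itself is easy to verify numerically on a simpler element than the hysteresis-plus-backlash combination treated here. For an ideal relay with output level M, the classical sinusoidal-input describing function is N(A) = 4M/(pi*A); a direct fundamental-harmonic computation (a sketch of the general technique, not the paper's element) reproduces it:

```python
import math

def relay_describing_function(A, M, n=100000):
    """Describing function of an ideal relay: fundamental of output / input amplitude.

    Input x(t) = A*sin(t); relay output y = M*sign(x). The describing function
    is the first Fourier (in-phase) coefficient of y divided by A.
    """
    s = 0.0
    for k in range(n):
        t = 2.0 * math.pi * (k + 0.5) / n   # midpoint samples over one period
        x = A * math.sin(t)
        y = M if x >= 0 else -M             # relay output
        s += y * math.sin(t)                # project onto the fundamental
    b1 = 2.0 * s / n                        # first Fourier coefficient of y
    return b1 / A                           # real-valued describing function

A, M = 2.0, 1.0
num = relay_describing_function(A, M)
print(num, 4.0 * M / (math.pi * A))         # numeric vs. closed form
```

    For the relay the describing function is purely real (no phase lag); the hysteresis and backlash elements discussed in the abstract additionally produce an imaginary part, which is why their negative inverse loci sweep through the gain-phase plane.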

  14. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. Initially the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
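    The first-order inertia correction mentioned above can be sketched directly: for a sensor modeled as tau*dTm/dt + Tm = Tf, the fluid temperature is recovered from the measured signal as Tf = Tm + tau*dTm/dt. The time constant and step input below are assumed values for illustration, not the paper's:

```python
import math

def correct_first_order(measured, dt, tau):
    """Recover fluid temperature from a first-order sensor: Tf = Tm + tau*dTm/dt."""
    corrected = []
    for i in range(1, len(measured) - 1):
        dTm = (measured[i + 1] - measured[i - 1]) / (2.0 * dt)  # central difference
        corrected.append(measured[i] + tau * dTm)
    return corrected

# Synthetic step test: fluid jumps from 20 C to 100 C; sensor lag tau = 3 s (assumed)
tau, dt = 3.0, 0.01
times = [k * dt for k in range(1000)]
measured = [100.0 - 80.0 * math.exp(-t / tau) for t in times]  # ideal sensor response
corrected = correct_first_order(measured, dt, tau)
print(measured[500], corrected[499])   # corrected value should sit near 100 C
```

    With clean data the correction recovers the step almost exactly; in practice differentiating a noisy signal amplifies noise, which is one reason the paper's inverse space marching method on a low-inertia probe outperforms this simple model.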

  15. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  16. Are Kohn-Sham conductances accurate?

    PubMed

    Mera, H; Niquet, Y M

    2010-11-19

    We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333

  17. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH0 (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).

  18. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  19. Describing Willow Flycatcher habitats: scale perspectives and gender differences

    USGS Publications Warehouse

    Sedgwick, James A.; Knopf, Fritz L.

    1992-01-01

    We compared habitat characteristics of nest sites (female-selected sites) and song perch sites (male-selected sites) with those of sites unused by Willow Flycatchers (Empidonax traillii) at three different scales of vegetation measurement: (1) microplot (central willow [Salix spp.] bush and four adjacent bushes); (2) mesoplot (0.07 ha); and, (3) macroplot (flycatcher territory size). Willow Flycatchers exhibited vegetation preferences at all three scales. Nest sites were distinguished by high willow density and low variability in willow patch size and bush height. Song perch sites were characterized by large central shrubs, low central shrub vigor, and high variability in shrub size. Unused sites were characterized by greater distances between willows and willow patches, less willow coverage, and a smaller riparian zone width than either nest or song perch sites. At all scales, nest sites were situated farther from unused sites in multivariate habitat space than were song perch sites, suggesting (1) a correspondence among scales in their ability to describe Willow Flycatcher habitat, and (2) females are more discriminating in habitat selection than males. Microhabitat differences between male-selected (song perch) and female-selected (nest) sites were evident at the two smaller scales; at the finest scale, the segregation in habitat space between male-selected and female-selected sites was greater than that between male-selected and unused sites. Differences between song perch and nest sites were not apparent at the scale of flycatcher territory size, possibly due to inclusion of (1) both nest and song perch sites, (2) defended, but unused habitat, and/or (3) habitat outside of the territory, in larger scale analyses. The differences between nest and song perch sites at the finer scales reflect their different functions (e.g., nest concealment and microclimatic requirements vs. 
advertising and territorial defense, respectively), and suggest that the exclusive use

  20. Enhanced ocean observational capability

    SciTech Connect

    Volpe, A M; Esser, B K

    2000-01-10

    Coastal oceans are vital to world health and sustenance. Technology that enables new observations has always been the driver of discovery in ocean sciences. In this context, we describe the first at sea deployment and operation of an inductively coupled plasma mass spectrometer (ICPMS) for continuous measurement of trace elements in seawater. The purpose of these experiments was to demonstrate that an ICPMS could be operated in a corrosive and high vibration environment with no degradation in performance. Significant advances occurred this past year due to ship time provided by Scripps Institution of Oceanography (UCSD), as well as that funded through this project. Evaluation at sea involved performance testing and characterization of several real-time seawater analysis modes. We show that mass spectrometers can rapidly, precisely and accurately determine ultratrace metal concentrations in seawater, thus allowing high-resolution mapping of large areas of surface seawater. This analytical capability represents a significant advance toward real-time observation and understanding of water mass chemistry in dynamic coastal environments. In addition, a joint LLNL-SIO workshop was convened to define and design new technologies for ocean observation. Finally, collaborative efforts were initiated with atmospheric scientists at LLNL to identify realistic coastal ocean and river simulation models to support real-time analysis and modeling of hazardous material releases in coastal waterways.

  1. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface can also import radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented in terms of both absorbed dose and biological dose calculations, describing the various available features. PMID:27242956

  2. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface can also import radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented in terms of both absorbed dose and biological dose calculations, describing the various available features. PMID:27242956

  3. A spectrally accurate algorithm for electromagnetic scattering in three dimensions

    NASA Astrophysics Data System (ADS)

    Ganesh, M.; Hawkins, S.

    2006-09-01

    In this work we develop, implement and analyze a high-order spectrally accurate algorithm for computation of the echo area, and monostatic and bistatic radar cross-section (RCS) of a three dimensional perfectly conducting obstacle through simulation of the time-harmonic electromagnetic waves scattered by the conductor. Our scheme is based on a modified boundary integral formulation (of the Maxwell equations) that is tolerant to basis functions that are not tangential on the conductor surface. We test our algorithm with extensive computational experiments using a variety of three dimensional perfect conductors described in spherical coordinates, including benchmark radar targets such as the metallic NASA almond and ogive. The monostatic RCS measurements for non-convex conductors require hundreds of incident waves (boundary conditions). We demonstrate that the monostatic RCS of small (to medium) sized conductors can be computed using over one thousand incident waves within a few minutes (to a few hours) of CPU time. We compare our results with those obtained using method of moments based industrial standard three dimensional electromagnetic codes CARLOS, CICERO, FE-IE, FERM, and FISC. Finally, we prove the spectrally accurate convergence of our algorithm for computing the surface current, far-field, and RCS values of a class of conductors described globally in spherical coordinates.

  4. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  5. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  6. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  7. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  8. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  9. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided the flow rate is sufficiently high and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
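
    The standard steady-state interpretation step the authors refer to reduces, for each phase, to Darcy's law. The sketch below illustrates that arithmetic with hypothetical coreflood values (the numbers are invented for demonstration and are not from the paper):

```python
def effective_kr(q, mu, L, k_abs, A, dp):
    """Effective relative permeability from a steady-state coreflood,
    via Darcy's law: kr = q * mu * L / (k_abs * A * dp).
    Units must be consistent (SI assumed here)."""
    return q * mu * L / (k_abs * A * dp)

# Hypothetical coreflood numbers, for illustration only.
q = 1.0e-8       # volumetric flow rate of the phase, m^3/s
mu = 1.0e-3      # phase viscosity, Pa*s
L = 0.1          # core length, m
k_abs = 1.0e-13  # absolute permeability, m^2
A = 1.0e-3       # cross-sectional area, m^2
dp = 1.0e4       # pressure drop across the core, Pa

kr = effective_kr(q, mu, L, k_abs, A, dp)
print(round(kr, 3))  # 1.0
```

    In a real experiment this is evaluated per phase at each injection rate; the paper's point is that the resulting curves can depend on that rate.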

  10. Describing Simple Data Access Services Version 1.0

    NASA Astrophysics Data System (ADS)

    Plante, Raymond; Delago, Jesus; Harrison, Paul; Tody, Doug; IVOA Registry Working Group

    2013-11-01

    An application that queries or consumes descriptions of VO resources must be able to recognize a resource's support for standard IVOA protocols. This specification describes how to describe a service that supports any of the four fundamental data access protocols: Simple Cone Search (SCS), Simple Image Access (SIA), Simple Spectral Access (SSA), and Simple Line Access (SLA), using the VOResource XML encoding standard. A key part of this specification is the set of VOResource XML extension schemas that define new metadata that are specific to those protocols. This document describes in particular the rules for describing such services within the context of IVOA Registries and data discovery, as well as the VO Standard Interface (VOSI) and service self-description. In particular, this document spells out the essential markup needed to identify support for a standard protocol and the base URL required to access the interface that supports that protocol.

  11. On three new Orchestina species (Araneae: Oonopidae) described from China.

    PubMed

    Liu, Keke; Xiao, Yonghong; Xu, Xiang

    2016-01-01

    Three new species of oonopid spider from China are diagnosed, described and illustrated: Orchestina apiculata sp. nov. from Hunan, O. bialata sp. nov. and O. multipunctata sp. nov. from Jiangxi. The total number of the known species of Orchestina from China rises to 11 with the addition of three new species described in the present paper. Relationships with Asian and Afrotropical representatives are discussed. PMID:27395233

  12. Arab observations

    NASA Astrophysics Data System (ADS)

    Fatoohi, L. J.

    There are two main medieval Arab sources of astronomical observations: chronicles and astronomical treatises. Medieval Arabs produced numerous chronicles, many of which reported astronomical events that the chroniclers themselves observed or that were witnessed by others. Astronomical phenomena recorded by chroniclers include solar and lunar eclipses, cometary apparitions, meteors, and meteor showers. Muslim astronomers produced many astronomical treatises known as zijes. Zijes include records of mainly predictable phenomena, such as eclipses of the Sun and Moon. Unlike chronicles, zijes usually ignore irregular phenomena such as the apparitions of comets and meteors, and meteor showers. Some zijes include astronomical observations, especially of eclipses. Not unexpectedly, records in zijes are in general more accurate than their counterparts in chronicles. However, research has shown that medieval Arab chronicles and zijes both contain some valuable astronomical observational data. Unfortunately, much of the heritage of medieval Arab chroniclers and astronomers is still in manuscript form. Moreover, most of the huge number of Arabic manuscripts held in various libraries, especially in Arab countries, are still uncatalogued. To date there is only one catalogue of zijes, compiled in the 1950s, which includes brief comments on 200 zijes. There is a real need for a systematic investigation of the medieval Arab historical and astronomical manuscripts that exist in libraries all over the world.

  13. Describing function method applied to solution of nonlinear heat conduction equation

    SciTech Connect

    Nassersharif, B.

    1989-08-01

    Describing functions have traditionally been used to obtain the solutions of systems of ordinary differential equations. The describing function concept has been extended to include the non-linear, distributed parameter solid heat conduction equation. A four-step solution algorithm is presented that may be applicable to many classes of nonlinear partial differential equations. As a specific application of the solution technique, the one-dimensional nonlinear transient heat conduction equation in an annular fuel pin is considered. A computer program was written to calculate one-dimensional transient heat conduction in annular cylindrical geometry. It is found that the quasi-linearization used in the describing function method is as accurate as and faster than true linearization methods.

  14. The Calculation of Accurate Harmonic Frequencies of Large Molecules: The Polycyclic Aromatic Hydrocarbons, a Case Study

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.

  15. Accurate Transposable Element Annotation Is Vital When Analyzing New Genome Assemblies

    PubMed Central

    Platt, Roy N.; Blanco-Berdugo, Laura; Ray, David A.

    2016-01-01

    Transposable elements (TEs) are mobile genetic elements with the ability to replicate themselves throughout the host genome. In some taxa TEs reach copy numbers in hundreds of thousands and can occupy more than half of the genome. The increasing number of reference genomes from nonmodel species has begun to outpace efforts to identify and annotate TE content, and the methods used vary significantly between projects. Here, we demonstrate the variation that arises in TE annotations when less than optimal methods are used. We found that across a variety of taxa, the ability to accurately identify TEs based solely on homology decreased as the phylogenetic distance between the queried genome and a reference increased. Next we annotated repeats using homology alone, as is often the case in new genome analyses, and a combination of homology and de novo methods as well as an additional manual curation step. Reannotation using these methods identified a substantial number of new TE subfamilies in previously characterized genomes, recognized a higher proportion of the genome as repetitive, and decreased the average genetic distance within TE families, implying recent TE accumulation. Finally, these findings—increased recognition of younger TEs—were confirmed via an analysis of the postman butterfly (Heliconius melpomene). These observations imply that complete TE annotation relies on a combination of homology and de novo–based repeat identification, manual curation, and classification and that relying on simple, homology-based methods is insufficient to accurately describe the TE landscape of a newly sequenced genome. PMID:26802115

  16. Evaluation of equations for describing the human crystalline lens

    NASA Astrophysics Data System (ADS)

    Giovanzana, Stefano; Schachar, Ronald A.; Talu, Stefan; Kirby, Roger D.; Yan, Eric; Pierscionek, Barbara K.

    2013-03-01

    Accurate mathematical descriptions of the human crystalline lens surface shape are required to properly understand the functional adaptations that occur when the lens shape alters to change refractive power. Using the least squares method, the fits of eight mathematical functions (conic, figuring conicoid, generalized conic, Hermans conic patch, Urs polynomial, Urs 10th-order Fourier series, Chien, and Giovanzana) to 17 human crystalline lenses were evaluated for total mean normal distance, smoothness, rate of change of the transverse and sagittal radii of curvature, and continuity at the lens equator. The mean differences of the fits of all the equations to the whole lens and to the central 8 mm of the lens surfaces were >24 μm with comparable standard deviations. When fit smoothness and continuity at the equator are considered, the Giovanzana and Chien functions are most representative of the lens surface.
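
    As an illustration of the kind of least-squares comparison described above, the sketch below fits a conic (one of the eight surface models) to a synthetic lens profile by a coarse grid search; the profile, parameter ranges, and grid are invented for demonstration and a real fit would use a proper optimizer:

```python
import math

def conic_sag(r, R, k):
    """Sag of a conic surface: z = c r^2 / (1 + sqrt(1 - (1+k) c^2 r^2)), c = 1/R."""
    c = 1.0 / R
    return c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + k) * c * c * r * r))

# Synthetic "measured" profile: a parabola (k = -1) with apex radius R = 10 mm.
rs = [0.5 * i for i in range(9)]            # radial positions, 0..4 mm
zs = [conic_sag(r, 10.0, -1.0) for r in rs]

# Coarse least-squares grid search over (R, k); the fit criterion
# (sum of squared residuals) is the same as in a full optimization.
best = min(
    ((R, k) for R in (8.0, 9.0, 10.0, 11.0, 12.0)
            for k in (-2.0, -1.5, -1.0, -0.5, 0.0)),
    key=lambda p: sum((conic_sag(r, *p) - z) ** 2 for r, z in zip(rs, zs)),
)
print(best)  # (10.0, -1.0)
```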

  17. Accurate ab initio Quartic Force Fields of Cyclic and Bent HC2N Isomers

    NASA Technical Reports Server (NTRS)

    Inostroza, Natalia; Huang, Xinchuan; Lee, Timothy J.

    2012-01-01

    Highly correlated ab initio quartic force fields (QFFs) are used to calculate the equilibrium structures and predict the spectroscopic parameters of three HC2N isomers. Specifically, the ground state quasilinear triplet and the lowest cyclic and bent singlet isomers are included in the present study. Extensive treatment of correlation effects was included using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, denoted CCSD(T). Dunning's correlation-consistent basis sets cc-pVXZ, X=3,4,5, were used, together with a three-point formula for extrapolation to the one-particle basis set limit. Core-correlation and scalar relativistic corrections were also included to yield highly accurate QFFs. The QFFs were used together with second-order perturbation theory (with proper treatment of Fermi resonances) and variational methods to solve the nuclear Schrödinger equation. The quasilinear nature of the triplet isomer is problematic, and it is concluded that a QFF is not adequate to describe properly all of the fundamental vibrational frequencies and spectroscopic constants (though some constants not dependent on the bending motion are well reproduced by perturbation theory). On the other hand, this procedure (a QFF together with either perturbation theory or variational methods) leads to highly accurate fundamental vibrational frequencies and spectroscopic constants for the cyclic and bent singlet isomers of HC2N. All three isomers possess significant dipole moments: 3.05 D, 3.06 D, and 1.71 D for the quasilinear triplet, the cyclic singlet, and the bent singlet isomers, respectively. It is concluded that the spectroscopic constants determined for the cyclic and bent singlet isomers are the most accurate available, and it is hoped that these will be useful in the interpretation of high-resolution astronomical observations or laboratory experiments.

  18. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  19. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  20. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  1. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  2. The thermodynamic cost of accurate sensory adaptation

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai

    2015-03-01

    Living organisms need to obtain and process environmental information accurately in order to make decisions critical for their survival. Much progress has been made in identifying key components responsible for various biological functions; however, major challenges remain in understanding system-level behaviors from molecular-level knowledge of biology and in unraveling possible physical principles for the underlying biochemical circuits. In this talk, we will present some recent work on understanding the chemical sensory system of E. coli by combining theoretical approaches with quantitative experiments. We focus on how cells process chemical information and adapt to a varying environment, and on the thermodynamic limits of key regulatory functions such as adaptation.

  3. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Khan, Nasir Uddin; Khan, Nadeem Alam

    2014-12-01

    The objective of this paper is to present an investigation of the vibration of a conservative nonlinear oscillator of the form u'' + lambda u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation to sets of algebraic equations that are solved numerically. Results are presented for three different cases: a higher order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
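
    For comparison, the oscillator can also be integrated directly; a minimal classical RK4 sketch of the equation as quoted above, with illustrative parameter values (the paper's own method converts the ODE to algebraic equations instead):

```python
import math

def rk4_oscillator(lam=1.0, n=1, m=1, eps=0.1, u0=1.0, v0=0.0, dt=0.01, steps=1000):
    """Integrate u'' = -(lam*u + u**(2n-1) + sqrt(1 + eps^2 * u**(4m)))
    with classical RK4. The right-hand side follows the oscillator form
    quoted in the abstract; parameter values are illustrative."""
    def acc(u):
        return -(lam * u + u ** (2 * n - 1) + math.sqrt(1.0 + eps ** 2 * u ** (4 * m)))
    u, v = u0, v0
    traj = [u]
    for _ in range(steps):
        k1u, k1v = v, acc(u)
        k2u, k2v = v + 0.5 * dt * k1v, acc(u + 0.5 * dt * k1u)
        k3u, k3v = v + 0.5 * dt * k2v, acc(u + 0.5 * dt * k2u)
        k4u, k4v = v + dt * k3v, acc(u + dt * k3u)
        u += dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        traj.append(u)
    return traj

traj = rk4_oscillator()
print(max(abs(u) for u in traj) < 5.0)  # True: bounded, conservative oscillation
```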

  4. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  5. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  6. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  7. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  8. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper focuses on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions for several sites in China can significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper demonstrates the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  9. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164

  10. Describing small-scale structure in random media using pulse-echo ultrasound

    PubMed Central

    Insana, Michael F.; Wagner, Robert F.; Brown, David G.; Hall, Timothy J.

    2009-01-01

    A method for estimating structural properties of random media is described. The size, number density, and scattering strength of particles are estimated from an analysis of the radio frequency (rf) echo signal power spectrum. Simple correlation functions and the accurate scattering theory of Faran [J. J. Faran, J. Acoust. Soc. Am. 23, 405–418 (1951)], which includes the effects of shear waves, were used separately to model backscatter from spherical particles and thereby describe the structures of the medium. These methods were tested using both glass sphere-in-agar and polystyrene sphere-in-agar scattering media. With the appropriate correlation function, it was possible to measure glass sphere diameters with an accuracy of 20%. It was not possible to accurately estimate the size of polystyrene spheres with the simple spherical and Gaussian correlation models examined because of a significant shear wave contribution. Using the Faran scattering theory for spheres, however, the accuracy for estimating diameters was improved to 10% for both glass and polystyrene scattering media. It was possible to estimate the product of the average scattering particle number density and the average scattering strength per particle, but with lower accuracy than the size estimates. The dependence of the measurement accuracy on the inclusion of shear waves, the wavelength of sound, and medium attenuation is considered, and the implications for describing the structure of biological soft tissues are discussed. PMID:2299033
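
    The first step of the analysis described above, estimating the rf echo power spectrum, can be sketched as follows; the synthetic echo, pulse shape, and naive DFT are illustrative stand-ins for a real windowed, ensemble-averaged FFT analysis:

```python
import cmath, math, random

random.seed(3)

def power_spectrum(x):
    """Naive DFT power spectrum |X_k|^2 of a real rf segment (illustrative;
    a real analysis would use an FFT, windowing, and ensemble averaging)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2 for k in range(N // 2)]

# Synthetic rf echo: a Gaussian-enveloped tone burst (the "pulse") plus weak
# noise, standing in for backscatter from an ensemble of scatterers.
N = 64
rf = [math.sin(2 * math.pi * 8 * n / N) * math.exp(-((n - N / 2) ** 2) / 50.0)
      + 0.01 * random.gauss(0, 1) for n in range(N)]

spec = power_spectrum(rf)
peak = max(range(len(spec)), key=spec.__getitem__)
print(peak)  # 8  (spectrum peaks at the pulse's center-frequency bin)
```

    In the paper's method, this measured spectrum would then be compared against a correlation-function or Faran-theory model to infer scatterer size and strength.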

  11. Using Neural Networks to Describe Complex Phase Transformation Behavior

    SciTech Connect

    Vitek, J.M.; David, S.A.

    1999-05-24

    Final microstructures can often be the end result of a complex sequence of phase transformations. Fundamental analyses may be used to model various stages of the overall behavior, but they are often impractical or cumbersome when considering multicomponent systems covering a wide range of compositions. Neural network analysis may be a useful alternative method of identifying and describing phase transformation behavior. A neural network model for ferrite prediction in stainless steel welds is described. It is shown that the neural network analysis provides valuable information that accounts for alloying element interactions. It is suggested that neural network analysis may be extremely useful for analysis when more fundamental approaches are unavailable or overly burdensome.
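
    As a toy illustration of using a neural network to capture alloying-element interactions, the sketch below trains a one-hidden-layer network on synthetic composition-to-ferrite data; the target function, architecture, and all parameters are invented for demonstration and do not reproduce the authors' model:

```python
import math, random

random.seed(0)

# Synthetic target: a made-up ferrite response with an interaction term
# between two (hypothetical, normalized) composition variables.
def target(cr, ni):
    return 3.0 * cr - 2.5 * ni + 0.5 * cr * ni

data = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
H = 8  # hidden units
W1 = [[random.gauss(0, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.gauss(0, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2, h

def mse():
    return sum((forward([cr, ni])[0] - target(cr, ni)) ** 2
               for cr, ni in data) / len(data)

lr = 0.02
loss0 = mse()
for _ in range(200):                      # stochastic gradient descent
    for cr, ni in data:
        y, h = forward([cr, ni])
        err = y - target(cr, ni)
        for j in range(H):
            gh = err * W2[j] * (1 - h[j] ** 2)   # backprop through tanh
            W2[j] -= lr * err * h[j]
            W1[j][0] -= lr * gh * cr
            W1[j][1] -= lr * gh * ni
            b1[j] -= lr * gh
        b2 -= lr * err
print(mse() < loss0)  # True: training reduces the fit error
```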

  12. Model framework for describing the dynamics of evolving networks

    NASA Astrophysics Data System (ADS)

    Tobochnik, Jan; Strandburg, Katherine; Csardi, Gabor; Erdi, Peter

    2007-03-01

    We present a model framework for describing the dynamics of evolving networks. In this framework the addition of edges is stochastically governed by some important intrinsic and structural properties of network vertices through an attractiveness function. We discuss the solution of the inverse problem: determining the attractiveness function from the network evolution data. We also present a number of example applications: the description of the US patent citation network using vertex degree, patent age and patent category variables, and we show how the time-dependent version of the method can be used to find and describe important changes in the internal dynamics. We also compare our results to scientific citation networks.

  13. Stability of interconnected dynamical systems described on Banach spaces

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. D.; Michel, A. N.

    1976-01-01

    New stability results for a large class of interconnected dynamical systems (also called composite systems or large scale systems) described on Banach spaces are established. In the present approach, the objective is always the same: to analyze large scale systems in terms of their lower order and simpler subsystems and in terms of their interconnecting structure. The present results provide a systematic procedure of analyzing hybrid dynamical systems (i.e., systems that are described by a mixture of different types of equations). To demonstrate the method of analysis advanced, two specific examples are considered.

  14. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information can bring negative effects, especially when the feedback is delayed. Travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. The bounded rationality is found to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
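
    A minimal day-to-day simulation of the boundedly rational threshold BR described above; the demand, travel-time functions, and parameter values are illustrative assumptions, not the paper's model:

```python
import random

random.seed(1)

def simulate(BR, days=60, N=1000):
    """Two-route day-to-day choice with a boundedly rational threshold BR.
    Travel times follow a simple linear congestion model (illustrative)."""
    n1 = N // 2                      # travelers on route 1
    gaps = []
    for _ in range(days):
        t1 = 10.0 + 0.01 * n1        # feedback based on yesterday's flows
        t2 = 10.0 + 0.01 * (N - n1)
        n1 = 0
        for _ in range(N):
            if abs(t1 - t2) <= BR:   # difference below threshold: indifferent
                n1 += random.random() < 0.5
            else:
                n1 += t1 < t2        # otherwise take the faster-reported route
        gaps.append(abs(2 * n1 - N)) # deviation from the equal-split equilibrium
    return sum(gaps[-20:]) / 20.0    # mean late-time deviation

print(simulate(BR=0.0) > simulate(BR=2.0))  # True: the threshold damps oscillations
```

    With BR = 0 every traveler chases yesterday's faster route and the flows flip between extremes; a modest threshold leaves travelers indifferent near equilibrium and the oscillation disappears.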

  15. Recursive analytical solution describing artificial satellite motion perturbed by an arbitrary number of zonal terms

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

An analytical first-order solution has been developed which describes the motion of an artificial satellite perturbed by an arbitrary number of zonal harmonics of the geopotential. A set of recursive relations for the solution, which was deduced from recursive relations of the geopotential, was derived. The method of solution is based on von Zeipel's technique applied to a canonical set of two-body elements in the extended phase space which incorporates the true anomaly as a canonical element. The elements are of Poincare type, that is, they are regular for vanishing eccentricities and inclinations. Numerical results show that this solution is accurate to within a few meters after 500 revolutions.
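The zonal part of the geopotential is built from Legendre polynomials P_n(sin φ), which satisfy a simple three-term recursion; a sketch of that standard recursion (Bonnet's formula, shown here generically, not the paper's specific satellite-theory recursion) illustrates how an arbitrary number of zonal terms can be generated:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) by Bonnet's recursion:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x  # P_0, P_1
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

Because each P_n is obtained from the two preceding polynomials, the cost of adding one more zonal term is constant, which is what makes recursive formulations attractive for high-degree geopotential models.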

  16. Accurate tremor locations from coherent S and P waves

    NASA Astrophysics Data System (ADS)

    Armbruster, John G.; Kim, Won-Young; Rubin, Allan M.

    2014-06-01

Nonvolcanic tremor is an important component of the slow slip processes which load faults from below, but accurately locating tremor has proven difficult because tremor rarely contains clear P or S wave arrivals. Here we report the observation of coherence in the shear and compressional waves of tremor at widely separated stations which allows us to detect and accurately locate tremor events. An event detector using data from two stations sees the onset of tremor activity in the Cascadia tremor episodes of February 2003, July 2004, and September 2005 and confirms the previously reported south to north migration of the tremor. Event detectors using data from three and four stations give S and P arrival times of high accuracy. The hypocenters of the tremor events fall at depths of ˜30 to ˜40 km and define a narrow plane dipping at a shallow angle to the northeast, consistent with the subducting plate interface. The S wave polarizations and P wave first motions define a source mechanism in agreement with the northeast convergence seen in geodetic observations of slow slip. Tens of thousands of locations determined by constraining the events to the plate interface show tremor sources highly clustered in space with a strongly similar pattern of sources in the three episodes examined. The deeper sources generate tremor in minor episodes as well. The extent to which the narrow bands of tremor sources overlap between the three major episodes suggests relative epicentral location errors as small as 1-2 km.

  17. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burger's equation. For comparison, results were also obtained for Burger's equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  18. THE FIRST ACCURATE PARALLAX DISTANCE TO A BLACK HOLE

    SciTech Connect

    Miller-Jones, J. C. A.; Jonker, P. G.; Dhawan, V.; Brisken, W.; Rupen, M. P.; Nelemans, G.; Gallo, E.

    2009-12-01

Using astrometric VLBI observations, we have determined the parallax of the black hole X-ray binary V404 Cyg to be 0.418 ± 0.024 mas, corresponding to a distance of 2.39 ± 0.14 kpc, significantly lower than the previously accepted value. This model-independent estimate is the most accurate distance to a Galactic stellar-mass black hole measured to date. With this new distance, we confirm that the source was not super-Eddington during its 1989 outburst. The fitted distance and proper motion imply that the black hole in this system likely formed in a supernova, with the peculiar velocity being consistent with a recoil (Blaauw) kick. The size of the quiescent jets inferred to exist in this system is <1.4 AU at 22 GHz. Astrometric observations of a larger sample of such systems would provide useful insights into the formation and properties of accreting stellar-mass black holes.
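The distance follows directly from the parallax: d in parsecs is the reciprocal of the parallax in arcseconds, so a parallax in milliarcseconds gives kiloparsecs. A quick first-order check of the quoted numbers (illustrative only):

```python
def parallax_to_distance_kpc(parallax_mas, err_mas):
    """d [pc] = 1 / parallax [arcsec]; with parallax in mas this gives
    d in kpc. First-order error propagation: sigma_d/d = sigma_pi/pi."""
    d = 1.0 / parallax_mas
    return d, d * (err_mas / parallax_mas)

d, sigma = parallax_to_distance_kpc(0.418, 0.024)
# reproduces the quoted 2.39 +/- 0.14 kpc to two decimal places
```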

  19. College Students' Judgment of Others Based on Described Eating Pattern

    ERIC Educational Resources Information Center

    Pearson, Rebecca; Young, Michael

    2008-01-01

    Background: The literature available on attitudes toward eating patterns and people choosing various foods suggests the possible importance of "moral" judgments and desirable personality characteristics associated with the described eating patterns. Purpose: This study was designed to replicate and extend a 1993 study of college students'…

  20. Method for describing fractures in subterranean earth formations

    DOEpatents

    Shuck, Lowell Z.

    1977-01-01

    The configuration and directional orientation of natural or induced fractures in subterranean earth formations are described by introducing a liquid explosive into the fracture, detonating the explosive, and then monitoring the resulting acoustic emissions with strategically placed acoustic sensors as the explosion propagates through the fracture at a known rate.

  1. 25. VIEW LOOKING EAST THROUGH 'TUNNEL' DESCRIBED ABOVE. RAILCAR LOADING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. VIEW LOOKING EAST THROUGH 'TUNNEL' DESCRIBED ABOVE. RAILCAR LOADING TUBES AT TOP FOREGROUND, SPERRY CORN ELEVATOR COMPLEX AT RIGHT AND ADJOINING WAREHOUSE AT LEFT - Sperry Corn Elevator Complex, Weber Avenue (North side), West of Edison Street, Stockton, San Joaquin County, CA

  2. Describing an "Effective" Principal: Perceptions of the Central Office Leaders

    ERIC Educational Resources Information Center

    Parylo, Oksana; Zepeda, Sally J.

    2014-01-01

    The purpose of this qualitative study was to examine how district leaders of two school systems in the USA describe an effective principal. Membership categorisation analysis revealed that district leaders believed an effective principal had four major categories of characteristics: (1) documented characteristics (having a track record and being a…

  3. New North American Chrysauginae (Pyralidae) described by Cashatt (1968)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The dissertation entitled “Revision of the Chrysauginae of North America” included new taxa that were never published and do not meet the requirements for availability by the International Code of Nomenclature. Therefore, the following taxa from this dissertation are described and illustrated: Arta ...

  4. A General Problem Describer for Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Wools, Ronald Joe

    Currently in computer-assisted instruction (CAI) systems a number of problems are presented to each student during a session, with each individual problem being specified by the author of the session. A better approach might be to provide the author with a language in which he can describe to the computer the general type of problem he wants his…

  5. 23. FISH CONVEYOR Conveyor described in Photo No. 21. A ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    23. FISH CONVEYOR Conveyor described in Photo No. 21. A portion of a second conveyor is seen on the left. Vertical post knocked askew and cracked cement base of the conveyor, attest to the condition of the building. - Hovden Cannery, 886 Cannery Row, Monterey, Monterey County, CA

  6. Learning Communities and Community Development: Describing the Process.

    ERIC Educational Resources Information Center

    Moore, Allen B.; Brooks, Rusty

    2000-01-01

    Describes features of learning communities: they transform themselves, share wisdom and recognition, bring others in, and share results. Provides the case example of the Upper Savannah River Economic Coalition. Discusses actions of learning communities, barriers to their development, and future potential. (SK)

  7. Tools for describing the reference architecture for space data systems

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Yamada, Takahiro

    2004-01-01

    This paper has briefly presented the Reference Architecture for Space Data Systems (RASDS) that is being developed by the CCSDS Systems Architecture Working Group (SAWG). The SAWG generated some sample architectures (spacecraft onboard architectures, space link architectures, cross-support architectures) using this RASDS approach, and RASDS was proven to be a powerful tool for describing and relating different space data system architectures.

  8. An Evolving Framework for Describing Student Engagement in Classroom Activities

    ERIC Educational Resources Information Center

    Azevedo, Flavio S.; diSessa, Andrea A.; Sherin, Bruce L.

    2012-01-01

    Student engagement in classroom activities is usually described as a function of factors such as human needs, affect, intention, motivation, interests, identity, and others. We take a different approach and develop a framework that models classroom engagement as a function of students' "conceptual competence" in the "specific content" (e.g., the…

  9. Describing Elementary Teachers' Operative Systems: A Case Study

    ERIC Educational Resources Information Center

    Dotger, Sharon; McQuitty, Vicki

    2014-01-01

    This case study introduces the notion of an operative system to describe elementary teachers' knowledge and practice. Drawing from complex systems theory, the operative system is defined as the network of knowledge and practices that constituted teachers' work within a lesson study cycle. Data were gathered throughout a lesson study…

  10. Describing Soils: Calibration Tool for Teaching Soil Rupture Resistance

    ERIC Educational Resources Information Center

    Seybold, C. A.; Harms, D. S.; Grossman, R. B.

    2009-01-01

    Rupture resistance is a measure of the strength of a soil to withstand an applied stress or resist deformation. In soil survey, during routine soil descriptions, rupture resistance is described for each horizon or layer in the soil profile. The lower portion of the rupture resistance classes are assigned based on rupture between thumb and…

  11. How Vocational Teachers Describe Their Vocational Teacher Identity

    ERIC Educational Resources Information Center

    Köpsén, Susanne

    2014-01-01

    Given the current demands of Swedish vocational education and the withdrawal of the requirement for formal teacher competence in vocational subject teachers, the aim of this article is to develop knowledge of what it means to be a vocational subject teacher in an upper secondary school, i.e. how vocational subject teachers describe their…

  12. Judgments about Forces in Described Interactions between Objects

    ERIC Educational Resources Information Center

    White, Peter A.

    2011-01-01

    In 4 experiments, participants made judgments about forces exerted and resistances put up by objects involved in described interactions. Two competing hypotheses were tested: (1) that judgments are derived from the same knowledge base that is thought to be the source of perceptual impressions of forces that occur with visual stimuli, and (2) that…

  13. Describing Middle School Students' Organization of Statistical Data.

    ERIC Educational Resources Information Center

    Johnson, Yolanda; Hofbauer, Pamela

    The purpose of this study was to describe how middle school students physically arrange and organize statistical data. A case-study analysis was used to define and characterize the styles in which students handle, organize, and group statistical data. A series of four statistical tasks (Mooney, Langrall, Hofbauer, & Johnson, 2001) were given to…

  14. School District Personnel Describe One Example of Effective Change Implementation.

    ERIC Educational Resources Information Center

    Jones, Toni Griego

    Three large urban school districts located in the Midwest, Southwest, and West Coast regions were involved in a study designed to reveal district personnel's perceptions of change within their school district. After describing the study, this document analyzes perceptions of change related to one district's new bilingual program that was…

  15. Describing Acupuncture: A New Challenge for Technical Communicators.

    ERIC Educational Resources Information Center

    Karanikas, Marianthe

    1997-01-01

    Considers acupuncture as an increasingly popular alternative medical therapy, but difficult to describe in technical communication. Notes that traditional Chinese medical explanations of acupuncture are unscientific, and that scientific explanations of acupuncture are inconclusive. Finds that technical communicators must translate acupuncture for…

  16. Electronic Health Records: Describing Technological Stressors of Nurse Educators.

    PubMed

    Burke, Mary S; Ellis, D Michele

    2016-01-01

    The purpose of this study was to describe the technological stressors that nurse educators experienced when using electronic health records while teaching clinical courses. Survey results indicated that educators had mild to moderate technological stress when teaching the use of electronic health records to students in clinical nursing courses. PMID:26164324

  17. Superintendents Describe Their Leadership Styles: Implications for Practice

    ERIC Educational Resources Information Center

    Bird, James J.; Wang, Chuang

    2013-01-01

    Superintendents from eight southeastern United States school districts self-described their leadership styles across the choices of autocratic, laissez-faire, democratic, situational, servant, or transformational. When faced with this array of choices, the superintendents chose with arguable equitableness, indicating that successful leaders can…

  18. Whipple Observations

    NASA Astrophysics Data System (ADS)

    Trangsrud, A.

    2015-12-01

    The solar system that we know today was shaped dramatically by events in its dynamic formative years. These events left their signatures at the distant frontier of the solar system, in the small planetesimal relics that populate the vast Oort Cloud, the Scattered Disk, and the Kuiper Belt. To peer in to the history and evolution of our solar system, the Whipple mission will survey small bodies in the large volume that begins beyond the orbit of Neptune and extends out to thousands of AU. Whipple detects these objects when they occult distant stars. The distance and size of the occulting object is reconstructed from well-understood diffraction effects in the object's shadow. Whipple will observe tens of thousands of stars simultaneously with high observing efficiency, accumulating roughly a billion "star-hours" of observations over its mission life. Here we describe the Whipple observing strategy, including target selection and scheduling.
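The "well-understood diffraction effects" operate on the Fresnel scale F = sqrt(λd/2), which sets the size of the fringes in an occultation shadow. A sketch with illustrative numbers (600 nm visible light and a Kuiper Belt distance of ~43 AU; these values are assumptions for illustration, not mission parameters):

```python
import math

AU_M = 1.496e11  # one astronomical unit in metres

def fresnel_scale_m(wavelength_m, distance_au):
    """Fresnel scale F = sqrt(lambda * d / 2): the length scale on which
    diffraction structures a stellar occultation shadow."""
    return math.sqrt(wavelength_m * distance_au * AU_M / 2.0)
```

At 43 AU in visible light F is roughly 1.4 km, so kilometre-sized occulters cast diffraction-dominated shadows from which their size and distance can be reconstructed, as the abstract describes.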

  19. Describing temperament in an ungulate: a multidimensional approach.

    PubMed

    Graunke, Katharina L; Nürnberg, Gerd; Repsilber, Dirk; Puppe, Birger; Langbein, Jan

    2013-01-01

Studies on animal temperament have often described temperament using a one-dimensional scale, whereas theoretical framework has recently suggested two or more dimensions using terms like "valence" or "arousal" to describe these dimensions. Yet, the valence or assessment of a situation is highly individual. The aim of this study was to provide support for the multidimensional framework with experimental data originating from an economically important species (Bos taurus). We tested 361 calves at 90 days post natum (dpn) in a novel-object test. Using a principal component analysis (PCA), we condensed numerous behaviours into fewer variables to describe temperament and correlated these variables with simultaneously measured heart rate variability (HRV) data. The PCA resulted in two behavioural dimensions (principal components, PC): novel-object-related (PC 1) and exploration-activity-related (PC 2). These PCs explained 58% of the variability in our data. The animals were distributed evenly within the two behavioural dimensions independent of their sex. Calves with different scores in these PCs differed significantly in HRV, and thus in the autonomic nervous system's activity. Based on these combined behavioural and physiological data we described four distinct temperament types resulting from two behavioural dimensions: "neophobic/fearful--alert", "interested--stressed", "subdued/uninterested--calm", and "neophilic/outgoing--alert". Additionally, 38 calves were tested at 90 and 197 dpn. Using the same PCA-model, they correlated significantly in PC 1 and tended to correlate in PC 2 between the two test ages. Of these calves, 42% expressed a similar behaviour pattern in both dimensions and 47% in one. No differences in temperament scores were found between sexes or breeds. In conclusion, we described distinct temperament types in calves based on behavioural and physiological measures emphasising the benefits of a multidimensional approach. PMID:24040289
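The dimensionality-reduction step, condensing many behavioural variables into two principal components, is ordinary PCA and can be sketched with a centred SVD (a generic sketch under invented toy data, not the authors' exact pipeline):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project rows of X (animals x behaviours) onto the leading
    principal components of the column-centred data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T    # PC score per animal per component
    explained = (s ** 2) / np.sum(s ** 2)  # variance fraction per PC
    return scores, explained[:n_components]
```

Each animal's pair of PC scores places it in the two-dimensional temperament plane; the explained-variance fractions correspond to the "58% of the variability" figure reported in the abstract.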

  20. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
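The "Markov random walk over the data set" parametrization mentioned in the abstract is the idea behind diffusion maps: turn pairwise affinities into transition probabilities and use the leading non-trivial eigenvectors as low-dimensional coordinates. A minimal sketch of this generic construction (not the thesis' exact method; the bandwidth `eps` is an assumed parameter):

```python
import numpy as np

def diffusion_coords(X, eps=1.0, n_coords=2):
    """Diffusion-map-style embedding: Gaussian affinities -> row-stochastic
    Markov matrix -> leading non-trivial eigenvectors as coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.exp(-d2 / eps)                                # affinity matrix
    P = W / W.sum(axis=1, keepdims=True)                 # random-walk transitions
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                       # sort by eigenvalue
    # skip the trivial eigenvector (eigenvalue 1, constant over the data)
    return vecs.real[:, order[1:1 + n_coords]]
```

The resulting coordinates preserve the connectivity structure of the walk, which is why they make good prototypes, regression eigenfunctions, and classifier features in the applications listed above.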

1. Key of Packaged Grain Quantity Recognition - Research on Processing and Describing of "Fish Scale Body"

    NASA Astrophysics Data System (ADS)

    Lin, Ying; Fang, Xinglin; Sun, Yueheng; Sun, Yanhong

The key to identifying packaged grain is the shape of the package, and the key to identifying the shape is processing and describing the package boundary. Based on extensive analysis and experiments, this article selects the Canny operator and the chain code to process and describe the package boundary. Because the detected boundary is not fully connected, the closing operation of mathematical morphology is introduced as a pretreatment on the binary image of the packaged grain, after which the boundary becomes fully connected. Experiments show that the proposed method enhances the anti-jamming capability and robustness of the edge detection.
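The pipeline in the abstract, closing small gaps in the binary boundary with morphological closing and then describing the boundary with a chain code, can be sketched in plain NumPy. This is a simplified illustration with a 3×3 structuring element and an assumed Freeman direction convention; the Canny edge-detection step itself is omitted:

```python
import numpy as np

def close_binary(img):
    """Morphological closing (3x3 dilation then erosion) to bridge
    small gaps so the boundary becomes fully connected."""
    h, w = img.shape
    pad = np.pad(img, 1)
    dil = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            dil |= pad[dy:dy + h, dx:dx + w]   # ON if any 3x3 neighbour is ON
    pad = np.pad(dil, 1)
    ero = np.ones_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            ero &= pad[dy:dy + h, dx:dx + w]   # ON only if all neighbours are ON
    return ero

# 8-direction Freeman chain code, 0 = east, counter-clockwise
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Freeman chain code of an ordered boundary point list (row, col);
    consecutive points must be 8-neighbours."""
    return [DIRS.index((r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(points, points[1:])]
```

Closing fills gaps smaller than the structuring element, so a boundary trace no longer terminates early, and the chain code then gives a compact, rotation-comparable description of the package outline.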

  2. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light-ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass-separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10¹² neutrons/s) at energies of 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton-induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron-induced fission of various actinides.

  3. Fast and Provably Accurate Bilateral Filtering

    NASA Astrophysics Data System (ADS)

    Chaudhury, Kunal N.; Dabhade, Swapnil D.

    2016-06-01

The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$. The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
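To see what the fast algorithm is accelerating, it helps to write out the direct $O(S)$ computation. A sketch of the brute-force bilateral filter on a 1-D signal (illustrative only; the paper's $O(1)$ Gaussian-range approximation is not reproduced here, and the parameter values are arbitrary):

```python
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct bilateral filter: each sample becomes a weighted mean of
    its neighbours, weighted by both spatial distance (sigma_s) and
    intensity difference (sigma_r), so edges are preserved."""
    out = []
    for i, x in enumerate(signal):
        num = den = 0.0
        for k in range(-radius, radius + 1):
            j = i + k
            if 0 <= j < len(signal):
                w = math.exp(-k * k / (2 * sigma_s ** 2)) \
                    * math.exp(-(signal[j] - x) ** 2 / (2 * sigma_r ** 2))
                num += w * signal[j]
                den += w
        out.append(num / den)
    return out
```

On a step signal the range kernel suppresses weights across the discontinuity, so the step survives while each flat region is smoothed, which is exactly the edge-preserving behaviour the abstract refers to.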

  4. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, desolvation forces, etc.) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike the conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method achieved prediction of RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807

  5. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722

  6. How Accurate are SuperCOSMOS Positions?

    NASA Astrophysics Data System (ADS)

    Schaefer, Adam; Hunstead, Richard; Johnston, Helen

    2014-02-01

    Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1 373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than BJ = 20 and Galactic latitudes |b| > 10°. Position differences showed an rms scatter of 0.16 arcsec in right ascension and declination. While overall systematic offsets were < 0.1 arcsec in each hemisphere, both the systematics and scatter were greater in the north.

  7. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  8. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  9. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  10. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography

    PubMed Central

    Haley, William E.; Ibrahim, El-Sayed H.; Qu, Mingliang; Cernigliaro, Joseph G.; Goldfarb, David S.; McCollough, Cynthia H.

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  11. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography.

    PubMed

    Haley, William E; Ibrahim, El-Sayed H; Qu, Mingliang; Cernigliaro, Joseph G; Goldfarb, David S; McCollough, Cynthia H

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  12. A model describing vestibular detection of body sway motion.

    NASA Technical Reports Server (NTRS)

    Nashner, L. M.

    1971-01-01

    An experimental technique was developed which facilitated the formulation of a quantitative model describing vestibular detection of body sway motion in a postural response mode. All cues, except vestibular ones, which gave a subject an indication that he was beginning to sway, were eliminated using a specially designed two-degree-of-freedom platform; body sway was then induced and resulting compensatory responses at the ankle joints measured. Hybrid simulation compared the experimental results with models of the semicircular canals and utricular otolith receptors. Dynamic characteristics of the resulting canal model compared closely with characteristics of models which describe eye movement and subjective responses to body rotational motions. The average threshold level, in the postural response mode, however, was considerably lower. Analysis indicated that the otoliths probably play no role in the initial detection of body sway motion.

  13. A gene feature enumeration approach for describing HLA allele polymorphism.

    PubMed

    Mack, Steven J

    2015-12-01

    HLA genotyping via next generation sequencing (NGS) poses challenges for the use of HLA allele names to analyze and discuss sequence polymorphism. NGS will identify many new synonymous and non-coding HLA sequence variants. Allele names identify the types of nucleotide polymorphism that define an allele (non-synonymous, synonymous and non-coding changes), but do not describe how polymorphism is distributed among the individual features (the flanking untranslated regions, exons and introns) of a gene. Further, HLA alleles cannot be named in the absence of antigen-recognition domain (ARD) encoding exons. Here, a system for describing HLA polymorphism in terms of HLA gene features (GFs) is proposed. This system enumerates the unique nucleotide sequences for each GF in an HLA gene, and records these in a GF enumeration notation that allows both more granular dissection of allele-level HLA polymorphism and the discussion and analysis of GFs in the absence of ARD-encoding exon sequences. PMID:26416087

  14. Dynamics of dislocations described as evolving curves interacting with obstacles

    NASA Astrophysics Data System (ADS)

    Pauš, Petr; Beneš, Michal; Kolář, Miroslav; Kratochvíl, Jan

    2016-03-01

    In this paper we describe a model of glide dislocation interaction with obstacles based on planar curve dynamics. The dislocations are represented as smooth curves evolving in a slip plane according to the mean curvature motion law, and are mathematically described by the parametric approach. We enhance the parametric model by employing so-called tangential redistribution of curve points to increase stability during numerical computation. We developed additional algorithms for topological changes (i.e. merging and splitting of dislocation curves), enabling detailed modelling of dislocation interaction with obstacles. The evolving dislocations are approximated as moving piecewise-linear curves. The obstacles are represented as idealized circular areas of a repulsive stress. Our model is numerically solved by means of the semi-implicit flowing finite volume method. We present results of qualitative and quantitative computational studies in which we demonstrate the topological changes and discuss the effect of tangential redistribution of curve points on the computational results.

  15. Describing spatial pattern in stream networks: A practical approach

    USGS Publications Warehouse

    Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.

    2005-01-01

    The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.
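    The empirical-variogram computation the abstract describes can be sketched directly. Because pairwise distances are supplied as a matrix, a network (path) distance is handled exactly like a Euclidean one; the site distances and abundance values below are hypothetical.

```python
import numpy as np

def empirical_variogram(dists, values, bin_edges):
    # Semivariance for each site pair: gamma_ij = 0.5 * (z_i - z_j)^2,
    # averaged within lag-distance bins. Since `dists` is just a matrix,
    # network (path) distances work as well as Euclidean ones.
    n = len(values)
    i, j = np.triu_indices(n, k=1)
    d = dists[i, j]
    gamma = 0.5 * (values[i] - values[j]) ** 2
    bins = np.digitize(d, bin_edges)
    result = []
    for b in range(1, len(bin_edges)):
        mask = bins == b
        if mask.any():
            result.append((d[mask].mean(), gamma[mask].mean(), int(mask.sum())))
    return result  # (mean lag, semivariance, pair count) per bin

# Hypothetical example: four sites along a stream, network distances in km
dists = np.array([[0., 1., 2., 3.],
                  [1., 0., 1., 2.],
                  [2., 1., 0., 1.],
                  [3., 2., 1., 0.]])
values = np.array([5.0, 4.0, 2.0, 1.0])  # relative abundance at each site
vario = empirical_variogram(dists, values, bin_edges=[0.0, 1.5, 3.5])
```

A non-parametric randomness check, as in the paper, would then compare this curve against variograms recomputed under permutations of `values`.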

  16. Cooperation with school nurses described by Finnish sixth graders.

    PubMed

    Mäenpää, Tiina; Paavilainen, Eija; Astedt-Kurki, Päivi

    2007-10-01

    This paper deals with research on cooperation with the school nurse described by sixth graders. The data were collected via six focus group interviews in 2003-2004. Twenty-two sixth graders (aged 11-12 years) participated in the research. The data were analysed by the constant comparison method based on grounded theory. The analysis yielded a number of concepts that describe the basis of the cooperation: the trusted expertise of the school nurse, informative interaction with the family and knowing the family situation. The cooperation consisted of supporting the pupil's growth and development, the need for individual counselling, and supporting coping at school. The cooperation was characterized by an open atmosphere and friendliness, a low level of reciprocity, the school nurse's stereotyped activities and respect for the pupil's privacy. Pupils' experiences and perspectives can be used to develop more holistic strategies for the school health service. PMID:17883717

  17. A geostatistical approach for describing spatial pattern in stream networks

    USGS Publications Warehouse

    Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.

    2005-01-01

    The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.

  18. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even at a low facial image resolution (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. The filtering correlation therefore works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.

  19. An accurate model potential for alkali neon systems.

    PubMed

    Zanuttini, D; Jacquet, E; Giglio, E; Douady, J; Gervais, B

    2009-12-01

    We present a detailed investigation of the ground and lowest excited states of M-Ne dimers, for M=Li, Na, and K. We show that the potential energy curves of these Van der Waals dimers can be obtained accurately by considering the alkali neon systems as one-electron systems. Following previous authors, the model describes the evolution of the alkali valence electron in the combined potentials of the alkali and neon cores by means of core polarization pseudopotentials. The key parameter for an accurate model is the M(+)-Ne potential energy curve, which was obtained by means of ab initio CCSD(T) calculation using a large basis set. For each MNe dimer, a systematic comparison with ab initio computation of the potential energy curve for the X, A, and B states shows the remarkable accuracy of the model. The vibrational analysis and the comparison with existing experimental data strengthens this conclusion and allows for a precise assignment of the vibrational levels. PMID:19968334

  20. A fast and accurate decoder for underwater acoustic telemetry

    NASA Astrophysics Data System (ADS)

    Ingraham, J. M.; Deng, Z. D.; Li, X.; Fu, T.; McMichael, G. A.; Trumbo, B. A.

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.

  1. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system. PMID:25085162

  2. An alternative to soil taxonomy for describing key soil characteristics

    USGS Publications Warehouse

    Duniway, Michael C.; Miller, Mark E.; Brown, Joel R.; Toevs, Gordon

    2013-01-01

    is not a simple task. Furthermore, because the US system of soil taxonomy is not applied universally, its utility as a means for effectively describing soil characteristics to readers in other countries is limited. Finally, and most importantly, even at the finest level of soil classification there are often large within-taxa variations in critical properties that can determine ecosystem responses to drivers such as climate and land-use change.

  3. New model describing the dynamical behaviour of penetration rates

    NASA Astrophysics Data System (ADS)

    Tashiro, Tohru; Minagawa, Hiroe; Chiba, Michiko

    2013-02-01

    We propose a hierarchical logistic equation as a model to describe the dynamical behaviour of the penetration rate of a prevalent product. In this model, a memory effect is considered which does not exist in the logistic model: how many people who already possess the product a person who does not yet possess it has met. As an application, we apply this model to iPod sales data, and find that this model can approximate the data much better than the logistic equation.
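    For orientation, the baseline the authors compare against is the ordinary logistic equation dp/dt = a*p*(1 - p). The sketch below integrates only this baseline (the paper's hierarchical memory term is not reproduced here), with illustrative parameter values.

```python
def logistic_curve(a, p0, dt, steps):
    # Forward-Euler integration of the logistic equation dp/dt = a*p*(1 - p),
    # the standard model for a penetration rate saturating at 100%.
    p = p0
    curve = [p]
    for _ in range(steps):
        p += dt * a * p * (1.0 - p)
        curve.append(p)
    return curve

# Illustrative values: 1% initial penetration, growth rate 1 per unit time
curve = logistic_curve(a=1.0, p0=0.01, dt=0.1, steps=200)
```

The resulting S-curve rises slowly, accelerates, and saturates near 100%; the paper's memory term modifies the shape of this rise.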

  4. Magnet hospital nurses describe control over nursing practice.

    PubMed

    Kramer, Marlene; Schmalenberg, Claudia E

    2003-06-01

    Staff nurses describe control over nursing practice (C/NP) as a professional nursing function made up of a variety of activities and outcomes. Greater acclaim, status, and prestige for nursing in the organization are viewed as a result, not a precursor, of C/NP. Interviews with 279 staff nurses working in 14 magnet hospitals indicated that effective C/NP requires some kind of empowered, formal organizational structure, extends beyond clinical decision making at the patient care interface, and is the same as or highly similar to what the literature describes as professional autonomy. From constant comparative analysis of nurses' descriptions of C/NP activities, five ranked categories of this real-life event emerged. The basis for the categories and ranking was "who owned the problem, issue, and solution" and the "degree of effectiveness of control" as reflected in visibility, viability, and recognition of a formal structure allowing and encouraging nurses' control over practice. Hospital mergers and structural reorganization were reported to negatively affect the structure needed for effective C/NP. Almost 60% of these magnet hospital staff nurses stated and/or described little or no C/NP. PMID:12790058

  5. How to describe genes: enlightenment from the quaternary number system.

    PubMed

    Ma, Bin-Guang

    2007-01-01

    As an open problem, computational gene identification has been widely studied, and many gene finders (software tools) are available today. However, little attention has been given to the problem of describing the common features of known genes in databanks to transform raw data into human-understandable knowledge. In this paper, we draw attention to the task of describing genes and propose a trial implementation by treating DNA sequences as quaternary numbers. Under such a treatment, the common features of genes can be represented by a "position weight function", the core concept for a number system. In principle, the "position weight function" can be any real-valued function. In this paper, by approximating the function using trigonometric functions, characteristic parameters indicating single-nucleotide periodicities were obtained for the genome of the bacterium Escherichia coli K12 and the genome of the eukaryote yeast. As a byproduct of this approach, a single-nucleotide-level measure is derived that complements codon-based indexes in describing the coding quality and expression level of an open reading frame (ORF). The ideas presented here have the potential to become a general methodology for biological sequence analysis. PMID:16945479
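    The "DNA as a quaternary number" idea can be made concrete with a tiny sketch. The digit assignment A=0, C=1, G=2, T=3 is an illustrative choice, and the fixed positional weights 4**k stand in for the paper's more general position weight function.

```python
DIGIT = {"A": 0, "C": 1, "G": 2, "T": 3}  # illustrative base-4 digit assignment

def quaternary_value(seq):
    # Read a DNA string as a base-4 integer, most significant base first.
    value = 0
    for base in seq:
        value = value * 4 + DIGIT[base]
    return value

print(quaternary_value("ACGT"))  # 0*64 + 1*16 + 2*4 + 3 = 27
```

Replacing the weight 4**k at each position with an arbitrary real-valued function of position recovers the "position weight function" generalization described in the abstract.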

  6. Describing, Analysing and Judging Language Codes in Cinematic Discourse

    ERIC Educational Resources Information Center

    Richardson, Kay; Queen, Robin

    2012-01-01

    In this short commentary piece, the authors stand back from many of the specific details in the seven papers which constitute the special issue, and offer some observations which attempt to identify and assess points of similarity and difference amongst them, under a number of different general headings. To the extent that the "sociolinguistics of…

  7. Accurate multiplex gene synthesis from programmable DNA microchips

    NASA Astrophysics Data System (ADS)

    Tian, Jingdong; Gong, Hui; Sheng, Nijing; Zhou, Xiaochuan; Gulari, Erdogan; Gao, Xiaolian; Church, George

    2004-12-01

    Testing the many hypotheses from genomics and systems biology experiments demands accurate and cost-effective gene and genome synthesis. Here we describe a microchip-based technology for multiplex gene synthesis. Pools of thousands of `construction' oligonucleotides and tagged complementary `selection' oligonucleotides are synthesized on photo-programmable microfluidic chips, released, amplified and selected by hybridization to reduce synthesis errors ninefold. A one-step polymerase assembly multiplexing reaction assembles these into multiple genes. This technology enabled us to synthesize all 21 genes that encode the proteins of the Escherichia coli 30S ribosomal subunit, and to optimize their translation efficiency in vitro through alteration of codon bias. This is a significant step towards the synthesis of ribosomes in vitro and should have utility for synthetic biology in general.

  8. Accurate Determination of Conformational Transitions in Oligomeric Membrane Proteins

    PubMed Central

    Sanz-Hernández, Máximo; Vostrikov, Vitaly V.; Veglia, Gianluigi; De Simone, Alfonso

    2016-01-01

    The structural dynamics governing collective motions in oligomeric membrane proteins play key roles in vital biomolecular processes at cellular membranes. In this study, we present a structural refinement approach that combines solid-state NMR experiments and molecular simulations to accurately describe concerted conformational transitions identifying the overall structural, dynamical, and topological states of oligomeric membrane proteins. The accuracy of the structural ensembles generated with this method is shown to reach the statistical error limit, and is further demonstrated by correctly reproducing orthogonal NMR data. We demonstrate the accuracy of this approach by characterising the pentameric state of phospholamban, a key player in the regulation of calcium uptake in the sarcoplasmic reticulum, and by probing its dynamical activation upon phosphorylation. Our results underline the importance of using an ensemble approach to characterise the conformational transitions that are often responsible for the biological function of oligomeric membrane protein states. PMID:26975211

  9. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings, but they have also brought about an explosive growth in the pharmaceuticals currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly based on the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by the WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy rate of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  10. Accurate localization of needle entry point in interventional MRI.

    PubMed

    Daanen, V; Coste, E; Sergent, G; Godart, F; Vasseur, C; Rousseau, J

    2000-10-01

    In interventional magnetic resonance imaging (MRI), the systems designed to help the surgeon during biopsy must provide accurate knowledge of the positions of the target and also the entry point of the needle on the skin of the patient. In some cases, this needle entry point can be outside the B(0) homogeneity area, where the distortions may be larger than a few millimeters. In that case, major correction for geometric deformation must be performed. Moreover, the use of markers to highlight the needle entry point is inaccurate. The aim of this study was to establish a three-dimensional coordinate correction according to the position of the entry point of the needle. We also describe a 2-degree of freedom electromechanical device that is used to determine the needle entry point on the patient's skin with a laser spot. PMID:11042649

  11. A fast and accurate FPGA based QRS detection system.

    PubMed

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has an accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach. PMID:19163797
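    The thresholding rule the abstract names — the next detection threshold derived from the median of the eight most recently detected peaks — can be sketched as follows; the scaling fraction is a hypothetical tuning parameter, not taken from the paper.

```python
from statistics import median

def next_threshold(peak_amplitudes, fraction=0.5):
    # Adaptive QRS detection threshold: a fraction of the median of the
    # eight most recently detected peak amplitudes. `fraction` is an
    # illustrative tuning parameter.
    recent = peak_amplitudes[-8:]
    return fraction * median(recent)

peaks = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.05, 1.1]  # hypothetical R-peak amplitudes
threshold = next_threshold(peaks)
```

Using the median rather than the mean makes the threshold robust to a single spuriously large or small detected peak, which suits a fixed-point hardware pipeline.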

  12. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    NASA Astrophysics Data System (ADS)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High-efficiency high-concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site-specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time-sequence model that accurately accounts for these effects and predicts annual energy production, with specific reference to the impact of the inverter on energy output and the interactions between system-level design decisions and the inverter. We also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.

  13. HERMES: A Model to Describe Deformation, Burning, Explosion, and Detonation

    SciTech Connect

    Reaugh, J E

    2011-11-22

    HERMES (High Explosive Response to MEchanical Stimulus) was developed to fill the need for a model to describe an explosive response of the type described as BVR (Burn to Violent Response) or HEVR (High Explosive Violent Response). Characteristically this response leaves a substantial amount of explosive unconsumed, the time to reaction is long, and the peak pressure developed is low. In contrast, detonations characteristically consume all explosive present, the time to reaction is short, and peak pressures are high. However, most of the previous models to describe explosive response were models for detonation. The earliest models to describe the response of explosives to mechanical stimulus in computer simulations were applied to intentional detonation (performance) of nearly ideal explosives. In this case, an ideal explosive is one with a vanishingly small reaction zone. A detonation is supersonic with respect to the undetonated explosive (reactant). The reactant cannot respond to the pressure of the detonation before the detonation front arrives, so the precise compressibility of the reactant does not matter. Further, the mesh sizes that were practical for the computer resources then available were large with respect to the reaction zone. As a result, methods then used to model detonations, known as {beta}-burn or program burn, were not intended to resolve the structure of the reaction zone. Instead, these methods spread the detonation front over a few finite-difference zones, in the same spirit that artificial viscosity is used to spread the shock front in inert materials over a few finite-difference zones. These methods are still widely used when the structure of the reaction zone and the build-up to detonation are unimportant. Later detonation models resolved the reaction zone. These models were applied both to performance, particularly as it is affected by the size of the charge, and to situations in which the stimulus was less than that needed for reliable

  14. Experimental verification of a model describing the intensity distribution from a single mode optical fiber

    SciTech Connect

    Moro, Erik A; Puckett, Anthony D; Todd, Michael D

    2011-01-24

    The intensity distribution of a transmission from a single mode optical fiber is often approximated using a Gaussian-shaped curve. While this approximation is useful for some applications such as fiber alignment, it does not accurately describe transmission behavior off the axis of propagation. In this paper, another model is presented, which describes the intensity distribution of the transmission from a single mode optical fiber. A simple experimental setup is used to verify the model's accuracy, and agreement between model and experiment is established both on and off the axis of propagation. Displacement sensor designs based on the extrinsic optical lever architecture are presented. The behavior of the transmission off the axis of propagation dictates the performance of sensor architectures where large lateral offsets (25-1500 µm) exist between transmitting and receiving fibers. The practical implications of modeling accuracy over this lateral offset region are discussed as they relate to the development of high-performance intensity modulated optical displacement sensors. In particular, the sensitivity, linearity, resolution, and displacement range of a sensor are functions of the relative positioning of the sensor's transmitting and receiving fibers. Sensor architectures with high combinations of sensitivity and displacement range are discussed. It is concluded that the utility of the accurate model is in its predictive capability and that this research could lead to an improved methodology for high-performance sensor design.
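    The Gaussian approximation the abstract discusses has the standard on-axis-normalized form I(r) = exp(-2 r^2 / w^2) with mode-field radius w; a minimal sketch, where the radius value is a hypothetical telecom-fiber figure:

```python
import math

def gaussian_intensity(r, w):
    # On-axis-normalized Gaussian beam profile, I(r) = exp(-2 r^2 / w^2).
    # A common near-axis approximation for single-mode fiber output; as the
    # paper notes, it degrades off the axis of propagation.
    return math.exp(-2.0 * r * r / (w * w))

w = 4.6e-6  # hypothetical mode-field radius (metres)
print(gaussian_intensity(w, w))  # drops to exp(-2), about 13.5%, at r = w
```

At lateral offsets tens to hundreds of times w, as in the sensor geometries above, this tail behavior is exactly where the Gaussian form breaks down, motivating the paper's alternative model.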

  15. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  16. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% deviation for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE and, therefore, determination of the LSE per color channel and dose delivered to the film. PMID:26689962
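    The dose-to-optical-density mapping the abstract quotes follows the usual definition OD = -log10(T) for fractional transmission T; a small helper, illustrative only:

```python
import math

def optical_density(transmission):
    # Optical density from fractional transmission: OD = -log10(T).
    return -math.log10(transmission)

def transmission_from_od(od):
    # Inverse mapping: T = 10**(-OD).
    return 10.0 ** (-od)

print(optical_density(0.1))  # an OD of 1.0 means 10% transmission
```

A per-channel LSE correction, as the paper recommends, would be applied to the scanner's transmission readout before converting to OD and then to dose.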

  17. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  18. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, which in turn requires determining the LSE per color channel and the dose delivered to the film.
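
    In practice, a correction of the kind the authors call for can be applied per color channel as a factor depending on lateral position and film density. The quadratic form and the coefficients below are purely illustrative assumptions, not the fit reported in the paper:

```python
import numpy as np

# Hypothetical per-channel lateral-scan-effect (LSE) correction sketch.
# The abstract reports the LSE grows with lateral distance and with dose
# (up to ~14% at 9 Gy); a quadratic, density-dependent factor is one
# plausible parameterization. Coefficients a, b are invented per-channel
# fit constants, not values from the paper.

def lse_correction_factor(x_mm, od, a=4e-6, b=5e-6):
    """Multiplicative readout error at lateral position x_mm (0 = scan axis)
    for a film of optical density od."""
    return 1.0 + (a + b * od) * x_mm**2

def correct_pixel(measured_od, x_mm, a=4e-6, b=5e-6):
    """Divide out the lateral error to approximate the on-axis reading."""
    return measured_od / lse_correction_factor(x_mm, measured_od, a, b)
```

    Per the paper's conclusion, such constants would have to be determined separately for each color channel and dose level.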

  19. Effect of Display Color on Pilot Performance and Describing Functions

    NASA Technical Reports Server (NTRS)

    Chase, Wendell D.

    1997-01-01

    A study has been conducted with the full-spectrum, calligraphic, computer-generated display system to determine the effect of chromatic content of the visual display upon pilot performance during the landing approach maneuver. This study utilizes a new digital chromatic display system, which has previously been shown to improve the perceived fidelity of out-the-window display scenes, and presents the results of an experiment designed to determine the effects of display color content by the measurement of both vertical approach performance and pilot-describing functions. This method was selected to more fully explore the effects of visual color cues used by the pilot. Two types of landing approaches were made: dynamic and frozen range, with either a landing approach scene or a perspective array display. The landing approach scene was presented with either red runway lights and blue taxiway lights or with the colors reversed, and the perspective array with red lights, blue lights, or red and blue lights combined. The vertical performance measures obtained in this experiment indicated that the pilots performed best with the blue and red/blue displays and worst with the red displays. The describing-function system analysis showed more variation with the red displays. The crossover frequencies were lowest with the red displays and highest with the combined red/blue displays, which provided the best overall tracking performance. Describing-function performance measures, vertical performance measures, and pilot opinion support the hypothesis that specific colors in displays can influence the pilots' control characteristics during the final approach.

  20. A Physiology-Based Model Describing Heterogeneity in Glucose Metabolism

    PubMed Central

    Maas, Anne H.; Rozendaal, Yvonne J. W.; van Pul, Carola; Hilbers, Peter A. J.; Cottaar, Ward J.; Haak, Harm R.; van Riel, Natal A. W.

    2014-01-01

    Background: Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES shall provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. Method: The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. Results: All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters makes it possible to describe heterogeneity in the data and shows the capabilities of this model for individualization. Conclusion: We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. PMID:25526760
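
    The compartmental structure described above can be illustrated with a deliberately reduced sketch: two states (plasma glucose and insulin) fed by a gut compartment, instead of the model's four compartments, six coupled ODEs and fourteen parameters. All equations and rate constants here are invented for illustration; they are not the E-DES equations:

```python
import numpy as np

def simulate_ogtt(t_end=180.0, dt=0.1, g_b=5.0, i_b=10.0, dose=75.0,
                  k_abs=0.03, k_g=0.02, s_i=0.002, k_i=0.1, conv=0.02):
    """Forward-Euler integration of plasma glucose G (mmol/l) and insulin I
    (arbitrary units) after an oral glucose dose entering via a gut
    compartment. All rate constants are illustrative, not fitted values."""
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n)
    G, I, gut = g_b, i_b, dose
    glucose = np.empty(n)
    for k in range(n):
        glucose[k] = G
        ra = k_abs * gut                                    # gut absorption rate
        dG = conv * ra - k_g * (G - g_b) - s_i * (I - i_b) * G
        dI = k_i * (G - g_b) - k_i * (I - i_b)              # beta-cell response
        gut -= k_abs * gut * dt
        G += dG * dt
        I += dI * dt
    return t, glucose
```

    With these made-up parameters, simulated glucose rises from baseline after the oral dose and relaxes back, mimicking the qualitative shape of an OGTT curve; individualization in the spirit of the paper would amount to adjusting the insulin-sensitivity (s_i) and beta-cell (k_i) parameters.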

  1. A proposal to describe a phenomenon of expanding language

    NASA Astrophysics Data System (ADS)

    Swietorzecka, Kordula

    Changes of knowledge, convictions or beliefs are a subject of interest within so-called epistemic logic. Various descriptions have been proposed of a process (or its results) in which a so-called agent may introduce certain changes into a set of sentences he had already chosen as the basis of his knowledge, convictions or beliefs (the case of many agents is also considered). In the present paper we are interested in the changeability of an agent's language, which is in itself independent of the changes already mentioned. Modern epistemic formalizations assume that the agent uses a fixed (and so, we could say, static) language in which he expresses his various opinions, which may change. Our interest is to simulate a situation in which a language is extended by adding to it new expressions that were previously unknown to the agent, so that he could not even consider them as subjects of his opinions. Such a phenomenon actually occurs both in natural and in scientific languages: consider how languages expand in the process of learning, or as a result of new data about some described domain. We propose a simple idealization of extending a sentential language used by one agent. The language is treated as a family of so-called n-languages, which receive an epistemic interpretation. The proposed semantics enables us to distinguish between two different types of changes: those which occur because the agent's convictions about the logical values of some n-sentences change, described using the one-place operator C, to be read "it changes that", and changes that consist in increasing the level of the n-language by adding new expressions to it. The second type of change, symbolized by the variable G, may also be considered independently of the first. The logical frame of our considerations was originally used to describe the Aristotelian theory of substantial changes; this time we apply it in epistemology.

  2. Newly Described Clinical and Immunopathological Feature of Dermatitis Herpetiformis

    PubMed Central

    Bonciolini, Veronica; Bonciani, Diletta; Verdelli, Alice; D'Errico, Antonietta; Antiga, Emiliano; Fabbri, Paolo; Caproni, Marzia

    2012-01-01

    Dermatitis herpetiformis (DH) is an inflammatory cutaneous disease with typical histopathological and immunopathological findings, clinically characterized by intensely pruritic polymorphic lesions with a chronic-relapsing course. In addition to the classic clinical manifestations of DH, atypical variants are reported more and more frequently, together with new histological and immunological findings, while the impact of DH on patients' quality of life makes a certain diagnosis increasingly important. The aim of this paper is to describe all the possible clinical, histological, and immunological variants of DH in order to facilitate the diagnosis of a rare and therefore little-known disease. PMID:22701503

  3. Feshbach resonance described by boson-fermion coupling

    SciTech Connect

    Domanski, T.

    2003-07-01

    We consider the possibility of describing the Feshbach resonance in terms of the boson-fermion (BF) model. Using such a model, we show that after a gradual disentangling of the boson from the fermion subsystem, resonant-type scattering between fermions is indeed generated. We decouple the subsystems via (a) a single-step and (b) a continuous canonical transformation. With the second, we investigate the feedback effects that effectively lead to a finite amplitude of the scattering strength. We study them in detail in the normal T>T{sub c} and superconducting T{<=}T{sub c} states.

  4. Can CA describe collective effects of polluting agents?

    NASA Astrophysics Data System (ADS)

    Troisi, A.

    2015-03-01

    Pollution represents one of the most relevant issues of our time. Several studies are on stage, but they generally do not consider competitive effects, paying attention only to specific agents and their impact. In this paper a different scheme is suggested. First, a formal model of competitive noxious effects is proposed. Second, by generalizing a previous algorithm capable of describing urban growth, a cellular automata (CA) model is developed that provides the effective impact of a variety of pollutants. The final achievement is a simulation tool that can model the combined effects of pollutants and their dynamical evolution in relation to anthropized environments.
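
    A cellular automaton of this kind updates every cell synchronously from its neighbourhood. The toy update below, with two diffusing pollutant fields and a multiplicative coupling standing in for the "competitive" interaction, is an illustrative sketch rather than the paper's actual rules:

```python
import numpy as np

def step(grid, diffusion=0.2, decay=0.02, coupling=0.05):
    """One synchronous CA update on a (2, H, W) array of two pollutant
    fields with periodic boundaries; returns the new grid and a combined
    'impact' field. Rules and constants are illustrative assumptions."""
    new = grid.copy()
    for p in range(grid.shape[0]):
        # von Neumann neighbourhood mean via periodic shifts
        neigh = (np.roll(grid[p], 1, 0) + np.roll(grid[p], -1, 0) +
                 np.roll(grid[p], 1, 1) + np.roll(grid[p], -1, 1)) / 4.0
        new[p] += diffusion * (neigh - grid[p]) - decay * grid[p]
    # toy synergistic term: co-located pollutants amplify each other's impact
    impact = coupling * grid[0] * grid[1]
    return new, impact
```

    Iterating `step` from point sources spreads each pollutant over the grid while total mass decays, and the impact field is nonzero only where the two agents overlap.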

  5. The Global Geodetic Infrastructure for Accurate Monitoring of Earth Systems

    NASA Astrophysics Data System (ADS)

    Weston, Neil; Blackwell, Juliana; Wang, Yan; Willis, Zdenka

    2014-05-01

    The National Geodetic Survey (NGS) and the Integrated Ocean Observing System (IOOS), two Program Offices within the National Ocean Service, NOAA, routinely collect, analyze and disseminate observations and products from several of the 17 critical systems identified by the U.S. Group on Earth Observations. Gravity, sea level monitoring, coastal zone and ecosystem management, geo-hazards and deformation monitoring and ocean surface vector winds are the primary Earth systems that have active research and operational programs in NGS and IOOS. These Earth systems collect terrestrial data but most rely heavily on satellite-based sensors for analyzing impacts and monitoring global change. One fundamental component necessary for monitoring via satellites is having a stable, global geodetic infrastructure where an accurate reference frame is essential for consistent data collection and geo-referencing. This contribution will focus primarily on system monitoring, coastal zone management and global reference frames and how the scientific contributions from NGS and IOOS continue to advance our understanding of the Earth and the Global Geodetic Observing System.

  6. Macro parameters describing the mechanical behavior of classical guitars.

    PubMed

    Elie, Benjamin; Gautier, François; David, Bertrand

    2012-12-01

    Since the 1960s and 1970s, researchers have proposed simplified models using only a few parameters to describe the vibro-acoustical behavior of string instruments in the low-frequency range. This paper presents a method for deriving and estimating a few important parameters or features describing the mechanical behavior of classical guitars over a broader frequency range. These features are selected under the constraint that the measurements may readily be made in the workshop of an instrument maker. The computations of these features use estimates of the modal parameters over a large frequency range, made with the high-resolution subspace ESPRIT algorithm (Estimation of Signal Parameters via Rotational Invariance Techniques) and the signal enumeration technique ESTER (ESTimation of ERror). The methods are applied to experiments on real metal and wood plates and numerical simulations of them. The results on guitars show a nearly constant mode density in the mid- and high-frequency ranges, as is found for a flat panel. Four features are chosen as characteristic parameters of this equivalent plate: mass, rigidity, characteristic admittance, and the mobility deviation. Application to a set of 12 guitars indicates that these features are good candidates to discriminate different classes of classical guitars. PMID:23231130
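
    The ESPRIT step at the heart of such modal estimation can be sketched compactly: build a Hankel data matrix, take the signal subspace from its SVD, and exploit rotational invariance to recover the signal poles (frequency and damping of each mode). This minimal version omits the ESTER order-selection step and all practical refinements used in the paper:

```python
import numpy as np

def esprit(x, p, m=None):
    """Estimate the complex poles z_k of p exponential components in x.
    Mode k's frequency is angle(z_k)/(2*pi*dt) and damping log|z_k|/dt."""
    n = len(x)
    m = m or n // 2
    # Hankel data matrix: H[i, j] = x[i + j]
    H = np.array([x[i:i + n - m] for i in range(m)])
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :p]                                  # p-dimensional signal subspace
    # rotational invariance: shifting down one row multiplies by Phi
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    return np.linalg.eigvals(Phi)
```

    For a single noiseless damped sinusoid with p = 2 (one conjugate pole pair), the recovered pole angles and magnitudes give the frequency and damping directly.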

  7. Differentiable Neural Substrates for Learned and Described Value and Risk

    PubMed Central

    FitzGerald, Thomas H.B.; Seymour, Ben; Bach, Dominik R.; Dolan, Raymond J.

    2010-01-01

    Summary Studies of human decision making emerge from two dominant traditions: learning theorists [1–3] study choices in which options are evaluated on the basis of experience, whereas behavioral economists and financial decision theorists study choices in which the key decision variables are explicitly stated. Growing behavioral evidence suggests that valuation based on these different classes of information involves separable mechanisms [4–8], but the relevant neuronal substrates are unknown. This is important for understanding the all-too-common situation in which choices must be made between alternatives that involve one or another kind of information. We studied behavior and brain activity while subjects made decisions between risky financial options, in which the associated utilities were either learned or explicitly described. We show a characteristic effect in subjects' behavior when comparing information acquired from experience with that acquired from description, suggesting that these kinds of information are treated differently. This behavioral effect was reflected neurally, and we show differential sensitivity to learned and described value and risk in brain regions commonly associated with reward processing. Our data indicate that, during decision making under risk, both behavior and the neural encoding of key decision variables are strongly influenced by the manner in which value information is presented. PMID:20888231

  8. Colour in flux: describing and printing colour in art

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna

    2008-01-01

    This presentation will describe artists, practitioners and scientists who, working with wavelength, paint and other materials, were interested in developing a deeper psychological, emotional and practical understanding of the human visual system. Drawing on a selection of prints at the Prints and Drawings Department at Tate London, the presentation will refer to artists who were motivated by issues relating to how colour pigment was mixed and printed; to interrogating and explaining colour perception and colour science; and, in art, to how artists have used colour to challenge the viewer and how a viewer might describe their experience of colour. The title Colour in Flux refers not only to the perceptual effect of the juxtaposition of one colour pigment with another, but also to the changes and challenges for the print industry. In the light of screenprinted examples from the 60s and 70s, the presentation will discuss 21st century ideas on colour and how these notions have informed the Centre for Fine Print Research's (CFPR) practical research in colour printing. The latter part of this presentation will discuss the implications of the need to change methods of mixing inks: moving away from existing colour spaces and non-intuitive colour mixing towards bespoke ink sets and colour mixing approaches and methods that are not reliant on RGB or CMYK.

  9. In their own words: describing Canadian physician leadership.

    PubMed

    Snell, Anita J; Dickson, Graham; Wirtzfeld, Debrah; Van Aerde, John

    2016-07-01

    Purpose This is the first study to compile statistical data to describe the functions and responsibilities of physicians in formal and informal leadership roles in the Canadian health system. This mixed-methods research study offers baseline data relative to this purpose, and also describes physician leaders' views on fundamental aspects of their leadership responsibility. Design/methodology/approach A survey with both quantitative and qualitative fields yielded 689 valid responses from physician leaders. Data from the survey were utilized in the development of a semi-structured interview guide; 15 physician leaders were interviewed. Findings A profile of Canadian physician leadership has been compiled, including demographics; an outline of roles, responsibilities, time commitments and related compensation; and personal factors that support, engage and deter physicians when considering taking on leadership roles. The role of health-care organizations in encouraging and supporting physician leadership is explicated. Practical implications The baseline data on Canadian physician leaders create the opportunity to determine potential steps for improving the state of physician leadership in Canada; and health-care organizations are provided with a wealth of information on how to encourage and support physician leaders. Using the data as a benchmark, comparisons can also be made with physician leadership as practiced in other nations. Originality/value There are no other research studies available that provide the depth and breadth of detail on Canadian physician leadership, and the embedded recommendations to health-care organizations are informed by this in-depth knowledge. PMID:27397749

  10. CPT: an open system that describes all that you do.

    PubMed

    Thorwarth, William T

    2008-04-01

    The American Medical Association, with the cooperation of multiple major medical specialty societies, including the ACR, responded in 1966 to the need for a complete coding system for describing medical procedures and services with the first publication of Current Procedural Terminology (CPT). This system, now CPT IV, forms the basis of reporting of virtually all inpatient and outpatient services performed by physicians and nonphysician health care providers as well as facilities. This coding system and its maintenance process have evolved in complexity and sophistication, particularly in the past decade, such that it is now integral to all facets of health care, including tracking new and investigational procedures and reporting and monitoring performance measures (read "pay for performance"), in addition to its long-standing use for reporting for reimbursement. To paraphrase a recent automobile commercial, "This is not your father's CPT." The author describes the development of CPT as it exists today, examining the forces that molded its current form, the input opportunities available to medical specialty societies and others, the ever increasing transparency of the CPT maintenance process, and the availability of resources allowing all to stay current. Understanding this system, critical to the practice of all of medicine, including radiology, will aid all health care providers in maintaining the quality, efficiency, and accuracy of their practices' business operations as well as assist them in a world of increasingly complex reporting requirements. PMID:18359442

  11. Towards an accurate dissociative potential for water

    NASA Astrophysics Data System (ADS)

    Akin-Ojo, Omololu

    2014-03-01

    Most models of water describe the molecule as rigid, i.e., with fixed bond angles and bond lengths, or as flexible in which the bond angles and bond lengths vary but the chemical bonds cannot be broken. In this work we present our progress in the development of a water model which allows for the breaking and formation of chemical bonds. The force field was obtained by fitting ab initio (not DFT) energies, forces, and molecular properties. The ability of the model to predict properties of water at ambient and extreme conditions will be presented. We will also report on the modeling of small clusters of water using the dissociative force field.

  12. A simple, sensitive, and accurate alcohol electrode

    SciTech Connect

    Verduyn, C.; Scheffers, W.A.; Van Dijken, J.P.

    1983-04-01

    The construction and performance of an enzyme electrode are described which specifically detects lower primary aliphatic alcohols in aqueous solutions. The electrode consists of a commercial Clark-type oxygen electrode on which alcohol oxidase (E.C. 1.1.3.13) and catalase were immobilized. The decrease in electrode current is linearly proportional to ethanol concentrations between 1 and 25 ppm. The response of the electrode remains constant during 400 assays over a period of two weeks. The response time is between 1 and 2 min. Assembly of the electrode takes less than 1 h.
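
    A linear response like the one reported (current decrease proportional to ethanol concentration between 1 and 25 ppm) lends itself to a simple least-squares calibration line whose inverse reads off unknown samples. The data points below are invented for illustration; only the linearity is taken from the abstract:

```python
import numpy as np

# Hypothetical calibration data: ethanol standards (ppm) vs. measured
# decrease in electrode current (nA). Values are made up for illustration.
ppm = np.array([1.0, 5.0, 10.0, 15.0, 20.0, 25.0])
delta_i = np.array([0.8, 4.1, 8.0, 12.2, 15.9, 20.1])

# first-degree polynomial fit: delta_i ~ slope * ppm + intercept
slope, intercept = np.polyfit(ppm, delta_i, 1)

def ethanol_ppm(current_drop):
    """Invert the calibration line to estimate an unknown concentration."""
    return (current_drop - intercept) / slope
```

    In routine use, the calibration would be re-run whenever the electrode's response drifts; the abstract suggests it stays stable for roughly 400 assays over two weeks.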

  13. Accurate perception of negative emotions predicts functional capacity in schizophrenia.

    PubMed

    Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J

    2014-04-30

    Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions was not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration. PMID:24524947
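
    The hierarchical-regression logic used here (the incremental variance in functional capacity explained by negative-emotion perception after neurocognition and symptoms are entered first) can be sketched on simulated data; the generating coefficients below are arbitrary and not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
cognition = rng.normal(size=n)
symptoms = rng.normal(size=n)
neg_fap = rng.normal(size=n)          # negative-emotion perception accuracy
# simulated outcome with an arbitrary true contribution from neg_fap
capacity = 0.5 * cognition - 0.3 * symptoms + 0.3 * neg_fap \
    + rng.normal(scale=0.8, size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared(np.column_stack([cognition, symptoms]), capacity)
r2_full = r_squared(np.column_stack([cognition, symptoms, neg_fap]), capacity)
delta_r2 = r2_full - r2_base          # incremental variance explained
```

    The quantity `delta_r2` plays the role of the study's 7.9% figure: variance in the outcome uniquely attributable to the predictor entered last.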

  14. Exploring accurate Poisson–Boltzmann methods for biomolecular simulations

    PubMed Central

    Wang, Changhao; Wang, Jun; Cai, Qin; Li, Zhilin; Zhao, Hong-Kai; Luo, Ray

    2013-01-01

    Accurate and efficient treatment of electrostatics is a crucial step in computational analyses of biomolecular structures and dynamics. In this study, we have explored a second-order finite-difference numerical method to solve the widely used Poisson-Boltzmann equation for electrostatic analyses of realistic biomolecules. The so-called immersed interface method was first validated and found to be consistent with the classical weighted harmonic averaging method for a diversified set of test biomolecules. The numerical accuracy and convergence behaviors of the new method were next analyzed in its computation of numerical reaction field grid potentials, energies, and atomic solvation forces. Overall similar convergence behaviors were observed as those by the classical method. Interestingly, the new method was found to deliver more accurate and better-converged grid potentials than the classical method on or nearby the molecular surface, though the numerical advantage of the new method is reduced when grid potentials are extrapolated to the molecular surface. Our exploratory study indicates the need for further improving interpolation/extrapolation schemes in addition to the developments of higher-order numerical methods that have attracted most attention in the field. PMID:24443709
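
    The second-order finite-difference discretization underlying such solvers is easiest to show in one dimension for the linearized equation -phi'' + kappa^2 phi = f with zero boundary values; the immersed-interface treatment of the dielectric jump at the molecular surface is well beyond this sketch:

```python
import numpy as np

def solve_lpb_1d(f, h, kappa):
    """Solve -phi'' + kappa^2 * phi = f on a uniform interior grid with
    spacing h and phi = 0 at both ends, using the standard second-order
    three-point stencil and a dense linear solve (fine for small grids)."""
    n = len(f)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 / h**2 + kappa**2
        if i > 0:
            A[i, i - 1] = -1.0 / h**2
        if i < n - 1:
            A[i, i + 1] = -1.0 / h**2
    return np.linalg.solve(A, f)
```

    A manufactured solution such as phi = sin(pi x) on [0, 1], for which f = (pi^2 + kappa^2) sin(pi x), recovers the expected O(h^2) accuracy of the scheme.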

  15. Accurate measurements of dynamics and reproducibility in small genetic networks

    PubMed Central

    Dubuis, Julien O; Samanta, Reba; Gregor, Thomas

    2013-01-01

    Quantification of gene expression has become a central tool for understanding genetic networks. In many systems, the only viable way to measure protein levels is by immunofluorescence, which is notorious for its limited accuracy. Using the early Drosophila embryo as an example, we show that careful identification and control of experimental error allows for highly accurate gene expression measurements. We generated antibodies in different host species, allowing for simultaneous staining of four Drosophila gap genes in individual embryos. Careful error analysis of hundreds of expression profiles reveals that less than ∼20% of the observed embryo-to-embryo fluctuations stem from experimental error. These measurements make it possible to extract not only very accurate mean gene expression profiles but also their naturally occurring fluctuations of biological origin and corresponding cross-correlations. We use this analysis to extract gap gene profile dynamics with ∼1 min accuracy. The combination of these new measurements and analysis techniques reveals a twofold increase in profile reproducibility owing to a collective network dynamics that relays positional accuracy from the maternal gradients to the pair-rule genes. PMID:23340845
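
    The error decomposition described (under ~20% of embryo-to-embryo fluctuations stemming from experimental error) rests on a standard variance split: repeated stainings of each embryo estimate the measurement variance, and subtracting it from the total variance leaves the biological component. The numbers below are simulated, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated example: 50 embryos, each stained/measured 3 times.
true_levels = rng.normal(1.0, 0.15, size=50)                  # biological variation
measured = true_levels[:, None] + rng.normal(0.0, 0.05, size=(50, 3))  # + noise

total_var = measured.mean(axis=1).var(ddof=1)        # variance of per-embryo means
exp_var = measured.var(axis=1, ddof=1).mean() / 3    # error variance of a 3-stain mean
bio_var = total_var - exp_var                        # biological remainder
error_fraction = exp_var / total_var                 # cf. the <20% the paper reports
```

    With tighter replicate agreement, `error_fraction` shrinks and the per-embryo means approach the true biological fluctuations, which is what makes extracting cross-correlations of biological origin feasible.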

  16. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    PubMed

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed. PMID:26529434
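
    The basis-set extrapolation step in composite schemes of this kind is commonly the two-point inverse-cube formula for correlation energies, E_CBS = (X^3 E_X - Y^3 E_Y) / (X^3 - Y^3) for cardinal numbers X < Y. The formula is standard; the energies below are placeholders, not values from the paper:

```python
def cbs_extrapolate(e_x, e_y, x, y):
    """Two-point complete-basis-set (CBS) extrapolation of correlation
    energies computed in basis sets with cardinal numbers x and y."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# example: triple- and quadruple-zeta correlation energies (hartree, made up)
e_cbs = cbs_extrapolate(-0.2750, -0.2810, 3, 4)
```

    Because correlation energy converges from above, the extrapolated value lies below the larger-basis result, and the composite scheme then layers core-correlation and higher-excitation corrections on top.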

  17. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrodinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  18. Describing the impact of health research: a Research Impact Framework

    PubMed Central

    Kuruvilla, Shyama; Mays, Nicholas; Pleasant, Andrew; Walt, Gill

    2006-01-01

    Background Researchers are increasingly required to describe the impact of their work, e.g. in grant proposals, project reports, press releases and research assessment exercises. Specialised impact assessment studies can be difficult to replicate and may require resources and skills not available to individual researchers. Researchers are often hard-pressed to identify and describe research impacts and ad hoc accounts do not facilitate comparison across time or projects. Methods The Research Impact Framework was developed by identifying potential areas of health research impact from the research impact assessment literature and based on research assessment criteria, for example, as set out by the UK Research Assessment Exercise panels. A prototype of the framework was used to guide an analysis of the impact of selected research projects at the London School of Hygiene and Tropical Medicine. Additional areas of impact were identified in the process and researchers also provided feedback on which descriptive categories they thought were useful and valid vis-à-vis the nature and impact of their work. Results We identified four broad areas of impact: I. Research-related impacts; II. Policy impacts; III. Service impacts: health and intersectoral and IV. Societal impacts. Within each of these areas, further descriptive categories were identified. For example, the nature of research impact on policy can be described using the following categorisation, put forward by Weiss: Instrumental use where research findings drive policy-making; Mobilisation of support where research provides support for policy proposals; Conceptual use where research influences the concepts and language of policy deliberations and Redefining/wider influence where research leads to rethinking and changing established practices and beliefs. Conclusion Researchers, while initially sceptical, found that the Research Impact Framework provided prompts and descriptive categories that helped them

  19. Describing Sequence-Ensemble Relationships for Intrinsically Disordered Proteins

    PubMed Central

    Mao, Albert H.; Lyle, Nicholas; Pappu, Rohit V.

    2014-01-01

    Synopsis Intrinsically disordered proteins participate in important protein-protein and protein-nucleic acid interactions and control cellular phenotypes through their prominence as dynamic organizers of transcriptional, post-transcriptional, and signaling networks. These proteins challenge the tenets of the structure-function paradigm and their functional mechanisms remain a mystery given that they fail to fold autonomously into specific structures. Solving this mystery requires a first principles understanding of the quantitative relationships between information encoded in the sequences of disordered proteins and the ensemble of conformations they sample. Advances in quantifying sequence-ensemble relationships have been facilitated through a four-way synergy between bioinformatics, biophysical experiments, computer simulations, and polymer physics theories. Here, we review these advances and the resultant insights that allow us to develop a concise quantitative framework for describing sequence-ensemble relationships of intrinsically disordered proteins. PMID:23240611

  20. Concepts and methods for describing critical phenomena in fluids

    NASA Technical Reports Server (NTRS)

    Sengers, J. V.; Sengers, J. M. H. L.

    1977-01-01

    The predictions of theoretical models for a critical-point phase transition in fluids, namely the classical equation with third-degree critical isotherm, that with fifth-degree critical isotherm, and the lattice gas, are reviewed. The renormalization group theory of critical phenomena and the hypothesis of universality of critical behavior supported by this theory are discussed as well as the nature of gravity effects and how they affect critical-region experimentation in fluids. The behavior of the thermodynamic properties and the correlation function is formulated in terms of scaling laws. The predictions of these scaling laws and of the hypothesis of universality of critical behavior are compared with experimental data for one-component fluids and it is indicated how the methods can be extended to describe critical phenomena in fluid mixtures.

  1. Effects of display format on pilot describing function and remnant

    NASA Technical Reports Server (NTRS)

    Jex, H. R.; Allen, R. W.; Magdaleno, R. E.

    1972-01-01

    As part of a program to develop a comprehensive theory of manual control displays, six display formats were used by three instrument-rated pilots to regulate against random disturbances with a controlled element under both foveal and 10 deg parafoveal viewing conditions. The six display formats were: CRT line, CRT thermometer bar, 14-bar quantized on a CRT, a rotary dial and pointer, and two variations of a moving scale tape-drive. All were scaled to equivalent movement and apparent brightness. Measures included overall performance, describing functions, error remnant power spectra, critical instability scores, and subjective display ratings. The results show that the main effect of display format is on the loop closure properties. Less desirable displays induce lower bandwidth closures with consequent effects on the closed-loop remnant and performance.

  2. Intervention Taxonomy (ITAX): Describing Essential Features of Interventions (HMC)

    PubMed Central

    Czaja, Sara J.; McKay, James R.; Ory, Marcia G.; Belle, Steven H.

    2010-01-01

    Objectives To identify key features of interventions that need to be considered in the design, execution, and reporting of interventions. Methods Based on prior work on decomposing psychosocial and clinical interventions, current guidelines for describing interventions, and a review of a broad range of intervention studies, we developed a comprehensive intervention taxonomy. Results Specific recommendations, rationales, and definitions of intervention delivery and content characteristics including mode, materials, location, schedule, scripting, and sensitivity to participant characteristics, interventionist characteristics, adaptability, implementation, content strategies, and mechanisms of action are provided. Conclusions Applying this taxonomy will advance intervention science by (a) improving intervention designs, (b) enhancing replication and follow-up of intervention studies, (c) facilitating systematic exploration of the efficacy and effectiveness of intervention components through cross-study analysis, and (d) informing decisions about the feasibility of implementation in broader community settings. PMID:20604704

  3. Using Persistent Homology to Describe Rayleigh-Bénard Convection

    NASA Astrophysics Data System (ADS)

    Tithof, Jeffrey; Suri, Balachandra; Xu, Mu; Kramar, Miroslav; Levanger, Rachel; Mischaikow, Konstantin; Paul, Mark; Schatz, Michael

    2015-11-01

    Complex spatial patterns that exhibit aperiodic dynamics commonly arise in a wide variety of systems in nature and technology. Describing, understanding, and predicting the behavior of such patterns is an open problem. We explore the use of persistent homology (a branch of algebraic topology) to characterize spatiotemporal dynamics in a canonical fluid mechanics problem, Rayleigh-Bénard convection. Persistent homology provides a powerful mathematical formalism in which the topological characteristics of a pattern (e.g. the midplane temperature field) are encoded in a so-called persistence diagram. By applying a metric to measure the pairwise distances across multiple persistence diagrams, we can quantify the similarities between different states in a time series. Our results show that persistent homology yields new physical insights into the complex dynamics of large spatially extended systems that are driven far from equilibrium. This work is supported under NSF grant DMS-1125302.
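
    As an illustration of the pipeline this abstract describes, the 0-dimensional sublevel-set persistence of a 1D scalar field (a stand-in for, say, a temperature profile) can be computed with a short union-find sweep. The function name and toy signal below are hypothetical; real analyses of 2D convection fields use dedicated topology libraries and compare diagrams with metrics such as the bottleneck or Wasserstein distance.

```python
def sublevel_persistence(values):
    """0-dimensional sublevel-set persistence diagram of a 1D signal.

    Sweep the sample points in order of increasing value; each local
    minimum births a connected component, and when two components merge
    the younger one (higher birth value) dies (the "elder rule").
    Returns (birth, death) pairs with positive persistence.
    """
    n = len(values)
    parent = list(range(n))
    added = [False] * n
    birth = [0.0] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda k: values[k]):
        added[i] = True
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and added[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:
                    ri, rj = rj, ri          # ri now holds the elder root
                if birth[rj] < values[i]:    # drop zero-persistence pairs
                    pairs.append((birth[rj], values[i]))
                parent[rj] = ri
    pairs.append((min(values), float("inf")))  # the surviving component
    return sorted(pairs)

# toy "temperature profile": one shallow dip (persistence 1.0) plus the
# global minimum, which never dies
diagram = sublevel_persistence([0.0, 2.0, 1.0, 3.0])
```

    Each finite point of the diagram records a dip in the signal and how long it survives as the sublevel threshold rises, which is the 1D analogue of encoding convection-roll topology in a persistence diagram.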

  4. A new way of describing the Dirac bands in graphene

    NASA Astrophysics Data System (ADS)

    Kissinger, Gregory; Satpathy, Sashi

    We develop a new way of describing the electronic structure of graphene, by treating the honeycomb lattice as a network of one-dimensional quantum wires. The electrons travel as free particles along these quantum wires and interfere at the three-way junctions formed by the carbon atoms. The model generates the linearly dispersive Dirac cone band structure as well as the chiral nature of the pseudo-spin sublattice wave functions. When vacancies are incorporated, we find that it also reproduces the well-known zero-mode states. This simple approach might have advantages over other methods for some applications, such as in analyzing electronic transport through graphene nanoribbons. In addition, this finding suggests new ways of constructing Dirac band materials in the laboratory by nano-patterning for investigating Dirac fermions.

  5. Dynamics of rotating fluids described by scalar potentials

    NASA Astrophysics Data System (ADS)

    Seyed-Mahmoud, Behnam; Rochester, Michael

    2006-06-01

    The oscillatory dynamics of a rotating, self-gravitating, stratified, compressible, inviscid fluid body is simplified by an exact description in terms of three scalar fields which are constructed from the dilatation, and the perturbations in pressure and gravitational potential [Seyed-Mahmoud, B., 1994. Wobble/nutation of a rotating ellipsoidal Earth with liquid core: implementation of a new set of equations describing dynamics of rotating fluids M.Sc. Thesis, Memorial University of Newfoundland]. We test the method by applying it to compressible, but neutrally-stratified, models of the Earth's liquid core, including a solid inner core, and compute the frequencies of some of the inertial modes. We conclude the method should be further exploited for astrophysical and geophysical normal mode computations.

  6. A broadly applicable function for describing luminescence dose response

    SciTech Connect

    Burbidge, C. I.

    2015-07-28

    The basic form of luminescence dose response is investigated, with the aim of developing a single function to account for the appearance of linear, superlinear, sublinear, and supralinear behaviors and variations in saturation signal level and rate. A function is assembled based on the assumption of first order behavior in different major factors contributing to measured luminescence-dosimetric signals. Different versions of the function are developed for standardized and non-dose-normalized responses. Data generated using a two-trap, two-recombination-center model and experimental data for natural quartz are analyzed to compare results obtained using different signals, measurement protocols, pretreatment conditions, and radiation qualities. The function well describes a range of dose-dependent behavior, including sublinear, superlinear, supralinear, and non-monotonic responses and relative response to α and β radiation, based on change in relative recombination and trapping probability affecting signals sourced from a single electron trap.
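
    The abstract does not reproduce the function itself. A familiar special case of first-order (single-trap) behavior is the saturating exponential L(D) = L_sat(1 - exp(-D/D0)), which is sublinear and levels off at L_sat. The sketch below, with hypothetical parameter values, generates noise-free synthetic points from that form and recovers the saturation level and rate with a crude grid search standing in for a nonlinear least-squares fitter.

```python
import math

def saturating_exp(dose, l_sat, d0):
    """First-order trap-filling dose response: sublinear, saturating."""
    return l_sat * (1.0 - math.exp(-dose / d0))

# synthetic "measurements" from known parameters (noise-free, for clarity)
true_l_sat, true_d0 = 50.0, 120.0
doses = [0, 25, 50, 100, 200, 400, 800]
signals = [saturating_exp(d, true_l_sat, true_d0) for d in doses]

# crude grid-search fit (a stand-in for a real nonlinear least-squares solver)
best = None
for l_sat in range(30, 71):
    for d0 in range(60, 181):
        sse = sum((saturating_exp(d, l_sat, d0) - s) ** 2
                  for d, s in zip(doses, signals))
        if best is None or sse < best[0]:
            best = (sse, l_sat, d0)

_, fit_l_sat, fit_d0 = best
```

    With noise-free data the grid search lands exactly on the generating parameters; with real measurements one would minimize the same sum of squares with a proper optimizer and report uncertainties.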

  7. A modeling approach to describe ZVI-based anaerobic system.

    PubMed

    Xiao, Xiao; Sheng, Guo-Ping; Mu, Yang; Yu, Han-Qing

    2013-10-15

    Zero-valent iron (ZVI) is increasingly being added into anaerobic reactors to enhance the biological conversion of various less biodegradable pollutants (LBPs). Our study aimed to establish a new structured model based on the Anaerobic Digestion Model No. 1 (ADM1) to simulate such a ZVI-based anaerobic reactor. Three new processes, i.e., electron release from ZVI corrosion, H2 formation from ZVI corrosion, and transformation of LBPs, were integrated into ADM1. The established model was calibrated and tested using the experimental data from one published study, and validated using the data from another work. Good agreement between the predicted and measured results indicates that the proposed model is appropriate to describe the performance of the ZVI-based anaerobic system. Our model could provide more precise strategies for the design, development, and application of anaerobic systems especially for treating various LBPs-containing wastewaters. PMID:23932771

  8. Method to describe stochastic dynamics using an optimal coordinate.

    PubMed

    Krivov, Sergei V

    2013-12-01

    A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function. PMID:24483410
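
    The defining property in this abstract (a stochastic matrix translates every component of an additive eigenvector by the same constant distance) can be checked directly for the simplest case it alludes to, a biased nearest-neighbour random walk on the integers, where the plain coordinate v(x) = x works. The step probability below is an illustrative value, not taken from the paper.

```python
# Biased nearest-neighbour random walk on the integers: step +1 with
# probability p, otherwise -1.  For the coordinate v(x) = x, applying the
# transition operator P translates every component by the same constant:
# (P v)(x) = p*(x+1) + (1-p)*(x-1) = x + (2p - 1).
p = 0.7                      # illustrative step probability

def v(x):
    return x                 # candidate additive eigenvector

def Pv(x):
    # expected value of v over the next state
    return p * v(x + 1) + (1 - p) * v(x - 1)

# the shift Pv(x) - v(x) is the same constant at every state
shifts = [Pv(x) - v(x) for x in range(-50, 51)]
```

    The constant shift 2p - 1 is the mean displacement per step, which is what lets such a coordinate "reconstruct time" from an ensemble of trajectories.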

  9. A broadly applicable function for describing luminescence dose response

    NASA Astrophysics Data System (ADS)

    Burbidge, C. I.

    2015-07-01

    The basic form of luminescence dose response is investigated, with the aim of developing a single function to account for the appearance of linear, superlinear, sublinear, and supralinear behaviors and variations in saturation signal level and rate. A function is assembled based on the assumption of first order behavior in different major factors contributing to measured luminescence-dosimetric signals. Different versions of the function are developed for standardized and non-dose-normalized responses. Data generated using a two trap two recombination center model and experimental data for natural quartz are analyzed to compare results obtained using different signals, measurement protocols, pretreatment conditions, and radiation qualities. The function well describes a range of dose dependent behavior, including sublinear, superlinear, supralinear, and non-monotonic responses and relative response to α and β radiation, based on change in relative recombination and trapping probability affecting signals sourced from a single electron trap.

  10. Diffraction described by virtual particle momentum exchange: the "diffraction force"

    NASA Astrophysics Data System (ADS)

    Mobley, Michael J.

    2011-09-01

    Particle diffraction can be described by an ensemble of particle paths determined through a Fourier analysis of a scattering lattice where the momentum exchange probabilities are defined at the location of scattering, not the point of detection. This description is compatible with optical wave theories and quantum particle models and provides deeper insights into the nature of quantum uncertainty. In this paper the Rayleigh-Sommerfeld and Fresnel-Kirchhoff theories are analyzed for diffraction by a narrow slit and a straight edge to demonstrate the dependence of particle scattering on the distance of virtual particle exchange. The quantized momentum exchange is defined by the Heisenberg uncertainty principle and is consistent with the formalism of QED. This exchange of momentum manifests the "diffraction force" that appears to be a universal construct as it applies to neutral and charged particles. This analysis indicates virtual particles might form an exchange channel that bridges the space of momentum exchange.
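
    The near-field Rayleigh-Sommerfeld and Fresnel-Kirchhoff integrals analyzed in the paper are not reproduced here, but their far-field (Fraunhofer) limit for a narrow slit is easy to evaluate numerically by summing Huygens wavelets across the aperture, giving the familiar sinc-squared lobes that the momentum-exchange picture reinterprets. The wavelength and slit width are arbitrary illustrative values.

```python
import cmath, math

def slit_intensity(theta, slit_width, wavelength):
    """Fraunhofer single-slit intensity (normalized to 1 at theta = 0),
    computed by directly summing Huygens wavelets across the aperture."""
    k = 2 * math.pi / wavelength
    n = 2000                                   # wavelet sources in the slit
    xs = [(i + 0.5) / n * slit_width - slit_width / 2 for i in range(n)]
    amp = sum(cmath.exp(1j * k * x * math.sin(theta)) for x in xs) / n
    return abs(amp) ** 2

wavelength, a = 500e-9, 5e-6                   # 500 nm light, 5 um slit
# the analytic far-field pattern has its first minimum at sin(theta) = wavelength/a
theta_min = math.asin(wavelength / a)
```

    At theta_min the wavelet phases span exactly one full cycle across the slit, so the amplitudes cancel; the discrete momentum transfers in the paper's analysis correspond to these interference minima and maxima.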

  11. Describing linguistic information in a behavioural framework: Possible or not?

    SciTech Connect

    De Cooman, G.

    1996-12-31

    The paper discusses important aspects of the representation of linguistic information, using imprecise probabilities with a behavioural interpretation. We define linguistic information as the information conveyed by statements in natural language, but restrict ourselves to simple affirmative statements of the type 'subject-is-predicate'. Taking the behavioural stance, as it is described in detail, we investigate whether it is possible to give a mathematical model for this kind of information. In particular, we evaluate Zadeh's suggestion that we should use possibility measures to this end. We come to the conclusion that, generally speaking, possibility measures are possible models for linguistic information, but that more work should be done in order to evaluate the suggestion that they may be the only ones.

  12. A framework for describing health care delivery organizations and systems.

    PubMed

    Piña, Ileana L; Cohen, Perry D; Larson, David B; Marion, Lucy N; Sills, Marion R; Solberg, Leif I; Zerzan, Judy

    2015-04-01

    Describing, evaluating, and conducting research on the questions raised by comparative effectiveness research and characterizing care delivery organizations of all kinds, from independent individual provider units to large integrated health systems, has become imperative. Recognizing this challenge, the Delivery Systems Committee, a subgroup of the Agency for Healthcare Research and Quality's Effective Health Care Stakeholders Group, which represents a wide diversity of perspectives on health care, created a draft framework with domains and elements that may be useful in characterizing various sizes and types of care delivery organizations and may contribute to key outcomes of interest. The framework may serve as the door to further studies in areas in which clear definitions and descriptions are lacking. PMID:24922130

  13. Complex coastal oceanographic fields can be described by universal multifractals

    NASA Astrophysics Data System (ADS)

    Skákala, Jozef; Smyth, Timothy J.

    2015-09-01

    Characterization of chlorophyll and sea surface temperature (SST) structural heterogeneity using their scaling properties can provide a useful tool to estimate the relative importance of key physical and biological drivers. Seasonal, annual, and also instantaneous spatial distributions of chlorophyll and SST, determined from satellite measurements, in seven different coastal and shelf-sea regions around the UK have been studied. It is shown that multifractals provide a very good approximation to the scaling properties of the data: in fact, the multifractal scaling function is well approximated by universal multifractal theory. The consequence is that all of the statistical information about data structure can be reduced to being described by two parameters. It is further shown that bathymetry in the studied regions also scales as a multifractal. The SST and chlorophyll multifractal structures are then explained as an effect of bathymetry and turbulence.
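
    The "two parameters" of universal multifractal theory are the Lévy index alpha and the codimension of the mean C1; together they fix the entire moment-scaling function K(q) = C1 (q^alpha - q) / (alpha - 1) for alpha != 1. A minimal sketch follows; the parameter values are invented for illustration, not fitted to the UK data.

```python
def K(q, alpha, C1):
    """Universal multifractal moment-scaling function (alpha != 1):
    K(q) = C1 * (q**alpha - q) / (alpha - 1)."""
    return C1 * (q ** alpha - q) / (alpha - 1)

# illustrative parameter values (not fitted to any of the studied regions)
alpha, C1 = 1.8, 0.1
curve = {q: K(q, alpha, C1) for q in (0.5, 1.0, 1.5, 2.0, 3.0)}
```

    K(1) = 0 expresses conservation of the mean field, K(q) < 0 for q < 1 and K(q) > 0 for q > 1, and the convexity of K encodes the intermittency that the two fitted parameters summarize.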

  14. Angular momentum and torque described with the complex octonion

    SciTech Connect

    Weng, Zi-Hua

    2014-08-15

    The paper adopts the complex octonion to formulate the angular momentum, torque, and force in the electromagnetic and gravitational fields. The octonionic representation enables a single definition of angular momentum (or torque, or force) to combine several physical quantities that were previously considered independent of each other. J. C. Maxwell used two methods simultaneously, vector terminology and quaternion analysis, to depict the electromagnetic theory. This motivates the paper to introduce quaternion spaces into the field theory to describe the physical features of the electromagnetic and gravitational fields. The spaces of the electromagnetic field and of the gravitational field can each be chosen as a quaternion space, with the coordinate components of the quaternion space allowed to be complex numbers. The quaternion space of the electromagnetic field is independent of that of the gravitational field, and these two quaternion spaces may compose one octonion space. Conversely, one octonion space can be separated into two subspaces, the quaternion space and the S-quaternion space. In the quaternion space, one can infer the field potential, field strength, field source, angular momentum, torque, and force of the gravitational field. In the S-quaternion space, one can deduce the field potential, field strength, field source, current continuity equation, and electric (or magnetic) dipole moment of the electromagnetic field. The results reveal that the quaternion space is appropriate for describing gravitational features, including the torque, force, and mass continuity equation, while the S-quaternion space is proper for depicting electromagnetic features, including the dipole moment and current continuity equation. When the field strength is weak enough, the force and the continuity equations reduce to their counterparts in the classical field theory.

  15. Angular momentum and torque described with the complex octonion

    NASA Astrophysics Data System (ADS)

    Weng, Zi-Hua

    2014-08-01

    The paper adopts the complex octonion to formulate the angular momentum, torque, and force in the electromagnetic and gravitational fields. The octonionic representation enables a single definition of angular momentum (or torque, or force) to combine several physical quantities that were previously considered independent of each other. J. C. Maxwell used two methods simultaneously, vector terminology and quaternion analysis, to depict the electromagnetic theory. This motivates the paper to introduce quaternion spaces into the field theory to describe the physical features of the electromagnetic and gravitational fields. The spaces of the electromagnetic field and of the gravitational field can each be chosen as a quaternion space, with the coordinate components of the quaternion space allowed to be complex numbers. The quaternion space of the electromagnetic field is independent of that of the gravitational field, and these two quaternion spaces may compose one octonion space. Conversely, one octonion space can be separated into two subspaces, the quaternion space and the S-quaternion space. In the quaternion space, one can infer the field potential, field strength, field source, angular momentum, torque, and force of the gravitational field. In the S-quaternion space, one can deduce the field potential, field strength, field source, current continuity equation, and electric (or magnetic) dipole moment of the electromagnetic field. The results reveal that the quaternion space is appropriate for describing gravitational features, including the torque, force, and mass continuity equation, while the S-quaternion space is proper for depicting electromagnetic features, including the dipole moment and current continuity equation. When the field strength is weak enough, the force and the continuity equations reduce to their counterparts in the classical field theory.

  16. Chapter 35: Describing Data and Data Collections in the VO

    NASA Astrophysics Data System (ADS)

    Kent, B. R.; Hanisch, R. J.; Williams, R. D.

    The list of numbers: 19.22, 17.23, 18.11, 16.98, and 15.11, is of little intrinsic interest without information about the context in which they appear. For instance, are these daily closing stock prices for your favorite investment, or are they hourly photometric measurements of an increasingly bright quasar? The information needed to define this context is called metadata. Metadata are data about data. Astronomers are familiar with metadata through the headers of FITS files and the names and units associated with columns in a table or database. In the VO, metadata describe the contents of tables, images, and spectra, as well as aggregate collections of data (archives, surveys) and computational services. Moreover, VO metadata are constructed according to rules that avoid ambiguity and make it clear whether, in the example above, the stock prices are in dollars or euros, or the photometry is Johnson V or Sloan g. Organization of data is important in any scientific discipline. Equally crucial are the descriptions of that data: the organization publishing the data, its creator or the person making it available, what instruments were used, units assigned to measurement, calibration status, and data quality assessment. The Virtual Observatory metadata scheme not only applies to datasets, but to resources as well, including data archive facilities, searchable web forms, and online analysis and display tools. Since the scientific output flowing from large datasets depends greatly on how well the data are described, it is important for users to understand the basics of the metadata scheme in order to locate the data that they want and use it correctly. Metadata are the key to data discovery and data and service interoperability in the Virtual Observatory.

  17. Assessing the State of Substitution Models Describing Noncoding RNA Evolution

    PubMed Central

    Allen, James E.; Whelan, Simon

    2014-01-01

    Phylogenetic inference is widely used to investigate the relationships between homologous sequences. RNA molecules have played a key role in these studies because they are present throughout life and tend to evolve slowly. Phylogenetic inference has been shown to be dependent on the substitution model used. A wide range of models have been developed to describe RNA evolution, either with 16 states describing all possible canonical base pairs or with 7 states where the 10 mismatched nucleotides are reduced to a single state. Formal model selection has become a standard practice for choosing an inferential model and works well for comparing models of a specific type, such as comparisons within nucleotide models or within amino acid models. Model selection cannot function across different sized state spaces because the likelihoods are conditioned on different data. Here, we introduce statistical state-space projection methods that allow the direct comparison of likelihoods between nucleotide models and 7-state and 16-state RNA models. To demonstrate the general applicability of our new methods, we extract 287 RNA families from genomic alignments and perform model selection. We find that in 281/287 families, RNA models are selected in preference to nucleotide models, with simple 7-state RNA models selected for more conserved families with shorter stems and more complex 16-state RNA models selected for more divergent families with longer stems. Other factors, such as the function of the RNA molecule or the GC-content, have limited impact on model selection. Our models and model selection methods are freely available in the open-source PHASE 3.0 software. PMID:24391153

  18. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-07-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of the contour boundaries and the interior points on the contour surfaces are as accurate as the interpolation results obtained by hexahedral elements, so there are no discrepancies between the analysis and visualization results.

  19. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-05-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of the contour boundaries and the interior points on the contour surfaces are as accurate as the interpolation results obtained by hexahedral elements, so there are no discrepancies between the analysis and visualization results.

  20. New Claus catalyst tests accurately reflect process conditions

    SciTech Connect

    Maglio, A.; Schubert, P.F.

    1988-09-12

    Methods for testing Claus catalysts are developed that more accurately represent the actual operating conditions in commercial sulfur recovery units. For measuring catalyst activity, an aging method has been developed that results in more meaningful activity data after the catalyst has been aged, because all catalysts undergo rapid initial deactivation in commercial units. An activity test method has been developed where catalysts can be compared at less than equilibrium conversion. A test has also been developed to characterize abrasion loss of Claus catalysts, in contrast to the traditional method of determining physical properties by measuring crush strengths. Test results from a wide range of materials correlated well with actual pneumatic conveyance attrition. Substantial differences in Claus catalyst properties were observed as a result of using these tests.

  1. Stereotypes of age differences in personality traits: universal and accurate?

    PubMed

    Chan, Wayne; McCrae, Robert R; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E; De Bolle, Marleen; Costa, Paul T; Sutin, Angelina R; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Graf, Sylvie; Yik, Michelle; Brunner-Sciarra, Marina; de Figueora, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-Kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R; Crawford, Jarret T; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Marušić, Iris; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A; Gheorghiu, Mirona; Smith, Peter B; Barbaranelli, Claudio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V; Pramila, V S; Terracciano, Antonio

    2012-12-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups. PMID:23088227

  2. Stereotypes of Age Differences in Personality Traits: Universal and Accurate?

    PubMed Central

    Chan, Wayne; McCrae, Robert R.; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E.; De Bolle, Marleen; Costa, Paul T.; Sutin, Angelina R.; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Kourilova, Sylvie; Yik, Michelle; Ficková, Emília; Brunner-Sciarra, Marina; de Figueora, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E.; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R.; Crawford, Jarret T.; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R.; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A.; Gheorghiu, Mirona; Smith, Peter B.; Barbaranelli, Claudio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P.; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V.; Pramila, V. S.; Terracciano, Antonio

    2012-01-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups. PMID:23088227

  3. CLOMP: Accurately Characterizing OpenMP Application Overheads

    SciTech Connect

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B

    2008-02-11

    Despite its ease of use, OpenMP has failed to gain widespread use on large-scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy-to-understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. Further, we show that CLOMP also captures limitations for OpenMP parallelization on NUMA systems.
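    The break-even reasoning behind such overhead measurements can be sketched with a toy cost model (a hypothetical back-of-the-envelope calculation, not CLOMP's actual methodology): a parallel region only pays off once the loop carries enough work to amortize the region-start overhead.

```python
def breakeven_iterations(overhead_s, iter_time_s, threads):
    """Smallest iteration count N for which a parallel region
    (cost: N*t/P + overhead) beats the serial loop (cost: N*t).
    A simple illustrative model, not CLOMP's measurement method."""
    if threads < 2:
        raise ValueError("need at least 2 threads to amortize overhead")
    # Solve N*t/P + O < N*t  for N:  N > O*P / (t*(P-1))
    n = overhead_s * threads / (iter_time_s * (threads - 1))
    return int(n) + 1

# e.g. a 5-microsecond region-start cost, 50 ns iterations, 16 threads
print(breakeven_iterations(5e-6, 50e-9, 16))  # → 107
```

    With these assumed numbers, a loop needs over a hundred iterations before threading wins at all, which illustrates why region-initiation cost can dominate for fine-grained parallel regions.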

  4. Identifying parameters to describe local land-atmosphere coupling

    NASA Astrophysics Data System (ADS)

    Ek, M. B.; Jacobs, C. M.; Santanello, J. A.; Tuinenburg, O.

    2009-12-01

    The Global Energy and Water Cycle Experiment (GEWEX) Land-Atmosphere System Study / Local Coupling (GLASS/LoCo) project seeks to understand the role of local land-atmosphere coupling in the evolution of surface fluxes and boundary layer state variables including clouds. Land-atmosphere interaction is a rapidly developing research area; after the well-known GLACE experiments and various diagnostic studies, new research has evolved in modeling and observing the degree of land-atmosphere coupling on local scales. Questions of interest are (1) how much is coupling related to local versus "remote" processes, (2) what is the nature and strength of coupling, and (3) how does this change (e.g. for different temporal and spatial scales, geographic regions, and changing climates). As such, this is an important issue on both weather and climate time scales. The GLASS/LoCo working group is investigating diagnostics to quantify land-atmosphere coupling. Coupling parameters include the roles of soil moisture and surface evaporative fraction as well as the evolving atmospheric boundary layer and boundary-layer entrainment. After suitable diagnostic parameters are identified, observational data and output from weather and climate models will be used to "map" land-atmosphere coupling with regard to questions (1)-(3) above.

  5. Beyond Rainfall Multipliers: Describing Input Uncertainty as an Autocorrelated Stochastic Process Improves Inference in Hydrology

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Albert, C.; Reichert, P.; Rieckermann, J.

    2015-12-01

    Rainfall is the main driver of hydrological systems. Unfortunately, it is highly variable in space and time and therefore difficult to observe accurately. This poses a serious challenge to correctly estimate the catchment-averaged precipitation, a key factor for hydrological models. As biased precipitation leads to biased parameter estimation and thus to biased runoff predictions, it is very important to have a realistic description of precipitation uncertainty. Rainfall multipliers (RM), which correct each observed storm with a random factor, provide a first step in this direction. Nevertheless, they often fail when the estimated input has a different temporal pattern from the true one or when a storm is not detected by the raingauge. In this study we propose a more realistic input error model, which is able to overcome these challenges and increase our certainty by better estimating model input and parameters. We formulate the average precipitation over the watershed as a stochastic input process (SIP). We suggest a transformed Gauss-Markov process, which is estimated in a Bayesian framework by using input (rainfall) and output (runoff) data. We tested the methodology in a 28.6 ha urban catchment represented by an accurate conceptual model. Specifically, we perform calibration and predictions with SIP and RM using accurate data from nearby raingauges (R1) and inaccurate data from a distant gauge (R2). Results show that using SIP, the estimated model parameters are "protected" from the corrupting impact of inaccurate rainfall. Additionally, SIP can correct input biases during calibration (Figure) and reliably quantify rainfall and runoff uncertainties during both calibration (Figure) and validation. In our real-world application with non-trivial rainfall errors, this was not the case with RM. We therefore recommend SIP in all cases where the input is the predominant source of uncertainty. Furthermore, the high-resolution rainfall intensities obtained with this
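    The latent Gauss-Markov process at the heart of such an input model can be illustrated with a minimal simulation (a sketch with made-up parameter values; the truncation at zero stands in for the paper's actual transform, which is not specified here):

```python
import math
import random

def simulate_sip(n_steps, dt=60.0, tau=1800.0, sigma=1.0, seed=1):
    """Sample a latent Ornstein-Uhlenbeck (Gauss-Markov) path and map it
    through a nonnegative transform to mimic a rainfall intensity series.
    tau is the correlation time in seconds; dt is the time step.
    All parameter values are illustrative, not taken from the study."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)                     # exact AR(1) coefficient
    noise_sd = sigma * math.sqrt(1.0 - phi * phi)  # stationary-variance noise
    x, rain = 0.0, []
    for _ in range(n_steps):
        x = phi * x + rng.gauss(0.0, noise_sd)  # autocorrelated latent state
        rain.append(max(x, 0.0))                # clip: dry spell when latent < 0
    return rain

series = simulate_sip(1000)
print(len(series), min(series) >= 0.0)
```

    Because successive values are correlated through `phi`, the simulated input error has realistic temporal structure, unlike independent storm-by-storm multipliers.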

  6. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data.

    PubMed

    Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A

    2016-05-01

    The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and in the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model which is typically used for psychometric function estimation to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical goodness-of-fit measures for overdispersion, which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available. PMID:27013261
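    Why the beta-binomial accommodates overdispersion can be seen in a few lines (an illustrative sketch, not psignifit 4's implementation): mixing the binomial success probability over a Beta(a, b) distribution inflates the variance above the plain binomial value.

```python
from math import exp, lgamma

def log_beta(a, b):
    """log of the Beta function, via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: a binomial whose success probability is itself
    Beta(a, b)-distributed, which produces overdispersion."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))

# Illustrative parameters: 40 trials, a symmetric Beta(2, 2) mixing density
n, a, b = 40, 2.0, 2.0
total = sum(betabinom_pmf(k, n, a, b) for k in range(n + 1))
mean = sum(k * betabinom_pmf(k, n, a, b) for k in range(n + 1))
var = sum((k - mean) ** 2 * betabinom_pmf(k, n, a, b) for k in range(n + 1))
p = a / (a + b)
# pmf sums to 1; variance exceeds the binomial variance n*p*(1-p)
print(round(total, 6), var > n * p * (1 - p))
```

    The extra variance (here 90 versus the binomial's 10) is exactly the slack that lets credible intervals stay calibrated when observers are non-stationary.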

  7. Accurate water maser positions from HOPS

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew J.; Purcell, Cormac R.; Longmore, Steven N.; Breen, Shari L.; Green, James A.; Harvey-Smith, Lisa; Jordan, Christopher H.; Macpherson, Christopher

    2014-08-01

    We report on high spatial resolution water maser observations, using the Australia Telescope Compact Array, towards water maser sites previously identified in the H2O southern Galactic Plane Survey (HOPS). Of the 540 masers identified in the single-dish observations of Walsh et al., we detect emission in all but 31 fields. We report on 2790 spectral features (maser spots), with brightnesses ranging from 0.06 to 576 Jy and with velocities ranging from -238.5 to +300.5 km s^-1. These spectral features are grouped into 631 maser sites. We have compared the positions of these sites to the literature to associate the sites with astrophysical objects. We identify 433 (69 per cent) with star formation, 121 (19 per cent) with evolved stars and 77 (12 per cent) as unknown. We find that maser sites associated with evolved stars tend to have more maser spots and have smaller angular sizes than those associated with star formation. We present evidence that maser sites associated with evolved stars show an increased likelihood of having a velocity range between 15 and 35 km s^-1 compared to other maser sites. Of the 31 non-detections, we conclude they were not detected due to intrinsic variability and confirm previous results showing that such variable masers tend to be weaker and have simpler spectra with fewer peaks.

  8. A simple polymeric model describes cell nuclear mechanical response

    NASA Astrophysics Data System (ADS)

    Banigan, Edward; Stephens, Andrew; Marko, John

    The cell nucleus must continually resist inter- and intracellular mechanical forces, and proper mechanical response is essential to basic cell biological functions as diverse as migration, differentiation, and gene regulation. Experiments probing nuclear mechanics reveal that the nucleus stiffens under strain, leading to two characteristic regimes of force response. This behavior depends sensitively on the intermediate filament protein lamin A, which comprises the outer layer of the nucleus, and the properties of the chromatin interior. To understand these mechanics, we study a simulation model of a polymeric shell encapsulating a semiflexible polymer. This minimalistic model qualitatively captures the typical experimental nuclear force-extension relation and observed nuclear morphologies. Using a Flory-like theory, we explain the simulation results and mathematically estimate the force-extension relation. The model and experiments suggest that chromatin organization is a dominant contributor to nuclear mechanics, while the lamina protects cell nuclei from large deformations.

  9. Describing Blazhko light curves with almost periodic functions

    NASA Astrophysics Data System (ADS)

    Benko, J. M.; Szabo, R.

    2016-05-01

    Recent results of photometric space missions such as CoRoT and Kepler showed that cycle-to-cycle variations of the Blazhko modulation are very frequent. These variations are either multiperiodic or irregular (chaotic/stochastic) in nature. We present a mathematical framework in which all of these variations can be handled. We applied the theory of band-limited almost periodic functions to the modulated RR Lyrae light curves. It yields several interesting results: e.g. the harmonics in the Fourier representation of these functions are not exact multiples of the base frequency, and the modulation function depends on the harmonics. Such phenomena are reported for observed RR Lyrae stars as well, showing that almost periodic functions are promising for the mathematical description of Blazhko RR Lyrae light curves.
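    A band-limited almost periodic function is a finite sum of sinusoids whose frequency ratios are irrational, so the signal never exactly repeats. A minimal sketch (amplitudes and frequencies are illustrative, not fitted to any star):

```python
import math

def almost_periodic(t, components=((1.0, 1.0), (0.5, math.sqrt(2)))):
    """Band-limited almost periodic signal: a finite sum of sinusoids.
    Because the second frequency (sqrt(2)) is an irrational multiple of
    the first, no shift of t reproduces the signal exactly."""
    return sum(a * math.sin(2 * math.pi * f * t) for a, f in components)

# Shifting by one period of the first component does NOT recover the
# same value, unlike a strictly periodic signal:
print(abs(almost_periodic(0.3) - almost_periodic(1.3)) > 1e-6)
```

    For a truly periodic signal the printed comparison would be false; the residual here comes entirely from the incommensurate second component, which is the defining feature the Blazhko framework exploits.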

  10. Describing Changes in Undergraduate Students' Preconceptions of Research Activities

    NASA Astrophysics Data System (ADS)

    Cartrette, David P.; Melroe-Lehrman, Bethany M.

    2012-12-01

    Research has shown that students bring naïve scientific conceptions to learning situations which are often incongruous with accepted scientific explanations. These preconceptions are frequently determined to be misconceptions; consequently, instructors spend time remedying these beliefs and bringing students' understanding of scientific concepts to acceptable levels. It is reasonable to assume that students also maintain preconceptions about the processes of authentic scientific research and its associated activities. This study describes the most commonly held preconceptions of authentic research activities among students with little or no previous research experience. Seventeen undergraduate science majors who participated in a ten-week research program discussed, at various times during the program, their preconceptions of research and how these ideas changed as a result of direct participation in authentic research activities. The preconceptions included the belief that authentic research is a solitary activity which most closely resembles the type of activity associated with laboratory courses in the undergraduate curriculum. Participants' views showed slight maturation over the research program; they came to understand that authentic research is a detail-oriented activity which is rarely successfully completed alone. These findings and their implications for the teaching and research communities are discussed in the article.

  11. INCAS: an analytical model to describe displacement cascades

    NASA Astrophysics Data System (ADS)

    Jumel, Stéphanie; Van Duysen, Jean-Claude

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Électricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it enables determination of the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  12. Folding superfunnel to describe cooperative folding of interacting proteins.

    PubMed

    Smeller, László

    2016-07-01

    This paper proposes a generalization of the well-known folding funnel concept of proteins. In the funnel model the polypeptide chain is treated as an individual object not interacting with other proteins. Since biological systems are highly crowded, protein-protein interaction is a fundamental feature during the life cycle of proteins. The folding superfunnel proposed here describes the folding process of interacting proteins in various situations. The first example discussed is the folding of the freshly synthesized protein with the aid of chaperones. Another important aspect of protein-protein interactions is the folding of the recently characterized intrinsically disordered proteins, where binding to target proteins plays a crucial role in the completion of the folding process. The third scenario where the folding superfunnel is used is the formation of aggregates from destabilized proteins, which is an important factor in the case of several conformational diseases. The folding superfunnel constructed here with minimal assumptions about the interaction potential explains all three cases mentioned above. Proteins 2016; 84:1009-1016. © 2016 Wiley Periodicals, Inc. PMID:27090200

  13. Describing the Breakbone Fever: IDODEN, an Ontology for Dengue Fever

    PubMed Central

    Mitraka, Elvira; Topalis, Pantelis; Dritsou, Vicky; Dialynas, Emmanuel; Louis, Christos

    2015-01-01

    Background Ontologies represent powerful tools in information technology because they enhance interoperability and facilitate, among other things, the construction of optimized search engines. To address the need to expand the toolbox available for the control and prevention of vector-borne diseases we embarked on the construction of specific ontologies. We present here IDODEN, an ontology that describes dengue fever, one of the globally most important diseases that are transmitted by mosquitoes. Methodology/Principal Findings We constructed IDODEN using open source software, and modeled it on IDOMAL, the malaria ontology developed previously. IDODEN covers all aspects of dengue fever, such as disease biology, epidemiology and clinical features. Moreover, it covers all facets of dengue entomology. IDODEN, which is freely available, can now be used for the annotation of dengue-related data and, in addition to its use for modeling, it can be utilized for the construction of other dedicated IT tools such as decision support systems. Conclusions/Significance The availability of the dengue ontology will enable databases hosting dengue-associated data and decision-support systems for that disease to perform most efficiently and to link their own data to those stored in other independent repositories, in an architecture- and software-independent manner. PMID:25646954

  14. Transformationally Describing Halo Bias and Exposing Cosmological Information

    NASA Astrophysics Data System (ADS)

    Neyrinck, Mark C.; Aragon-Calvo, M.; Jeong, D.; Wang, X.

    2014-01-01

    Local density transforms have many uses in large-scale structure. If a logarithm is applied to the matter density field, the statistics are much better-behaved (covariances are reduced), and redshift-space distortions even become easier to model. Also, the biasing of haloes compared to matter is well-described by local transforms, even deeply into voids. For the first time, we cleanly resolve an exponential suppression of halo formation in voids, which is well-fit by the excursion-set model. A void is like a local low-density (open) universe, where fluctuations are suppressed. So forming a galaxy inside a void is as rare as forming a rich cluster in a high-density region. What enables this measurement is the MIP ensemble of N-body simulations, in which halo discreteness, exclusion, and stochasticity are made negligible by stacking hundreds of simulations with the same large-scale cosmic web, but which differ on small scales, i.e. in the way the cosmic web is populated with haloes.
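    The variance-taming effect of the log transform can be illustrated on a toy lognormal "density field" (a stdlib-only stand-in for an actual matter field, not the MIP simulation data): the raw field is strongly right-skewed, while its logarithm is nearly symmetric.

```python
import math
import random
import statistics

def skewness(xs):
    """Population skewness: third standardized central moment."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

rng = random.Random(42)
# Toy stand-in for matter density fluctuations: lognormal values around 1
density = [rng.lognormvariate(0.0, 1.0) for _ in range(20000)]
log_density = [math.log(d) for d in density]

# The log field is far less skewed, hence its better-behaved statistics
print(abs(skewness(log_density)) < abs(skewness(density)))
```

    This is the one-point analogue of the abstract's claim: after the local transform the field's statistics are closer to Gaussian, so covariances shrink and modeling becomes easier.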

  15. Describing current and potential markets for alternative-fuel vehicles

    SciTech Connect

    1996-03-26

    Motor vehicles are a major source of greenhouse gases, and the rising numbers of motor vehicles and miles driven could lead to more harmful emissions that may ultimately affect the world's climate. One approach to curtailing such emissions is to use alternative fuels instead of gasoline: LPG, compressed natural gas, or alcohol fuels. In addition to the greenhouse gases, other pollutants, such as ozone and CO, can be harmful to human health; the Clean Air Act Amendments of 1990 authorized EPA to set National Ambient Air Quality Standards to control these. The Energy Policy Act of 1992 (EPACT) was the first new law to emphasize strengthened energy security and decreased reliance on foreign oil since the oil shortages of the 1970s. EPACT emphasized increasing the number of alternative-fuel vehicles (AFVs) by mandating their incremental increase of use by Federal, state, and alternative fuel provider fleets over the next few years. Its goals are far from being met; alternative fuels' share remains trivial, about 0.3%, despite gains. This report describes current and potential markets for AFVs; it begins by assessing the total vehicle stock, and then it focuses on current use of AFVs in alternative fuel provider fleets and the potential for use of AFVs in US households.

  16. Jan Evangelista Purkynje (1787-1869): first to describe fingerprints.

    PubMed

    Grzybowski, Andrzej; Pietrzak, Krzysztof

    2015-01-01

    Fingerprints have been used for years as the accepted tool in criminology and for identification. The first system of classification of fingerprints was introduced by Jan Evangelista Purkynje (1787-1869), a Czech physiologist, in 1823. He divided the papillary lines into nine types, based on their geometric arrangement. This work, however, was not recognized internationally for many years. In 1858, Sir William Herschel (1833-1917) registered fingerprints for those signing documents at the Indian magistrate's office in Jungipoor. Henry Faulds (1843-1930) in 1880 proposed using ink for fingerprint determination and people identification, and Francis Galton (1822-1911) collected 8000 fingerprints and developed their classification based on the spirals, loops, and arches. In 1892, Juan Vucetich (1858-1925) created his own fingerprint identification system and proved that a woman was responsible for killing two of her sons. In 1896, a London police officer Edward Henry (1850-1931) expanded on earlier systems of classification and used papillary lines to identify criminals; it was his system that was adopted by the forensic world. The work of Jan Evangelista Purkynje (1787-1869) (Figure 1), who in 1823 was the first to describe fingerprints in detail, is almost forgotten. He also established their classification. The year 2013 marked the 190th anniversary of the publication of his work on this topic. Our contribution is an attempt to introduce the reader to this scientist and his discoveries in the field of fingerprint identification. PMID:25530005

  17. Formal verification of digital circuits described in VHDL

    NASA Astrophysics Data System (ADS)

    Salem, Ashraf Mohammed El-Farghly

    1992-01-01

    The formal verification of digital circuits described in VHSIC (very high speed integrated circuit) hardware description language (VHDL) is presented. VHDL is made processable by proof tools. A subset, called P-VHDL, dedicated to the description of combinational and synchronous sequential circuits is defined. The semantics of this subset is much simpler than that of complete VHDL. The delta delay is replaced by a serialization function, and the time scale is chosen equal to the clock period. The use of the finite state machine as a formal model for the subset thus became possible. The finite state machine semantics is shown to represent the P-VHDL semantics. Based on this formal model, a proof-oriented compiler for P-VHDL is written. A complete denotational semantics for P-VHDL is defined. Three different domains for the three value holders in the language are proposed: the variables, the signals, and the registers. Formal semantics for the VHDL timing constructs are given. The equivalence between these semantics and VHDL's informal operational semantics is proven. It is shown that the semantics can form a basis for building a formal timing verifier.

  18. Hierarchical structure analysis describing abnormal base composition of genomes

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhengqing; Liu, Jian-Kun; She, Zhen-Su

    2005-10-01

    Abnormal base compositional patterns of genomic DNA sequences are studied in the framework of a hierarchical structure (HS) model originally proposed for the study of fully developed turbulence [She and Lévêque, Phys. Rev. Lett. 72, 336 (1994)]. The HS similarity law is verified over scales between 10^3 bp and 10^5 bp, and the HS parameter β is proposed to describe the degree of heterogeneity in the base composition patterns. More than one hundred bacteria, archaea, virus, yeast, and human genome sequences have been analyzed and the results show that the HS analysis efficiently captures abnormal base composition patterns, and the parameter β is a characteristic measure of the genome. Detailed examination of the values of β reveals an intriguing link to the evolutionary events of genetic material transfer. Finally, a sequence complexity (S) measure is proposed to characterize gradual increase of organizational complexity of the genome during the evolution. The present study raises several interesting issues in the evolutionary history of genomes.

  19. Canonical quantization of a string describing N branes at angles

    NASA Astrophysics Data System (ADS)

    Pesando, Igor

    2014-12-01

    We study the canonical quantization of a bosonic string in presence of N twist fields. This generalizes the quantization of the twisted string in two ways: the in and out states are not necessarily twisted and the number of twist fields N can be bigger than 2. In order to quantize the theory we need to find the normal modes. Then we need to define a product between two modes which is conserved. Because of this we need to use the Klein-Gordon product and to separate the string coordinate into the classical and the quantum part. The quantum part has different boundary conditions than the original string coordinates but these boundary conditions are precisely those which make the operator describing the equation of motion self-adjoint. The splitting of the string coordinates into a classical and quantum part allows the formulation of an improved overlap principle. Using this approach we then proceed to compute the generating function for the generic correlator with L untwisted operators and N (excited) twist fields for branes at angles. We recover as expected the results previously obtained using the path integral. This construction explains why these correlators are given by a generalization of the Wick theorem.

  20. Conceptual hierarchical modeling to describe wetland plant community organization

    USGS Publications Warehouse

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed this procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.

  1. A hybrid model describing ion induced kinetic electron emission

    NASA Astrophysics Data System (ADS)

    Hanke, S.; Duvenbeck, A.; Heuser, C.; Weidtmann, B.; Wucher, A.

    2015-06-01

    We present a model to describe the kinetic internal and external electron emission from an ion-bombarded metal target. The model is based upon a molecular dynamics treatment of the nuclear degrees of freedom; the electronic system is treated as a quasi-free electron gas characterized by its Fermi energy, electron temperature, and a characteristic attenuation length. In a series of previous works we have employed this model, which includes the local kinetic excitation as well as the rapid spread of the generated excitation energy, in order to calculate internal and external electron emission yields within the framework of a Richardson-Dushman-like thermionic emission model. However, this kind of treatment turned out to fail in realistically predicting experimentally measured internal electron yields, mainly due to the restriction of electronic transport to a diffusive treatment. Here, we propose a slightly modified approach that additionally incorporates the contribution of hot electrons which are generated in the bulk material and undergo ballistic transport towards the emitting interface.

  2. An ontological approach to describing neurons and their relationships

    PubMed Central

    Hamilton, David J.; Shepherd, Gordon M.; Martone, Maryann E.; Ascoli, Giorgio A.

    2012-01-01

    The advancement of neuroscience, perhaps one of the most information rich disciplines of all the life sciences, requires basic frameworks for organizing the vast amounts of data generated by the research community to promote novel insights and integrated understanding. Since Cajal, the neuron remains a fundamental unit of the nervous system, yet even with the explosion of information technology, we still have few comprehensive or systematic strategies for aggregating cell-level knowledge. Progress toward this goal is hampered by the multiplicity of names for cells and by lack of a consensus on the criteria for defining neuron types. However, through umbrella projects like the Neuroscience Information Framework (NIF) and the International Neuroinformatics Coordinating Facility (INCF), we have the opportunity to propose and implement an informatics infrastructure for establishing common tools and approaches to describe neurons through a standard terminology for nerve cells and a database (a Neuron Registry) where these descriptions can be deposited and compared. This article provides an overview of the problem and outlines a solution approach utilizing ontological characterizations. Based on illustrative implementation examples, we also discuss the need for consensus criteria to be adopted by the research community, and considerations on future developments. A scalable repository of neuron types will provide researchers with a resource that materially contributes to the advancement of neuroscience. PMID:22557965

  3. The new product fAPARchl is better than fAPARcanopy to describe terrestrial ecosystem photosynthesis (GPP)

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Middleton, E.; Cheng, Y.; Wei, J.

    2011-12-01

    Existing global climate models have been unable to accurately describe the intensity of photosynthetic activity or to discriminate this functionality among terrestrial vegetation canopies/ecosystems. Many satellite-based production efficiency models (PEMs), land-atmosphere interaction models and biogeochemical models (e.g., SiB, CLM and CASA) have used the concept of the fraction of photosynthetically active radiation (PAR) absorbed for vegetation photosynthesis (fAPARPSN) in their modeling work. These models typically use fAPAR for the whole canopy (fAPARcanopy) (usually denoted as FPAR or fAPAR) to represent fAPARPSN. However, this widely used FPAR parameter has proved to be physiologically insufficient to describe or retrieve terrestrial ecosystem photosynthesis. A much better alternative is to utilize the fraction of PAR absorbed by chlorophyll throughout a canopy/ecosystem (i.e., fAPARchl) to replace FPAR in these calculations. In this study, we present examples of fAPARchl, leaf fAPARNPV (the non-photosynthetic canopy fraction, without chlorophyll) and fAPARcanopy at 30 m spatial resolution for deciduous forests, evergreen forests and crops, obtained from Earth Observing One (EO-1) Hyperion satellite imagery. The differences obtained between fAPARchl and fAPARcanopy are significant for all of these vegetation types across the whole growing season. For instance, for the evergreen forests, fAPARchl changes seasonally, whereas the seasonal trend for fAPARcanopy is flat. Consequently, these differences translate into significant differences in estimates of fAPARPSN. We suggest modeling scientists should compare simulation outputs using fAPARcanopy versus fAPARchl, to check whether the differences are significant.

  4. Quantitative metrics that describe river deltas and their channel networks

    NASA Astrophysics Data System (ADS)

    Edmonds, Douglas A.; Paola, Chris; Hoyal, David C. J. D.; Sheets, Ben A.

    2011-12-01

    Densely populated river deltas are losing land at an alarming rate and to successfully restore these environments we must understand the details of their morphology. Toward this end we present a set of five metrics that describe delta morphology: (1) the fractal dimension, (2) the distribution of island sizes, (3) the nearest-edge distance, (4) a synthetic distribution of sediment fluxes at the shoreline, and (5) the nourishment area. The nearest-edge distance is the shortest distance to channelized or unchannelized water from a given location on the delta and is analogous to the inverse of drainage density in tributary networks. The nourishment area is the downstream delta area supplied by the sediment coming through a given channel cross section and is analogous to catchment area in tributary networks. As a first step, we apply these metrics to four relatively simple, fluvially dominated delta networks. For all these deltas, the average nearest-edge distances are remarkably constant moving down delta suggesting that the network organizes itself to maintain a consistent distance to the nearest channel. Nourishment area distributions can be predicted from a river mouth bar model of delta growth, and also scale with the width of the channel and with the length of the longest channel, analogous to Hack's law for drainage basins. The four delta channel networks are fractal, but power laws and scale invariance appear to be less pervasive than in tributary networks. Thus, deltas may occupy an advantageous middle ground between complete similarity and complete dissimilarity, where morphologic differences indicate different behavior.
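The nearest-edge distance defined above can be sketched as a multi-source breadth-first search over a binarized delta map; the grid, 4-connectivity, and cell-count distance units here are illustrative assumptions, not the authors' processing pipeline.

```python
from collections import deque

def nearest_edge_distance(grid):
    """Multi-source BFS: for each land cell (0), the 4-connected distance,
    in cells, to the nearest water cell (1). A toy sketch of the
    nearest-edge metric, not the paper's implementation."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:          # water cells seed the BFS at 0
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# A 5x5 toy delta surface: 1 = channelized/unchannelized water, 0 = land
grid = [
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 0, 0, 0, 0],
]
dist = nearest_edge_distance(grid)
land = [dist[r][c] for r in range(5) for c in range(5) if grid[r][c] == 0]
avg = sum(land) / len(land)              # average nearest-edge distance
```

Averaging `avg` within down-delta bins of a real delta mask would reproduce the kind of down-delta profile the abstract describes.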

  5. Probabilistic models to describe the dynamics of migrating microbial communities.

    PubMed

    Schroeder, Joanna L; Lunn, Mary; Pinto, Ameet J; Raskin, Lutgarde; Sloan, William T

    2015-01-01

    In all but the most sterile environments bacteria will reside in fluid being transported through conduits and some of these will attach and grow as biofilms on the conduit walls. The concentration and diversity of bacteria in the fluid at the point of delivery will be a mix of those when it entered the conduit and those that have become entrained into the flow due to seeding from biofilms. Examples include fluids transported through conduits such as drinking water pipe networks, endotracheal tubes, catheters and ventilation systems. Here we present two probabilistic models to describe changes in the composition of bulk fluid microbial communities as they are transported through a conduit whilst exposed to biofilm communities. The first (discrete) model simulates absolute numbers of individual cells, whereas the other (continuous) model simulates the relative abundance of taxa in the bulk fluid. The discrete model is founded on a birth-death process whereby the community changes one individual at a time and the numbers of cells in the system can vary. The continuous model is a stochastic differential equation derived from the discrete model and can also accommodate changes in the carrying capacity of the bulk fluid. These models provide a novel Lagrangian framework to investigate and predict the dynamics of migrating microbial communities. In this paper we compare the two models, discuss their merits, possible applications and present simulation results in the context of drinking water distribution systems. Our results provide novel insight into the effects of stochastic dynamics on the composition of non-stationary microbial communities that are exposed to biofilms and provide a new avenue for modelling microbial dynamics in systems where fluids are being transported. PMID:25803866
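A minimal sketch of the discrete birth-death idea, assuming a Moran-style one-individual-at-a-time update with a fixed carrying capacity and a made-up immigration probability `m` from the biofilm; the authors' actual model is more general (it allows the population size to vary).

```python
import random

def simulate_community(counts, biofilm, m, steps, seed=0):
    """One-at-a-time birth-death update: each step one random individual
    dies and is replaced either by the offspring of a bulk-fluid
    individual (prob. 1 - m) or by a migrant drawn from the biofilm
    composition (prob. m). Toy stand-in for the paper's discrete model."""
    rng = random.Random(seed)
    counts = dict(counts)
    taxa = list(biofilm)
    weights = [biofilm[t] for t in taxa]
    for _ in range(steps):
        # death: remove one individual chosen proportionally to abundance
        pool = [t for t in counts for _ in range(counts[t])]
        dead = rng.choice(pool)
        counts[dead] -= 1
        # replacement: biofilm migrant or bulk-fluid offspring
        if rng.random() < m:
            born = rng.choices(taxa, weights=weights)[0]
        else:
            pool = [t for t in counts for _ in range(counts[t])]
            born = rng.choice(pool)
        counts[born] = counts.get(born, 0) + 1
    return counts

start = {"A": 50, "B": 50}        # bulk-fluid community entering the pipe
biofilm = {"C": 1.0}              # wall biofilm dominated by taxon C
end = simulate_community(start, biofilm, m=0.2, steps=500)
```

With a nonzero `m`, repeated runs show the biofilm taxon progressively entrained into the bulk community, the qualitative behaviour the abstract describes.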

  6. Autopathography and depression: describing the 'despair beyond despair'.

    PubMed

    Moran, Stephen T

    2006-01-01

    The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, emphasizes diagnosis and statistically significant commonalities in mental disorders. As stated in the Introduction, "[i]t must be admitted that no definition adequately specifies precise boundaries for the concept of 'mental disorder' " (DSM-IV, 1994, xxi). Further, "[t]he clinician using DSM-IV should ... consider that individuals sharing a diagnosis are likely to be heterogeneous, even in regard to the defining features of the diagnosis, and that boundary cases will be difficult to diagnose in any but a probabilistic fashion" (DSM-IV, 1994, xxii). This article proposes that it may be helpful for clinicians to study narratives of illness which emphasize this heterogeneity over statistically significant symptoms. This paper examines the recorded experiences of unusually articulate sufferers of the disorder classified as Major Depression. Although sharing a diagnosis, Hemingway, Fitzgerald, and Styron demonstrated different understandings of their illness and its symptoms and experienced different resolutions, which may have had something to do with the differing meanings they made of it. I have proposed a word, autopathography, to describe a type of literature in which the author's illness is the primary lens through which the narrative is filtered. This word is an augmentation of an existing word, pathography, which The Oxford English Dictionary, Second Edition, defines as "a) [t]he, or a, description of a disease," and "b) [t]he, or a, study of the life and character of an individual or community as influenced by a disease." The second definition is the one that I find relevant and which I feel may be helpful to clinicians in broadening their understanding of the patient's experience. PMID:16721676

  7. Sensitivity analysis approach to multibody systems described by natural coordinates

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2014-03-01

    The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation involves so many rules for choosing the generalized coordinates that it hinders the automation of modeling. A first order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. Firstly, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements as well as the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed analysis process of first order direct sensitivity analysis and the related solving strategy are provided based on the previous modeling system. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful for reducing the complexity of sensitivity analysis, which provides a practical and effective way to obtain sensitivities for the optimization problems of multibody systems.

  8. Accurate skin dose measurements using radiochromic film in clinical applications

    SciTech Connect

    Devic, S.; Seuntjens, J.; Abdel-Rahman, W.; Evans, M.; Olivares, M.; Podgorsak, E.B.; Vuong, Te; Soares, Christopher G.

    2006-04-15

    Megavoltage x-ray beams exhibit the well-known phenomena of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of the surface dose measurements, however, depend vastly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 µm. We used the new GAFCHROMIC® dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 µm. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 µm to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10×10 cm² increases from 14% to 43%. For the three GAFCHROMIC® dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC® films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC® film model. Finally, a procedure that uses the EBT model GAFCHROMIC® film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.

  9. A Biophysical Neural Model To Describe Spatial Visual Attention

    NASA Astrophysics Data System (ADS)

    Hugues, Etienne; José, Jorge V.

    2008-02-01

    Visual scenes contain enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in a specialized visual area known as V4 when the animal pays attention directly towards a particular stimulus location. However, the nature of the attentional input to V4 remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory inputs, at the high levels found in in vivo conditions. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  10. Describing Directional Cell Migration with a Characteristic Directionality Time

    PubMed Central

    Loosley, Alex J.; O’Brien, Xian M.; Reichner, Jonathan S.; Tang, Jay X.

    2015-01-01

    Many cell types can bias their direction of locomotion by coupling to external cues. Characteristics such as how fast a cell migrates and the directedness of its migration path can be quantified to provide metrics that determine which biochemical and biomechanical factors affect directional cell migration, and by how much. To be useful, these metrics must be reproducible from one experimental setting to another. However, most are not reproducible because their numerical values depend on technical parameters like sampling interval and measurement error. To address the need for a reproducible metric, we analytically derive a metric called directionality time, the minimum observation time required to identify motion as directionally biased. We show that the corresponding fit function is applicable to a variety of ergodic, directionally biased motions. A motion is ergodic when the underlying dynamical properties such as speed or directional bias do not change over time. Measuring the directionality of nonergodic motion is less straightforward but we also show how this class of motion can be analyzed. Simulations are used to show the robustness of directionality time measurements and its decoupling from measurement errors. As a practical example, we demonstrate the measurement of directionality time, step-by-step, on noisy, nonergodic trajectories of chemotactic neutrophils. Because of its inherent generality, directionality time ought to be useful for characterizing a broad range of motions including intracellular transport, cell motility, and animal migration. PMID:25992908

  11. Structural arrest transitions in fluids described by two Yukawa potentials

    NASA Astrophysics Data System (ADS)

    Wu, Jianlan; Liu, Yun; Chen, Wei-Ren; Cao, Jianshu; Chen, Sow-Hsin

    2004-11-01

    We study a model colloidal system where particles interact via short-range attractive and long-range repulsive Yukawa potentials. Using the structure factor calculated from the mean-spherical approximation as the input, the kinetic phase diagrams as functions of the attraction depth and the volume fraction are obtained by calculating the Debye-Waller factors in the framework of the mode-coupling theory for three different heights of the repulsive barrier. The glass-glass reentrance phenomenon in the attractive colloidal case is also observed in the presence of the long-range repulsive barrier, which results in the lower and upper glass regimes. Competition between the short-range attraction and the long-range repulsion gives rise to new regimes associated with clusters such as “static cluster glass” and “dynamic cluster glass,” which appear in the lower glass regime. Along the liquid-glass transition line between the liquid regime and the lower glass regime, crossover points separating different glass states are identified.

  12. Optimality approaches to describe characteristic fluvial patterns on landscapes

    PubMed Central

    Paik, Kyungrock; Kumar, Praveen

    2010-01-01

    Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257

  13. A Biophysical Neural Model To Describe Spatial Visual Attention

    SciTech Connect

    Hugues, Etienne; Jose, Jorge V.

    2008-02-14

    Visual scenes contain enormous amounts of spatial and temporal information that are transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in a specialized visual area known as V4 when the animal pays attention directly towards a particular stimulus location. However, the nature of the attentional input to V4 remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory inputs, at the high levels found in in vivo conditions. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  14. Statistics of topography : multifractal approach to describe planetary topography

    NASA Astrophysics Data System (ADS)

    Landais, Francois; Schmidt, Frédéric; Lovejoy, Shaun

    2016-04-01

    In the last decades, a huge amount of topographic data has been obtained by several techniques (laser and radar altimetry, DTM…) for different bodies in the solar system. In each case, topographic fields exhibit an extremely high variability with details at each scale, from millimeters to thousands of kilometers. In our study, we investigate the statistical properties of the topography. Our statistical approach is motivated by the well-known scaling behavior of topography that has been widely studied in the past. Indeed, scaling laws are strongly present in geophysical fields and can be studied using the fractal formalism. More precisely, we expect multifractal behavior in global topographic fields. This behavior reflects the high variability and intermittency observed in topographic fields that cannot be generated by simple scaling models. In the multifractal formalism, each statistical moment exhibits a different scaling law characterized by a function called the moment scaling function. Previous studies conducted at regional scale demonstrated that topography presents multifractal statistics (Gagnon et al., 2006, NPG). We have obtained similar results on Mars (Landais et al. 2015) and more recently on other bodies in the solar system, including the Moon, Venus and Mercury. We present the results of different multifractal approaches performed on global and regional bases and compare the fractal parameters from one body to another.

  15. Noise reduction for modal parameters estimation using algorithm of solving partially described inverse singular value problem

    NASA Astrophysics Data System (ADS)

    Bao, Xingxian; Cao, Aixia; Zhang, Jing

    2016-07-01

    Modal parameter estimation plays an important role in structural health monitoring. Accurately estimating the modal parameters of structures is more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm for solving the partially described inverse singular value problem (PDISVP), combined with the complex exponential (CE) method, to estimate the modal parameters. The PDISVP solving method reconstructs an L2-norm optimized (filtered) data matrix from the measured (noisy) data matrix, where the prescribed data constraints are one or several sets of singular triplets of the matrix. The measured data matrix is Hankel structured and is constructed from the measured impulse response function (IRF). The reconstructed matrix must maintain the Hankel structure, and be lowered in rank as well. Once the filtered IRF is obtained, the CE method can be applied to extract the modal parameters. Two physical experiments, a steel cantilever beam with 10 accelerometers mounted and a steel plate with 30 accelerometers mounted, each excited by an impulsive load, are investigated to test the applicability of the proposed scheme. In addition, a consistency diagram is proposed to examine the agreement among the modal parameters estimated from the different accelerometers. Results indicate that the PDISVP-CE method can significantly remove noise from measured signals and accurately estimate the modal frequencies and damping ratios.
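The low-rank Hankel filtering step can be sketched with a standard SSA/Cadzow-style truncated SVD followed by anti-diagonal averaging to restore the Hankel structure; this is a generic stand-in for the paper's PDISVP solver, and the signal, rank, and noise level below are invented for illustration.

```python
import numpy as np

def hankel_denoise(y, rank, nrows=None):
    """Low-rank Hankel filtering: build a Hankel matrix from the signal,
    keep the leading singular triplets, then restore the Hankel structure
    by averaging along anti-diagonals (SSA/Cadzow-style sketch)."""
    n = len(y)
    nrows = nrows or n // 2
    ncols = n - nrows + 1
    H = np.array([y[i:i + ncols] for i in range(nrows)])   # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]              # rank truncation
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(nrows):                 # anti-diagonal averaging
        out[i:i + ncols] += Hr[i]
        cnt[i:i + ncols] += 1
    return out / cnt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
clean = np.exp(-2 * t) * np.cos(2 * np.pi * 12 * t)   # one decaying mode
noisy = clean + 0.2 * rng.standard_normal(t.size)     # simulated noisy IRF
den = hankel_denoise(noisy, rank=2)                   # rank 2: one mode
```

A single decaying sinusoid generates a rank-2 Hankel matrix, so truncating at rank 2 removes most of the noise while preserving the frequency and damping information the CE step needs.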

  16. Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure

    PubMed Central

    Skvortsova, Elena B.; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between the biologically active soil root zone and atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore size and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin-sections (2.21×2.21 cm²) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. The correlation functions and associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification
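A two-point probability function S2(r) of the kind listed above can be estimated by direct counting on a binary pore/solid image; this sketch uses only horizontal lags on a tiny made-up array, whereas the study's analyses would average over directions and use much larger thin-section images.

```python
import numpy as np

def two_point_probability(img, max_r):
    """S2(r): probability that two points separated by a horizontal lag r
    both fall in the pore phase (img == 1). Direct-counting sketch."""
    img = np.asarray(img, dtype=float)
    n = img.shape[1]
    return np.array([
        (img[:, :n - r] * img[:, r:]).mean() for r in range(max_r + 1)
    ])

# Toy binary microstructure: 1 = pore, 0 = solid
img = np.array([[1, 1, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 1]])
s2 = two_point_probability(img, 3)
```

Note that S2(0) equals the porosity, the standard sanity check for this estimator; an annealing-based reconstruction would then seek a new binary image whose S2 matches this target curve.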

  17. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    PubMed Central

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we
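The Boolean-logic idea can be illustrated with a toy two-gene circuit whose synchronous updates oscillate with period four and accept a crude light input; this circuit is hypothetical, not one of the fitted clock models from the paper.

```python
def step(state, light):
    """Synchronous update of a toy two-gene Boolean 'clock': gene X is
    repressed by Y, Y is activated by X; a light input forces X on,
    crudely mimicking entrainment by dawn. Illustrative only."""
    x, y = state
    return (int((not y) or light), int(x))

state = (0, 0)
trajectory = [state]
for t in range(8):
    state = step(state, light=False)   # free-running (constant darkness)
    trajectory.append(state)
```

In free-running conditions the state cycles (0,0) → (1,0) → (1,1) → (0,1) → (0,0), a period-4 endogenous rhythm; fitting real circuits, as in the paper, means choosing the update logic and time step so such cycles match measured expression phases.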

  18. Lattice QCD calculation of form factors describing the rare decays B→K*ℓ+ℓ- and Bs→ϕℓ+ℓ-

    NASA Astrophysics Data System (ADS)

    Horgan, Ronald R.; Liu, Zhaofeng; Meinel, Stefan; Wingate, Matthew

    2014-05-01

    The rare decays B0→K*0μ+μ- and Bs→ϕμ+μ- are now being observed with enough precision to test Standard Model predictions. A full understanding of these decays requires accurate determinations of the corresponding hadronic form factors. Here we present results of lattice QCD calculations of the B→K* and Bs→ϕ form factors. We also determine the form factors relevant for the decays Bs→K*ℓν and Bs→K¯*0ℓ+ℓ-. We use full-QCD configurations with 2+1 flavors of sea quarks simulated with an improved staggered action, and we employ lattice nonrelativistic QCD to describe the bottom quark.

  19. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    PubMed Central

    2011-01-01

    Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
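The analysis (update) step of ensemble data assimilation can be sketched with a stochastic ensemble Kalman filter for one scalar observation; the paper uses the more sophisticated Local Ensemble Transform Kalman Filter, and the state, observation operator, and error variances below are invented for illustration.

```python
import numpy as np

def enkf_update(X, y_obs, H, r, seed=0):
    """Stochastic EnKF analysis step for a single scalar observation.
    X is the prior ensemble, shape (members, state_dim); each member is
    nudged toward a perturbed observation by a Kalman gain estimated
    from ensemble statistics. Simplified stand-in for the LETKF."""
    rng = np.random.default_rng(seed)
    Y = X @ H                                   # predicted observations
    cov_xy = ((X - X.mean(0)).T @ (Y - Y.mean())) / (len(X) - 1)
    gain = cov_xy / (np.var(Y, ddof=1) + r)     # Kalman gain, shape (dim,)
    perturbed = y_obs + rng.normal(0, np.sqrt(r), len(X))
    return X + np.outer(perturbed - Y, gain)

rng = np.random.default_rng(2)
truth = np.array([1.0, -2.0])                   # hypothetical true state
X = truth + rng.normal(0, 1.0, (50, 2))         # 50-member prior ensemble
H = np.array([1.0, 0.0])                        # observe first component
Xa = enkf_update(X, y_obs=truth @ H, r=0.1, H=H)
```

After the update the ensemble spread in the observed component shrinks toward the observation-error level, which is what allows the filter to "shadow" a growing tumor across repeated forecast/update cycles.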

  20. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
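The quadratic-in-one-color functional form the abstract describes is easy to state in code; the coefficients and magnitudes below are placeholders, since the real redshift-binned parameter tables come from the paper.

```python
def k_correction(color, coeffs):
    """Quadratic-in-one-color K-correction, K = a + b*c + c2*c**2.
    The coefficients are made-up placeholders for one hypothetical
    (filter, redshift-bin) cell of the paper's tables."""
    a, b, c2 = coeffs
    return a + b * color + c2 * color ** 2

coeffs = (0.05, 0.30, -0.02)              # hypothetical table entry
apparent_r, distance_modulus = 18.4, 40.1  # invented example galaxy
M_r = apparent_r - distance_modulus - k_correction(color=0.8, coeffs=coeffs)
```

The random error the paper quotes would come from scattering the template library through the same quadratic fit, something this sketch omits.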

  1. Identifying and Describing Tutor Archetypes: The Pragmatist, the Architect, and the Surveyor

    ERIC Educational Resources Information Center

    Harootunian, Jeff A.; Quinn, Robert J.

    2008-01-01

    In this article, the authors identify and anecdotally describe three tutor archetypes: the pragmatist, the architect, and the surveyor. These descriptions, based on observations of remedial mathematics tutors at a land-grant university, shed light on a variety of philosophical beliefs regarding, and pedagogical approaches to, tutoring. An analysis…

  2. Producing Accurate Stereographic Images with a Flashlight and Layers of Glass: A Source for Stereopsis via Slides or Overhead Projection.

    ERIC Educational Resources Information Center

    Strauss, Michael J.; Levine, Shellie H.

    1985-01-01

    Describes an extremely simple technique (using only Dreiding or Framework molecular models, a flashlight, small sheets of glass, and a piece of cardboard) which produces extremely accurate line drawings of stereoscopic images. Advantages of using the system are noted. (JN)

  3. Finite volume approach for the instationary Cosserat rod model describing the spinning of viscous jets

    NASA Astrophysics Data System (ADS)

    Arne, Walter; Marheineke, Nicole; Meister, Andreas; Schiessl, Stefan; Wegener, Raimund

    2015-08-01

    The spinning of slender viscous jets can be asymptotically described by one-dimensional models that consist of systems of partial and ordinary differential equations. Whereas well-established string models only possess solutions for certain choices of parameters and configurations, the more sophisticated rod model is not limited by restrictions. It can be considered as an ɛ-regularized string model, but containing the slenderness ratio ɛ in the equations complicates its numerical treatment. We develop numerical schemes for fixed or enlarging (time-dependent) domains, using a finite volume approach in space with mixed central, up- and down-winded differences and stiffly accurate Radau methods for the time integration. For the first time, results of instationary simulations for a fixed or growing jet in a rotational spinning process are presented for arbitrary parameter ranges.

  4. Angoricity and compactivity describe the jamming transition in soft particulate matter

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Song, Chaoming; Wang, Ping; Makse, Hernán A.

    2010-09-01

    The application of concepts from equilibrium statistical mechanics to out-of-equilibrium systems has a long history of describing diverse systems ranging from glasses to granular materials. For dissipative jammed systems (particulate grains or droplets), a key concept is to replace the energy ensemble describing conservative systems by the volume-stress ensemble. Here, we test the applicability of the volume-stress ensemble to describe the jamming transition by comparing the jammed configurations obtained by dynamics with those averaged over the ensemble as a probe of ergodicity. Agreement between both methods suggests the idea of "thermalization" at a given angoricity and compactivity. We elucidate the thermodynamic order of the jamming transition by showing the absence of critical fluctuations in static observables like pressure and volume. The approach allows the calculation of observables such as the entropy, volume, pressure, coordination number and distribution of forces to characterize the scaling laws near the jamming transition from a statistical mechanics viewpoint.

  5. The Laboratory Parenting Assessment Battery: Development and Preliminary Validation of an Observational Parenting Rating System

    ERIC Educational Resources Information Center

    Wilson, Sylia; Durbin, C. Emily

    2012-01-01

    Investigations of contributors to and consequences of the parent-child relationship require accurate assessment of the nature and quality of parenting. The present study describes the development and psychometric evaluation of the Laboratory Parenting Assessment Battery (Lab-PAB), an observational rating system that assesses parenting behaviors…

  6. A General Pairwise Interaction Model Provides an Accurate Description of In Vivo Transcription Factor Binding Sites

    PubMed Central

    Santolini, Marc; Mora, Thierry; Hakim, Vincent

    2014-01-01

    The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
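The contrast between the PWM and the pairwise interaction model can be sketched in a few lines. All motif numbers and couplings below are illustrative, not values from the paper:

```python
import math

# Toy position weight matrix (PWM) for a length-4 motif: probability of each
# base at each position (illustrative values only).
pwm = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
]

def pwm_energy(seq):
    """PWM model: the binding energy is additive over single positions."""
    return sum(math.log(pwm[i][b]) for i, b in enumerate(seq))

# Pairwise couplings J[(i, j)][(a, b)]: extra energy for seeing bases (a, b)
# at positions (i, j). Kept as a sparse dictionary, since the paper finds the
# interactions to be sparse and mostly between nearby positions.
J = {(0, 1): {("A", "C"): 0.5, ("A", "T"): -0.3}}

def pim_energy(seq):
    """Pairwise interaction model (PIM): PWM terms plus pairwise couplings."""
    e = pwm_energy(seq)
    for (i, j), table in J.items():
        e += table.get((seq[i], seq[j]), 0.0)
    return e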

  7. Detailed observations of the source of terrestrial narrowband electromagnetic radiation

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.

    1982-01-01

    Detailed observations are presented of a region near the terrestrial plasmapause where narrowband electromagnetic radiation (previously called escaping nonthermal continuum radiation) is being generated. These observations show a direct correspondence between the narrowband radio emissions and electron cyclotron harmonic waves near the upper hybrid resonance frequency. In addition, electromagnetic radiation propagating in the Z-mode is observed in the source region which provides an extremely accurate determination of the electron plasma frequency and, hence, density profile of the source region. The data strongly suggest that electrostatic waves and not Cerenkov radiation are the source of the banded radio emissions and define the coupling which must be described by any viable theory.

  8. Automatic classification and accurate size measurement of blank mask defects

    NASA Astrophysics Data System (ADS)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. Analysis of the impact of blank mask defects depends directly on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is based crudely on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge associated with them makes the manual defect review process arduous and sensitive to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CDSEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature (e.g., particle, scratch, resist void), useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  9. Scallops skeletons as tools for accurate proxy calibration

    NASA Astrophysics Data System (ADS)

    Lorrain, A.; Paulet, Y.-M.; Chauvaud, L.; Dunbar, R.; Mucciarone, D.; Pécheyran, C.; Amouroux, D.; Fontugne, M.

    2003-04-01

    Bivalve skeletons can yield excellent geochemical proxies, but general calibration of these proxies rests on an approximate time basis because the growth rhythm is poorly understood. In this context, the Great scallop, Pecten maximus, appears to be a powerful tool, as a daily growth deposit has been clearly identified for this species (Chauvaud et al., 1998; Lorrain et al., 2000), allowing accurate environmental calibration. Indeed, using this species, a date can be assigned to each growth increment, and environmental parameters can consequently be compared closely (at a daily scale) with observed chemical and structural shell variations. This daily record provides an unequivocal basis for calibrating proxies. Isotopic (δ13C and δ15N) and trace element analyses (LA-ICP-MS) were performed on several individuals and in different years, depending on the analysed parameter. Seawater parameters measured one meter above the sea bottom were compared with chemical variations in the calcitic shell. The comparison showed that even with a daily basis for data interpretation, calibration remains a challenge. Inter-individual variations are found, and correlations are not always reproducible from one year to the next. The first explanation could be an inaccurate appreciation of the animal's proximate environment; notably, the water-sediment interface may best represent the environment of Pecten maximus. Secondly, physiological factors could account for these discrepancies. In particular, calcification takes place in the extrapallial fluid, whose composition may differ markedly from the external environment. Accurate calibration of chemical proxies should consider biological aspects to gain better insight into the processes controlling the incorporation of these chemical elements. Characterising the isotopic and trace element composition of the extrapallial fluid and hemolymph could greatly help our understanding of chemical shell variations.

  10. Onboard Autonomous Corrections for Accurate IRF Pointing.

    NASA Astrophysics Data System (ADS)

    Jorgensen, J. L.; Betto, M.; Denver, T.

    2002-05-01

    filtered GPS updates, a world time clock, astrometric correction tables, and an attitude output transform system, which allow the ASC to deliver the spacecraft attitude relative to the Inertial Reference Frame (IRF) in real time. This paper describes the operation of the onboard autonomy of the ASC, which in real time removes the residuals from the attitude measurements, whereby a timely IRF attitude at the arcsecond level is delivered to the AOCS (or sent to ground). A discussion of achievable robustness and accuracy is given and compared with in-flight results from the operation of the two Advanced Stellar Compasses (ASCs) flying in LEO onboard the German geo-potential research satellite CHAMP. The ASCs onboard CHAMP are dual-head versions, i.e., each processing unit is attached to two star camera heads. The dual-head configuration is primarily employed to achieve carefree AOCS control with respect to the Sun, Moon, and Earth, and to increase the attitude accuracy, but it also enables onboard estimation and removal of thermally generated biases.

  11. AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz

    SciTech Connect

    Perley, R. A.; Butler, B. J. E-mail: BButler@nrao.edu

    2013-02-15

    We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ~5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
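Calibrator spectra of this kind are typically expressed as a polynomial in the logarithm of frequency. A minimal sketch of evaluating such a log-polynomial model follows; the coefficients are placeholders for illustration, not the paper's fitted values for any source:

```python
import math

# Spectral model: log10(S/Jy) = a0 + a1*x + a2*x^2 + a3*x^3, x = log10(f/GHz).
# Placeholder coefficients only (NOT the paper's fitted values).
coeffs = [1.25, -0.45, -0.18, 0.04]

def flux_density_jy(freq_ghz):
    """Evaluate the log-polynomial spectral model; returns flux density in Jy."""
    x = math.log10(freq_ghz)
    log_s = sum(a * x**n for n, a in enumerate(coeffs))
    return 10.0 ** log_s
```

At 1 GHz the model reduces to `10**coeffs[0]`, and the negative higher-order terms give the falling spectrum typical of these calibrators.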

  12. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    A new time-accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M ≤ 10^(-10) to supersonic speeds.

  13. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    ERIC Educational Resources Information Center

    Deliyski, Dimitar D.; Hillman, Robert E.; Mehta, Daryush D.

    2015-01-01

    Purpose: The authors discuss the rationale behind the term "laryngeal high-speed videoendoscopy" to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method: Commentary on the advantages of using accurate and consistent terminology in the field of voice research is…

  14. Accurate Delayed Matching-to-Sample Responding without Rehearsal: An Unintentional Demonstration with Children.

    PubMed

    Ratkos, Thom; Frieder, Jessica E; Poling, Alan

    2016-06-01

    Research on joint control has focused on mediational responses, in which simultaneous stimulus control from two sources leads to the emission of a single response, such as choosing a comparison stimulus in delayed matching-to-sample. Most recent studies of joint control have examined the role of verbal mediators (i.e., rehearsal) in evoking accurate performance, and they suggest that mediation is necessary for accurate delayed matching-to-sample responding. We designed an experiment to establish covert rehearsal responses in young children. Before participants were taught such responses, however, we observed that they responded accurately at delays of 15 and 30 s without overt rehearsal. These findings suggest that in some cases, rehearsal is not necessary for accurate responding in such tasks. PMID:27606223

  15. Using logistic regression to describe the length of breastfeeding: a study in Guadalajara, Mexico.

    PubMed

    Gonzalez-Perez, G J; Vega-Lopez, M G; Cabrera-Pivaral, C

    1998-12-01

    This study seeks, through a logistic regression model, to describe the pattern of breastfeeding duration in Guadalajara, Mexico, during 1993. A multistage random sample of children under 1 year of age (n = 1036) was studied; observational data regarding breastfeeding duration, obtained through a "status quo" procedure, were compared with prevalence rates obtained from the logistic regression model. Modeling the duration of breastfeeding during the first year of life rather than only analyzing observational data helps researchers to understand this process in a dynamic and quantitative way. For example, uncommon indicators of breastfeeding were derived from the model. These indicators are impossible to obtain from observational data. The prevalence curve estimated through the logistic model was adequately fitted to observed data: there were no significant differences between the number or distribution of breastfed infants observed and those predicted by the model. Moreover, the model revealed that less than 40% of the children were breastfed in the fourth month of life; the median age for weaning was 39.3 days; 55% of the potential breastfeeding in the first 4 months did not occur; and the greatest abandonment of breastfeeding in the first 4 months was observed in the first 60 days. Thus, logistic regression seems a suitable option to construct a population-based model that describes breastfeeding duration during the first year of life. The indicators derived from the model offer health care providers valuable information for developing programs that promote breastfeeding. PMID:10205448
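The logistic model described above, and the indicators derived from it (such as the median weaning age), can be sketched as follows. The coefficients are hypothetical, since the abstract does not report the fitted values:

```python
import math

# Hypothetical fitted coefficients (illustrative only): log-odds of an infant
# still being breastfed as a linear function of age in days.
b0, b1 = 1.2, -0.031  # intercept, slope per day

def prevalence(age_days):
    """Model-predicted proportion of infants still breastfed at a given age."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * age_days)))

# Median breastfeeding duration: the age at which prevalence crosses 50%,
# i.e. where b0 + b1 * t = 0. This is one of the indicators that cannot be
# read directly from status-quo observational data.
median_age = -b0 / b1
```

With these placeholder coefficients the median falls near 39 days; the model curve can then be compared against the observed prevalence at each age.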

  16. Accurate deterministic solutions for the classic Boltzmann shock profile

    NASA Astrophysics Data System (ADS)

    Yue, Yubei

    The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  17. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest for understanding biological flight mechanics and for their application to Micro Air Vehicles (MAVs), which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that weigh only a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow, which requires thick laser sheets and short interframe times. To quantify and address these challenges, we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  18. Accurate mass spectrometry based protein quantification via shared peptides.

    PubMed

    Dost, Banu; Bandeira, Nuno; Li, Xiangqian; Shen, Zhouxin; Briggs, Steven P; Bafna, Vineet

    2012-04-01

    In mass spectrometry-based protein quantification, peptides that are shared across different protein sequences are often discarded as being uninformative with respect to each of the parent proteins. We investigate the use of shared peptides, which are ubiquitous (~50% of peptides) in mass spectrometric datasets, for accurate protein identification and quantification. Unlike existing approaches, we show how shared peptides can help compute the relative amounts of the proteins that contain them. Also, proteins with no unique peptide in the sample can still be analyzed for relative abundance. Our approach uses shared peptides in protein quantification and makes use of combinatorial optimization to reduce the error in relative abundance measurements. We describe the topological and numerical properties required for robust estimates, and use them to improve our estimates for ill-conditioned systems. Extensive simulations validate our approach even in the presence of experimental error. We apply our method to a model of Arabidopsis thaliana root knot nematode infection, and investigate the differential role of several protein family members in mediating host response to the pathogen. PMID:22414154
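The paper's combinatorial optimization is not specified in the abstract, but the core idea of letting a shared peptide constrain the abundances of all its parent proteins can be sketched with a toy least-squares system:

```python
import numpy as np

# Toy incidence matrix (illustrative, not from the paper). Rows: peptides;
# columns: proteins A and B. Peptide 1 is unique to A, peptide 2 is shared
# by A and B, peptide 3 is unique to B.
A = np.array([
    [1.0, 0.0],
    [1.0, 1.0],
    [0.0, 1.0],
])

# Observed peptide intensities, assumed proportional to the summed abundance
# of the proteins containing each peptide.
y = np.array([2.0, 5.0, 3.0])

# Least-squares estimate of the protein abundances: the shared peptide
# contributes information to both proteins instead of being discarded.
abundances, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Here the system is consistent, so the estimate recovers abundances of 2 and 3; with noisy data the same formulation returns the least-squares compromise, and the conditioning of the incidence matrix determines how robust that estimate is.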

  19. Slim hole MWD tool accurately measures downhole annular pressure

    SciTech Connect

    Burban, B.; Delahaye, T.

    1994-02-14

    Measurement-while-drilling of downhole pressure accurately determines annular pressure losses from circulation and drillstring rotation and helps monitor swab and surge pressures during tripping. In early 1993, two slim-hole wells (3.4 in. and 3 in. diameter) were drilled with continuous real-time electromagnetic wave transmission of downhole temperature and annular pressure. The data were obtained during all stages of the drilling operation and proved useful for operations personnel. The use of real-time measurements demonstrated the characteristic hydraulic effects of pressure surges induced by drillstring rotation in the small slim-hole annulus under field conditions. The interest in this information is not restricted to the slim-hole geometry. Monitoring or estimating downhole pressure is a key element for drilling operations. Except in special cases, no real-time measurements of downhole annular pressure during drilling and tripping have been used on an operational basis. The hydraulic effects are significant in conventional-geometry wells (3 1/2-in. drill pipe in a 6-in. hole). This paper describes the tool and the results from the field test.

  20. Accurate transition rates for intercombination lines of singly ionized nitrogen

    SciTech Connect

    Tayal, S. S.

    2011-01-15

    The transition energies and rates for the 2s²2p² ³P₁,₂ - 2s2p³ ⁵S₂ᵒ and 2s²2p3s - 2s²2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately the term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p³ ¹,³P₁ᵒ and 2s²2p3s ¹,³P₁ᵒ levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and the two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of the present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fractions, and lifetimes have been compared with previous calculations and experiments.

  1. Accurate, Automated Detection of Atrial Fibrillation in Ambulatory Recordings.

    PubMed

    Linker, David T

    2016-06-01

    A highly accurate, automated algorithm would facilitate cost-effective screening for asymptomatic atrial fibrillation. This study analyzed a new algorithm and compared it to existing techniques. The incremental benefit of each step in refinement of the algorithm was measured, and the algorithm was compared to other methods using the Physionet atrial fibrillation and normal sinus rhythm databases. When analyzing segments of 21 RR intervals or less, the algorithm had a significantly higher area under the receiver operating characteristic curve (AUC) than the other algorithms tested. At analysis segment sizes of up to 101 RR intervals, the algorithm continued to have a higher AUC than any of the other methods tested, although the difference from the second best other algorithm was no longer significant, with an AUC of 0.9992 with a 95% confidence interval (CI) of 0.9986-0.9998, vs. 0.9986 (CI 0.9978-0.9994). With identical per-subject sensitivity, per-subject specificity of the current algorithm was superior to the other tested algorithms even at 101 RR intervals, with no false positives (CI 0.0-0.8%) vs. 5.3% false positives for the second best algorithm (CI 3.4-7.9%). The described algorithm shows great promise for automated screening for atrial fibrillation by reducing false positives requiring manual review, while maintaining high sensitivity. PMID:26850411
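The abstract does not give the algorithm's details; as a hedged toy illustration of the general idea behind RR-interval-based screening, the sketch below scores short segments by RR irregularity and computes the AUC via the Mann-Whitney statistic. This is not the paper's algorithm:

```python
import random

def irregularity(rr):
    """Mean absolute successive RR-interval difference, normalized by the
    mean RR interval: a simple irregularity score (illustrative only)."""
    diffs = [abs(a - b) for a, b in zip(rr, rr[1:])]
    return sum(diffs) / len(diffs) / (sum(rr) / len(rr))

def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

random.seed(0)
# Synthetic segments of 21 RR intervals (seconds): sinus rhythm is nearly
# regular, atrial fibrillation is highly irregular.
sinus = [[0.8 + random.gauss(0, 0.01) for _ in range(21)] for _ in range(50)]
afib = [[random.uniform(0.4, 1.2) for _ in range(21)] for _ in range(50)]
score_af = [irregularity(s) for s in afib]
score_ns = [irregularity(s) for s in sinus]
```

On such cleanly separated synthetic data the AUC approaches 1; the hard part the paper addresses is keeping false positives low on real ambulatory recordings.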

  2. How Clean Are Hotel Rooms? Part I: Visual Observations vs. Microbiological Contamination.

    PubMed

    Almanza, Barbara A; Kirsch, Katie; Kline, Sheryl Fried; Sirsat, Sujata; Stroia, Olivia; Choi, Jin Kyung; Neal, Jay

    2015-01-01

    Current evidence of hotel room cleanliness is based on observation rather than empirically based microbial assessment. The purpose of the study described here was to determine if observation provides an accurate indicator of cleanliness. Results demonstrated that visual assessment did not accurately predict microbial contamination. Although testing standards have not yet been established for hotel rooms and will be evaluated in Part II of the authors' study, potential microbial hazards included the sponge and mop (housekeeping cart), toilet, bathroom floor, bathroom sink, and light switch. Hotel managers should increase cleaning in key areas to reduce guest exposure to harmful bacteria. PMID:26427262

  3. Accurate Evaluation of Ion Conductivity of the Gramicidin A Channel Using a Polarizable Force Field without Any Corrections.

    PubMed

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui

    2016-06-14

    Classical molecular dynamics (MD) simulations of membrane proteins face significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, owing to their inability to describe precisely the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared with the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K(+) and Na(+) passing through the gA channel are much closer to the experimental results than those from any classical MD simulation, demonstrating the power of AMOEBA for investigating membrane proteins. PMID:27171823

  4. Accurate and reproducible detection of proteins in water using an extended-gate type organic transistor biosensor

    NASA Astrophysics Data System (ADS)

    Minamiki, Tsukuru; Minami, Tsuyoshi; Kurita, Ryoji; Niwa, Osamu; Wakida, Shin-ichi; Fukuda, Kenjiro; Kumaki, Daisuke; Tokito, Shizuo

    2014-06-01

    In this Letter, we describe an accurate antibody detection method using a fabricated extended-gate type organic field-effect transistor (OFET), which can be operated below 3 V. The protein-sensing portion of the designed device is the gate electrode functionalized with streptavidin. Streptavidin possesses high molecular recognition ability for biotin, which specifically allows for the detection of biotinylated proteins. Here, we attempted to detect biotinylated immunoglobulin G (IgG) and observed a shift in the threshold voltage of the OFET upon addition of the antibody in an aqueous solution containing a competing bovine serum albumin interferent. The detection limit for the biotinylated IgG was 8 nM, which indicates the potential utility of the designed device in healthcare applications.

  5. A particle-tracking approach for accurate material derivative measurements with tomographic PIV

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Scarano, Fulvio

    2013-08-01

    The evaluation of the instantaneous 3D pressure field from tomographic PIV data relies on the accurate estimate of the fluid velocity material derivative, i.e., the velocity time rate of change following a given fluid element. To date, techniques that reconstruct the fluid parcel trajectory from a time sequence of 3D velocity fields obtained with Tomo-PIV have already been introduced. However, an accurate evaluation of the fluid element acceleration requires trajectory reconstruction over a relatively long observation time, which reduces random errors. On the other hand, simple integration and finite difference techniques suffer from increasing truncation errors when complex trajectories need to be reconstructed over a long time interval. In principle, particle-tracking velocimetry techniques (3D-PTV) enable the accurate reconstruction of single particle trajectories over a long observation time. Nevertheless, PTV can be reliably performed only at limited particle image number density due to errors caused by overlapping particles. The particle image density can be substantially increased by use of tomographic PIV. In the present study, a technique to combine the higher information density of tomographic PIV and the accurate trajectory reconstruction of PTV is proposed (Tomo-3D-PTV). The particle-tracking algorithm is applied to the tracers detected in the 3D domain obtained by tomographic reconstruction. The 3D particle information is highly sparse, and intersection of trajectories is virtually impossible; as a result, ambiguities in the particle path identification over subsequent recordings are easily avoided. Polynomial fitting functions are introduced that describe the particle position in time over sequences of several recordings, leading to the reduction in truncation errors for complex trajectories. Moreover, the polynomial regression approach provides a reduction in the random errors due to the particle position measurement. Finally, the acceleration
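The polynomial-regression idea can be sketched on a synthetic one-dimensional track: fit the particle position over several recordings, then differentiate the fit twice to obtain the Lagrangian acceleration (the material derivative follows the particle). Parameters are illustrative:

```python
import numpy as np

# Synthetic particle track: positions from a known trajectory
# x(t) = 0.5 * a * t^2 with a = 2.0, plus small measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 11)          # 11 recordings
x_true = 0.5 * 2.0 * t**2
x_meas = x_true + rng.normal(0.0, 1e-4, t.size)

# Fit a second-order polynomial to the whole track; regression over many
# recordings averages out the random position errors.
coeffs = np.polyfit(t, x_meas, deg=2)

# Differentiate the fit twice: d^2/dt^2 of (c0*t^2 + c1*t + c2) is 2*c0.
accel = 2.0 * coeffs[0]
```

A finite difference of just two noisy samples would amplify the noise; the fit over eleven recordings recovers the true acceleration closely.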

  6. ALOS-PALSAR multi-temporal observation for describing land use and forest cover changes in Malaysia

    NASA Astrophysics Data System (ADS)

    Avtar, R.; Suzuki, R.; Ishii, R.; Kobayashi, H.; Nagai, S.; Fadaei, H.; Hirata, R.; Suhaili, A. B.

    2012-12-01

    The establishment of plantations in the carbon-rich peatlands of Southeast Asia has increased in the past decade. The need to support development in countries such as Malaysia is reflected in a high rate of conversion of forested areas to agricultural land use, in particular oil palm plantations. Use of optical data to monitor changes in peatland forests is difficult because of high cloudiness in the tropics. Synthetic Aperture Radar (SAR) based remote sensing can potentially be used to monitor changes in such forested landscapes. In this study, we demonstrate the capability of multi-temporal Fine-Beam Dual (FBD) data from the Phased Array L-band Synthetic Aperture Radar (PALSAR) to detect conversion of peatland forest cover to other land uses such as oil palm plantation. The backscattering properties of the radar were evaluated to estimate changes in forest cover. Temporal analysis of PALSAR FBD data shows that conversion of peatland forest to oil palm can be detected by analyzing changes in the values of σ°HH and σ°HV. Areas under peat forest are characterized by high values of σ°HH (-7.89 dB) and σ°HV (-12.13 dB). The value of σ°HV decreased by about 2-4 dB after conversion of peatland to a plantation area, and the ratio σ°HH/σ°HV increased. Changes in σ°HV are more prominent for identifying peatland conversion than changes in σ°HH. The results indicate the potential of PALSAR to detect peatland forest conversion by thresholding σ°HV or σ°HH/σ°HV, which would improve our understanding of temporal change and its effect on the peatland forest ecosystem.
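The thresholding step described above amounts to flagging pixels whose cross-polarized backscatter drops by more than the reported 2 dB between acquisitions. A minimal sketch, with toy backscatter values loosely based on the figures quoted in the abstract:

```python
import numpy as np

# Toy sigma0_HV images in dB for two dates (illustrative 2x2 pixel values:
# about -12 dB for intact peat forest, a 2-4 dB drop after conversion).
hv_before = np.array([[-12.1, -12.0],
                      [-12.2, -12.1]])
hv_after = np.array([[-12.1, -15.3],
                     [-12.0, -14.8]])

# Flag pixels whose sigma0_HV dropped by more than 2 dB as likely
# peatland-to-plantation conversion.
converted = (hv_before - hv_after) > 2.0
```

In practice the same thresholding could be applied to the HH/HV ratio, and speckle filtering would precede any per-pixel decision.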

  7. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    PubMed Central

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
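The paper's estimator fits the full distribution of graph geodesic distances; as a simpler, self-contained illustration of distance-based ID estimation, the sketch below uses the TWO-NN maximum-likelihood estimator (a different, simpler technique based only on each point's two nearest neighbours) on a 2D manifold embedded in 3D:

```python
import math
import random

def two_nn_id(points):
    """TWO-NN intrinsic-dimension estimate: the ratio mu = r2/r1 of each
    point's second- to first-nearest-neighbour distance follows a Pareto law
    with exponent equal to the ID, giving the MLE d = N / sum(log mu_i).
    (A stand-in for the paper's geodesic-distribution fit.)"""
    logs = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        logs.append(math.log(d[1] / d[0]))
    return len(points) / sum(logs)

random.seed(42)
# 500 points sampled uniformly from a 2D square embedded in 3D (z = 0):
# the estimate should recover ID close to 2 despite the 3D embedding.
pts = [(random.random(), random.random(), 0.0) for _ in range(500)]
```

Like the paper's estimator, this one is insensitive to the embedding dimension; unlike it, it uses only very local distances rather than graph geodesics.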

  8. Accurate 3D reconstruction of complex blood vessel geometries from intravascular ultrasound images: in vitro study.

    PubMed

    Subramanian, K R; Thubrikar, M J; Fowler, B; Mostafavi, M T; Funk, M W

    2000-01-01

    We present a technique that accurately reconstructs complex three-dimensional blood vessel geometry from 2D intravascular ultrasound (IVUS) images. Biplane x-ray fluoroscopy is used to image the ultrasound catheter tip at a few key points along its path as the catheter is pulled through the blood vessel. An interpolating spline describes the continuous catheter path. The IVUS images are located orthogonal to the path, resulting in a non-uniform structured scalar volume of echo densities. Isocontour surfaces are used to view the vessel geometry, while transparency and clipping enable interactive exploration of interior structures. The two geometries studied are a bovine artery vascular graft having a U-shape and a constriction, and a canine carotid artery having multiple branches and a constriction. Accuracy of the reconstructions is established by comparing the reconstructions to (1) silicone moulds of the vessel interior, (2) biplane x-ray images, and (3) the original echo images. Excellent shape and geometry correspondence was observed in both geometries. Quantitative measurements made at key locations of the 3D reconstructions were also in good agreement with those made in silicone moulds. The proposed technique is easily adoptable in clinical practice, since it uses x-rays with minimal exposure and existing IVUS technology. PMID:11105284
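
    The path-interpolation step can be illustrated with a simple interpolating spline. The sketch below uses a Catmull-Rom cubic as a stand-in for whichever spline the authors used, through hypothetical 3D catheter-tip positions; it only demonstrates that the curve passes through the measured stations.

```python
# Sketch of the path-interpolation step only: a Catmull-Rom cubic (a simple
# stand-in for the paper's interpolating spline) through hypothetical 3D
# catheter-tip positions recovered from biplane x-ray.
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Point on the cubic segment between p1 and p2 at parameter t in [0, 1]."""
    t2, t3 = t * t, t ** 3
    return 0.5 * (2 * p1 + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

# Hypothetical tip positions (mm) at four pullback stations
pts = np.array([[0, 0, 0], [2, 1, 10], [3, 4, 20], [2, 8, 30]], float)
segment = np.array([catmull_rom(*pts, t) for t in np.linspace(0.0, 1.0, 11)])
print(np.allclose(segment[0], pts[1]), np.allclose(segment[-1], pts[2]))  # True True
```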

  9. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets.

    PubMed

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant "collective" variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  10. Antiferromagnetic Heisenberg spin-1 chain: Magnetic susceptibility of the Haldane chain described using scaling

    NASA Astrophysics Data System (ADS)

    Souletie, Jean; Drillon, Marc; Rabu, Pierre; Pati, Swapan K.

    2004-08-01

    The phenomenological expression χT/(Ng²μB²/k) = C1(n)exp(-W1(n)/T) + C2(n)exp(-W2(n)/T) describes very accurately the temperature dependence of the magnetic susceptibility computed for antiferromagnetic rings of Heisenberg spins S=1, whose size n is even and ranges from 6 to 20. This expression has been obtained through a strategy justified by scaling considerations together with finite-size numerical calculations. For large n, the coefficients of the expression converge towards C1=0.125, W1=0.451J, C2=0.564, W2=1.793J (J is the exchange constant), which are appropriate for describing the susceptibility of the spin-1 Haldane chain. The Curie constant, the paramagnetic Curie-Weiss temperature, the correlation length at T=0, and the Haldane gap are found to be closely related to these coefficients. With this expression, a very good description of the magnetic behavior of Y2BaNiO5 and of Ni(C2H8N2)2NO2ClO4 (NENP), the archetype of the Haldane gap systems, is achieved over the whole temperature range.
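
    With the converged large-n coefficients quoted in the abstract, the fit can be evaluated directly. A minimal sketch: temperature is taken in units of J/k, and the function returns the reduced quantity χT/(Ng²μB²/k).

```python
# Minimal sketch: the quoted two-exponential fit in the large-n (Haldane
# chain) limit, using the converged coefficients from the abstract.
import math

C1, W1 = 0.125, 0.451    # first activated term (energies in units of J)
C2, W2 = 0.564, 1.793    # second activated term

def reduced_chiT(T):
    """Reduced susceptibility chi*T/(N g^2 muB^2 / k) at temperature T (units J/k)."""
    return C1 * math.exp(-W1 / T) + C2 * math.exp(-W2 / T)

# The high-temperature limit recovers the Curie constant C1 + C2 = 0.689:
print(round(reduced_chiT(1e6), 3))   # -> 0.689
```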

  11. Describing Myxococcus xanthus Aggregation Using Ostwald Ripening Equations for Thin Liquid Films

    PubMed Central

    Bahar, Fatmagül; Pratt-Szeliga, Philip C.; Angus, Stuart; Guo, Jiaye; Welch, Roy D.

    2014-01-01

    When starved, a swarm of millions of Myxococcus xanthus cells coordinate their movement from outward swarming to inward coalescence. The cells then execute a synchronous program of multicellular development, arranging themselves into dome-shaped aggregates. Over the course of development, about half of the initial aggregates disappear, while others persist and mature into fruiting bodies. This work seeks to develop a quantitative model for aggregation that accurately predicts which aggregates will disappear and which will persist. We analyzed time-lapse movies of M. xanthus development, modeled aggregation using the equations that describe Ostwald ripening of droplets in thin liquid films, and predicted the disappearance and persistence of aggregates with an average accuracy of 85%. We then experimentally validated a prediction that is fundamental to this model by tracking individual fluorescent cells as they moved between aggregates and demonstrating that cell movement towards and away from aggregates correlates with aggregate disappearance. Describing development through this model may limit the number and type of molecular genetic signals needed to complete M. xanthus development, and it provides numerous additional testable predictions. PMID:25231319

  12. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective: Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods: A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results: The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics, including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions: MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
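
    The two-source idea can be illustrated with a toy simulation: one Gaussian term perturbs the stimulus before a sigmoidal recruitment curve, and a second multiplicative term acts on its output. This is not the authors' regression model, and all parameter values below are hypothetical.

```python
# Toy simulation of the dual-variability-source idea (not the authors' model):
# additive noise before a sigmoidal recruitment curve, multiplicative
# log-normal noise after it. Parameter values are hypothetical.
import math
import random

def simulate_mep(x, x0=50.0, slope=5.0, vmax=2.0, sd_in=3.0, sd_out=0.15):
    x_eff = x + random.gauss(0.0, sd_in)               # variability before the sigmoid
    recruit = vmax / (1.0 + math.exp(-(x_eff - x0) / slope))
    return recruit * math.exp(random.gauss(0.0, sd_out))  # variability after it

random.seed(1)
mid = [simulate_mep(50.0) for _ in range(2000)]        # steep part of the IO curve
sat = [simulate_mep(90.0) for _ in range(2000)]        # saturated part

def cv(v):                                             # coefficient of variation
    m = sum(v) / len(v)
    return (sum((y - m) ** 2 for y in v) / len(v)) ** 0.5 / m

# Relative spread is larger on the steep part, where input-side noise dominates:
print(cv(mid) > cv(sat))   # -> True
```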

  13. Using the GVB Ansatz to develop ensemble DFT method for describing multiple strongly correlated electron pairs.

    PubMed

    Filatov, Michael; Martínez, Todd J; Kim, Kwang S

    2016-08-21

    Ensemble density functional theory (DFT) furnishes a rigorous theoretical framework for describing the non-dynamic electron correlation arising from (near) degeneracy of several electronic configurations. Ensemble DFT naturally leads to fractional occupation numbers (FONs) for several Kohn-Sham (KS) orbitals, which thereby become variational parameters of the methodology. The currently available implementation of ensemble DFT in the form of the spin-restricted ensemble-referenced KS (REKS) method was originally designed for systems with only two fractionally occupied KS orbitals, which was sufficient to accurately describe dissociation of a single chemical bond or the singlet ground state of biradicaloid species. To extend the applicability of the method to systems with several dissociating bonds or to polyradical species, more fractionally occupied orbitals must be included in the ensemble description. Here we investigate the possibility of developing an extended REKS methodology with the help of the generalized valence bond (GVB) wavefunction theory. The use of GVB enables one to derive a simple and physically transparent energy expression depending explicitly on the FONs of several KS orbitals. In this way, a version of the REKS method with four electrons in four fractionally occupied orbitals is derived and its accuracy in the calculation of various types of strongly correlated molecules is investigated. We propose a possible scheme to ameliorate the partial size-inconsistency that results from perfect spin-pairing. We conjecture that perfect pairing natural orbital (NO) functionals of reduced density matrix functional theory (RDMFT) should also display partial size-inconsistency. PMID:26947515

  14. HOW ACCURATE IS OUR KNOWLEDGE OF THE GALAXY BIAS?

    SciTech Connect

    More, Surhud

    2011-11-01

    Observations of the clustering of galaxies can provide useful information about the distribution of dark matter in the universe. In order to extract accurate cosmological parameters from galaxy surveys, it is important to understand how the distribution of galaxies is biased with respect to the matter distribution. The large-scale bias of galaxies can be quantified either by directly measuring the large-scale (λ ≳ 60 h⁻¹ Mpc) power spectrum of galaxies or by modeling the halo occupation distribution of galaxies using their clustering on small scales (λ ≲ 30 h⁻¹ Mpc). We compare the luminosity dependence of the galaxy bias (both the shape and the normalization) obtained by these methods and check for consistency. Our comparison reveals that the bias of galaxies obtained by the small-scale clustering measurements is systematically larger than that obtained from the large-scale power spectrum methods. We also find systematic discrepancies in the shape of the galaxy-bias-luminosity relation. We comment on the origin and possible consequences of these discrepancies, which had remained unnoticed thus far.

  15. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; et al

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  16. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    SciTech Connect

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  17. Accurate quantification of supercoiled DNA by digital PCR.

    PubMed

    Dong, Lianhua; Yoo, Hee-Bong; Wang, Jing; Park, Sang-Ryoul

    2016-01-01

    Digital PCR (dPCR) as an enumeration-based quantification method is capable of quantifying the DNA copy number without the help of standards. However, it can generate false results when the PCR conditions are not optimized. A recent international comparison (CCQM P154) showed that most laboratories significantly underestimated the concentration of supercoiled plasmid DNA by dPCR. Mostly, supercoiled DNAs are linearized before dPCR to avoid such underestimations. The present study was conducted to overcome this problem. In the bilateral comparison, the National Institute of Metrology, China (NIM) optimized and applied dPCR for supercoiled DNA determination, whereas the Korea Research Institute of Standards and Science (KRISS) prepared the unknown samples and quantified them by flow cytometry. In this study, several factors, such as the choice of PCR master mix, the fluorescent label, and the position of the primers, were evaluated for quantifying supercoiled DNA by dPCR. This work confirmed that a 16S PCR master mix avoided poor amplification of the supercoiled DNA, whereas HEX labels on the dPCR probes resulted in robust amplification curves. Optimizing the dPCR assay based on these two observations resulted in accurate quantification of supercoiled DNA without preanalytical linearization. The result was in close agreement (101-113%) with that from flow cytometry. PMID:27063649
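
    For context, the enumeration principle behind dPCR quantification is a Poisson correction of the positive-partition fraction, since a positive partition may contain more than one copy. A minimal sketch with hypothetical partition counts (not data from this study):

```python
# General dPCR principle (not specific to this study's assay): convert the
# fraction of positive partitions to a copy concentration via a Poisson
# correction, lambda = -ln(1 - p).
import math

def dpcr_copies(n_positive, n_partitions):
    """Return mean copies per partition and estimated total copies."""
    lam = -math.log(1.0 - n_positive / n_partitions)
    return lam, lam * n_partitions

lam, total = dpcr_copies(6000, 20000)    # hypothetical partition counts
print(round(lam, 4), round(total))       # -> 0.3567 7133
```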

  18. CLOMP: Accurately Characterizing OpenMP Application Overheads

    SciTech Connect

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B R

    2008-11-10

    Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy-to-understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. We also show that CLOMP captures limitations of OpenMP parallelization on SMT and NUMA systems. Finally, CLOMPI, our MPI extension of CLOMP, demonstrates which aspects of OpenMP interact poorly with MPI when MPI helper threads cannot run on the NIC.
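
    CLOMP itself is a C/OpenMP benchmark; as a loose, language-level analogy of what it measures, the sketch below times a tiny parallel dispatch against its serial equivalent. Python threads stand in for OpenMP threads here, so the absolute numbers are not comparable to CLOMP's.

```python
# Loose analogy only: CLOMP measures OpenMP region overheads in C; this sketch
# shows the underlying question -- how much work a parallel region needs
# before the dispatch overhead is amortized.
from concurrent.futures import ThreadPoolExecutor
import time

def work(n):
    """Tiny busy loop standing in for a parallel loop body."""
    s = 0
    for i in range(n):
        s += i
    return s

def timed(f, reps=20):
    t0 = time.perf_counter()
    for _ in range(reps):
        f()
    return (time.perf_counter() - t0) / reps

with ThreadPoolExecutor(max_workers=4) as pool:
    serial = timed(lambda: [work(1000) for _ in range(4)])
    threaded = timed(lambda: list(pool.map(work, [1000] * 4)))

# For bodies this small, dispatch cost usually dominates the threaded version.
print(f"serial {serial * 1e6:.1f} us, threaded {threaded * 1e6:.1f} us per region")
```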

  19. Accurate quantification of supercoiled DNA by digital PCR

    PubMed Central

    Dong, Lianhua; Yoo, Hee-Bong; Wang, Jing; Park, Sang-Ryoul

    2016-01-01

    Digital PCR (dPCR) as an enumeration-based quantification method is capable of quantifying the DNA copy number without the help of standards. However, it can generate false results when the PCR conditions are not optimized. A recent international comparison (CCQM P154) showed that most laboratories significantly underestimated the concentration of supercoiled plasmid DNA by dPCR. Mostly, supercoiled DNAs are linearized before dPCR to avoid such underestimations. The present study was conducted to overcome this problem. In the bilateral comparison, the National Institute of Metrology, China (NIM) optimized and applied dPCR for supercoiled DNA determination, whereas the Korea Research Institute of Standards and Science (KRISS) prepared the unknown samples and quantified them by flow cytometry. In this study, several factors, such as the choice of PCR master mix, the fluorescent label, and the position of the primers, were evaluated for quantifying supercoiled DNA by dPCR. This work confirmed that a 16S PCR master mix avoided poor amplification of the supercoiled DNA, whereas HEX labels on the dPCR probes resulted in robust amplification curves. Optimizing the dPCR assay based on these two observations resulted in accurate quantification of supercoiled DNA without preanalytical linearization. The result was in close agreement (101-113%) with that from flow cytometry. PMID:27063649

  20. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.

  1. Novel Cortical Thickness Pattern for Accurate Detection of Alzheimer's Disease.

    PubMed

    Zheng, Weihao; Yao, Zhijun; Hu, Bin; Gao, Xiang; Cai, Hanshu; Moore, Philip

    2015-01-01

    Brain networks play an important role in representing abnormalities in Alzheimer's disease (AD) and mild cognitive impairment (MCI). Currently, most studies have focused only on morphological features of regions of interest without exploring the interregional alterations. In order to investigate the potential discriminative power of a morphological network in AD diagnosis and to provide supportive evidence on the feasibility of an individual structural network study, we propose a novel approach of extracting the correlative features from magnetic resonance imaging, which consists of a two-step approach for constructing an individual thickness network with low computational complexity. Firstly, multi-distance combination is utilized for accurate evaluation of between-region dissimilarity; the dissimilarity is then transformed to connectivity via calculation of a correlation function. An evaluation of the proposed approach has been conducted with 189 normal controls, 198 MCI subjects, and 163 AD patients using machine learning techniques. Results show that the observed correlative feature offers a significant improvement in classification performance compared with cortical thickness, with an accuracy of 89.88% and an area under the receiver operating characteristic curve of 0.9588. We further improved the performance by integrating both thickness and apolipoprotein E ɛ4 allele information with correlative features. The new accuracies are 92.11% and 79.37% in separating AD from normal controls and AD converters from non-converters, respectively. Differences between using diverse distance measurements and various correlation transformation functions are also discussed to explore an optimal way for network establishment. PMID:26444768

  2. Rapid and accurate calculation of protein 1H, 13C and 15N chemical shifts.

    PubMed

    Neal, Stephen; Nip, Alex M; Zhang, Haiyan; Wishart, David S

    2003-07-01

    A computer program (SHIFTX) is described which rapidly and accurately calculates the diamagnetic 1H, 13C and 15N chemical shifts of both backbone and sidechain atoms in proteins. The program uses a hybrid predictive approach that employs pre-calculated, empirically derived chemical shift hypersurfaces in combination with classical or semi-classical equations (for ring current, electric field, hydrogen bond and solvent effects) to calculate 1H, 13C and 15N chemical shifts from atomic coordinates. The chemical shift hypersurfaces capture dihedral angle, sidechain orientation, secondary structure and nearest neighbor effects that cannot easily be translated to analytical formulae or predicted via classical means. The chemical shift hypersurfaces were generated using a database of IUPAC-referenced protein chemical shifts--RefDB (Zhang et al., 2003), and a corresponding set of high resolution (<2.1 Å) X-ray structures. Data mining techniques were used to extract the largest pairwise contributors (from a list of approximately 20 derived geometric, sequential and structural parameters) to generate the necessary hypersurfaces. SHIFTX is rapid (<1 CPU second for a complete shift calculation of 100 residues) and accurate. Overall, the program was able to attain a correlation coefficient (r) between observed and calculated shifts of 0.911 (1Halpha), 0.980 (13Calpha), 0.996 (13Cbeta), 0.863 (13CO), 0.909 (15N), 0.741 (1HN), and 0.907 (sidechain 1H) with RMS errors of 0.23, 0.98, 1.10, 1.16, 2.43, 0.49, and 0.30 ppm, respectively on test data sets. We further show that the agreement between observed and SHIFTX calculated chemical shifts can be an extremely sensitive measure of the quality of protein structures. Our results suggest that if NMR-derived structures could be refined using heteronuclear chemical shifts calculated by SHIFTX, their precision could approach that of the highest resolution X-ray structures. SHIFTX is freely available as a web server at http

  3. Evaluation of the MALDI-TOF MS profiling for identification of newly described Aeromonas spp.

    PubMed

    Vávrová, Andrea; Balážová, Tereza; Sedláček, Ivo; Tvrzová, Ludmila; Šedo, Ondrej

    2015-09-01

    The genus Aeromonas comprises primarily aquatic bacteria, including serious human and animal pathogens found in clinical material, drinking water, and food. Aeromonads are notable for their complex taxonomy and nomenclature and for the limited possibilities of identification to the species level. According to studies describing the use of MALDI-TOF MS in the diagnostics of aeromonads, this modern chemotaxonomical approach reveals a fairly high percentage of correctly identified isolates. We analyzed 64 Aeromonas reference strains from a set of 27 species. After extending the range of analyzed Aeromonas species to include newly described ones, we found that the MALDI-TOF MS procedure accompanied by the Biotyper tool is not a reliable diagnostic technique for aeromonads. We obtained a high percentage of false-positive, incorrect, and uncertain results. The identification of newly described species is accompanied by misidentifications, which were also observed in the case of pathogenic aeromonads. PMID:25520239

  4. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  5. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  6. On canonical cylinder sections for accurate determination of contact angle in microgravity

    NASA Technical Reports Server (NTRS)

    Concus, Paul; Finn, Robert; Zabihi, Farhad

    1992-01-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  7. Accurate determination of the interaction between Λ hyperons and nucleons from auxiliary field diffusion Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Lonardoni, D.; Pederiva, F.; Gandolfi, S.

    2014-01-01

    Background: An accurate assessment of the hyperon-nucleon interaction is of great interest in view of recent observations of very massive neutron stars. The challenge is to build a realistic interaction that can be used over a wide range of masses and in infinite matter starting from the available experimental data on the binding energy of light hypernuclei. To this end, accurate calculations of the hyperon binding energy in a hypernucleus are necessary. Purpose: We present a quantum Monte Carlo study of Λ and ΛΛ hypernuclei up to A = 91. We investigate the contribution of two- and three-body Λ-nucleon forces to the Λ binding energy. Method: Ground state energies are computed by solving the Schrödinger equation for nonrelativistic baryons by means of the auxiliary field diffusion Monte Carlo algorithm extended to the hypernuclear sector. Results: We show that a simple adjustment of the parameters of the ΛNN three-body force yields a very good agreement with available experimental data over a wide range of hypernuclear masses. In some cases no experiments have been performed yet, and we give new predictions. Conclusions: The newly fitted ΛNN force properly describes the physics of medium-heavy Λ hypernuclei, correctly reproducing the saturation property of the hyperon separation energy.

  8. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    NASA Astrophysics Data System (ADS)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard- Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makato Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Mikato Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. 
This will in turn lead to an

  9. Some strategies to address the challenges of collecting observational data in a busy clinical environment.

    PubMed

    Jackson, Debra; McDonald, Glenda; Luck, Lauretta; Waine, Melissa; Wilkes, Lesley

    2016-01-01

    Studies drawing on observational methods can provide vital data to enhance healthcare. However, collecting observational data in clinical settings is replete with challenges, particularly where multiple data-collecting observers are used. Observers collecting data require shared understanding and training to ensure data quality, and particularly, to confirm accurate and consistent identification, discrimination and recording of data. The aim of this paper is to describe strategies for preparing and supporting multiple researchers tasked with collecting observational data in a busy, and often unpredictable, hospital environment. We hope our insights might assist future researchers undertaking research in similar settings. PMID:27188039

  10. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2012-12-01

    Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and were used to solve entirely different problems. We show that by combining two classical models, namely Boussinesq's equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated far more accurately than with the latter alone. Under the assumption that the aquifer basis is sub-horizontal, the mean residence time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater residence time that can refine those obtained from tritium measurements. The approach is demonstrated in a case study predicting atrazine concentration trends in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the best agreement between observed and predicted time of trend reversal was reached for the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was, however, poorer for the springs displaying the most gentle recessions, possibly indicating the stronger influence of continuous groundwater recharge during the dry period.
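
    The exponential piston-flow model named above has a standard transit time distribution (in the Maloszewski-Zuber form): g(t) = (η/τ)·exp(−ηt/τ + η − 1) for t ≥ τ(1 − 1/η) and 0 otherwise, where τ is the mean transit time and η the ratio of total to exponential-flow volume. A minimal numerical check of its normalization and mean:

```python
# Sketch of the exponential piston-flow transit time distribution g(t).
# eta = 1 recovers the pure exponential model; tau is the mean transit time.
import numpy as np

def epm_ttd(t, tau, eta):
    g = (eta / tau) * np.exp(-eta * t / tau + eta - 1.0)
    return np.where(t >= tau * (1.0 - 1.0 / eta), g, 0.0)

t = np.linspace(0.0, 300.0, 300001)
g = epm_ttd(t, tau=10.0, eta=1.5)
dt = t[1] - t[0]
norm = float(((g[1:] + g[:-1]) * dt / 2).sum())              # should integrate to 1
mean = float((((t * g)[1:] + (t * g)[:-1]) * dt / 2).sum())  # should equal tau
print(round(norm, 3), round(mean, 2))   # -> 1.0 10.0
```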

  11. The challenge of accurately documenting bee species richness in agroecosystems: bee diversity in eastern apple orchards.

    PubMed

    Russo, Laura; Park, Mia; Gibbs, Jason; Danforth, Bryan

    2015-09-01

    Bees are important pollinators of agricultural crops, and bee diversity has been shown to be closely associated with pollination, a valuable ecosystem service. Higher functional diversity and species richness of bees have been shown to lead to higher crop yield. Bees simultaneously represent a mega-diverse taxon that is extremely challenging to sample thoroughly and an important group to understand because of pollination services. We sampled bees visiting apple blossoms in 28 orchards over 6 years. We used species rarefaction analyses to test for the completeness of sampling and the relationship between species richness and sampling effort, orchard size, and percent agriculture in the surrounding landscape. We performed more than 190 h of sampling, collecting 11,219 specimens representing 104 species. Despite the sampling intensity, we captured <75% of expected species richness at more than half of the sites. For most of these, the variation in bee community composition between years was greater than among sites. Species richness was influenced by percent agriculture, orchard size, and sampling effort, but we found no factors explaining the difference between observed and expected species richness. Competition between honeybees and wild bees did not appear to be a factor, as we found no correlation between honeybee and wild bee abundance. Our study shows that the pollinator fauna of agroecosystems can be diverse and challenging to thoroughly sample. We demonstrate that there is high temporal variation in community composition and that sites vary widely in the sampling effort required to fully describe their diversity. In order to maximize pollination services provided by wild bee species, we must first accurately estimate species richness. For researchers interested in providing this estimate, we recommend multiyear studies and rarefaction analyses to quantify the gap between observed and expected species richness. PMID:26380684

  12. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    NASA Astrophysics Data System (ADS)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    for inclusion in standard atmospheric and planetary spectroscopic databases. The methods involved in computing the ab initio potential energy and dipole moment surfaces included minor corrections to the equilibrium S-O distance, which produced good agreement with experimentally determined rotational energies. However, the purely ab initio method was not able to reproduce an equally spectroscopically accurate representation of vibrational motion. We therefore present an empirical refinement of this original ab initio potential surface, based on the available experimental data. This will not only be used to reproduce the room-temperature spectrum to a greater degree of accuracy, but is essential for the production of the larger, accurate line list necessary for simulating higher-temperature spectra: we aim for coverage suitable for T ≤ 800 K. Our preliminary studies on SO3 have also shown it to exhibit an interesting "forbidden" rotational spectrum and "clustering" of rotational states; to our knowledge this phenomenon has not been observed in other trigonal planar molecules and is an investigative avenue we wish to pursue. Finally, the IR absorption bands of SO2 and SO3 overlap strongly, and we intend to include SO2 as a complement to our studies in the near future.

  13. Accurate single-molecule FRET studies using multiparameter fluorescence detection.

    PubMed

    Sisamakis, Evangelos; Valeri, Alessandro; Kalinin, Stanislav; Rothwell, Paul J; Seidel, Claus A M

    2010-01-01

    In the recent decade, single-molecule (sm) spectroscopy has come of age and is providing important insight into how biological molecules function. So far our view of protein function has been formed, to a significant extent, by traditional structure determination showing many beautiful static protein structures. Recent experiments by single-molecule and other techniques have questioned the idea that proteins and other biomolecules are static structures. In particular, Förster resonance energy transfer (FRET) studies of single molecules have shown that biomolecules may adopt many conformations as they perform their function. Despite the success of sm-studies, interpretation of smFRET data is challenging, since the data can be complicated by many artifacts arising from the complex photophysical behavior of fluorophores, the dynamics and motion of fluorophores, and small amounts of contaminants. We demonstrate that the simultaneous acquisition of a maximum of fluorescence parameters by multiparameter fluorescence detection (MFD) allows for a robust assessment of all possible smFRET artifacts and offers unsurpassed capabilities for the identification and analysis of individual species present in a population of molecules. After a short introduction, the data analysis procedure is described in detail together with some experimental considerations. The merits of MFD are highlighted further with the presentation of some applications to proteins and nucleic acids, including accurate structure determination based on FRET. A toolbox is introduced to demonstrate how complications originating from the orientation, mobility, and position of fluorophores have to be taken into account when determining FRET-related distances with high accuracy. Furthermore, the broad time resolution (picoseconds to hours) of MFD allows for kinetic studies that resolve interconversion events between various subpopulations as a biomolecule of interest explores its
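The FRET-related distances mentioned above follow from the Förster relation E = 1/(1 + (r/R0)^6). A minimal sketch of inverting it (illustrative only; the corrections for fluorophore orientation, mobility, and background that MFD provides are what make such distances accurate in practice, and the R0 value below is just an example):

```python
def fret_distance(E, R0):
    """Donor-acceptor distance r from FRET efficiency E, in the units of the
    Förster radius R0, assuming the ideal single-distance Förster relation."""
    return R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)

# Illustrative value: R0 around 52 Å for a common dye pair
r = fret_distance(0.5, 52.0)
```

At E = 0.5 the distance equals R0 by construction, which is why dye pairs are chosen so that the expected distances fall near R0, where the relation is steepest and most informative.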

  14. Accurate LC Peak Boundary Detection for 16O/18O Labeled LC-MS Data

    PubMed Central

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang (SJ); Zhang, Jianqiu (Michelle)

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by co-eluting peptides, which increases quantification variance. In this paper, we propose applying accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through a model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements. PMID:24115998

  15. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    NASA Astrophysics Data System (ADS)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives depend on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and time scales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided those units are linked to the international measurement standards that are the foundation of contemporary measurement science and of standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty into analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction depends fundamentally on improving the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations are an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements quantitatively tied on-orbit to international measurement standards, and thus testable for systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  16. Enumerating the Progress of SETI Observations

    NASA Astrophysics Data System (ADS)

    Lesh, Lindsay; Tarter, Jill C.

    2015-01-01

    In a long-term project like SETI, accurate archiving of observations is imperative. This requires a database that is both easy to search - in order to know what data have or haven't been acquired - and easy to update, no matter what form the results of an observation might be reported in. If the data can all be standardized, then the parameters of the nine-dimensional search space (including space, time, frequency (and bandwidth), sensitivity, polarization and modulation scheme) of completed observations for engineered signals can be calculated and compared to the total possible search volume. Calculating a total search volume that includes more than just spatial dimensions needs an algorithm that can adapt to many different variables (e.g. each receiving instrument's capabilities). The method of calculation must also remain consistent when applied to each new SETI observation if an accurate fraction of the total search volume is to be found. Any planned observations can be evaluated against what has already been done in order to assess the efficacy of a new search. Progress against a desired goal can be evaluated, and the significance of null results can be properly understood. This paper describes a new, user-friendly archive and standardized computational tool being built at the SETI Institute to greatly ease the addition of new entries and the calculation of the search volume explored to date. The intent is to encourage new observers to better report the parameters and results of their observations, and to improve public understanding of ongoing progress and the importance of continuing the search for ETI signals into the future.

  17. Accurate calculation of diffraction-limited encircled and ensquared energy.

    PubMed

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that fall outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
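For the diffraction-limited PSF of an ideal circular aperture, the encircled energy has the classical closed form E(v) = 1 - J0²(v) - J1²(v) in reduced radial units v. A quick numerical check of the well-known 83.8% value at the first dark ring (a sketch; the paper's power series and asymptotic expressions go well beyond this closed form):

```python
from scipy.special import j0, j1

def encircled_energy(v):
    """Fraction of total energy inside reduced radius v for the ideal Airy pattern
    (Rayleigh's closed form for a circular aperture)."""
    return 1.0 - j0(v) ** 2 - j1(v) ** 2

first_dark_ring = 3.8317  # first zero of J1, i.e. the first dark ring of the Airy pattern
ee = encircled_energy(first_dark_ring)
```

The complement 1 - E(v) is the energy falling outside the radius, which is the quantity the asymptotic expressions for large detectors estimate.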

  18. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    NASA Astrophysics Data System (ADS)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard-Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makoto Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Makoto Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. 
This will in turn lead to an

  19. The Open Geospatial Consortium PUCK Standard: Building Sensor Networks with Self-Describing Instruments

    NASA Astrophysics Data System (ADS)

    O'Reilly, T. C.; Broering, A.; del Rio, J.; Headley, K. L.; Toma, D.; Bermudez, L. E.; Edgington, D.; Fredericks, J.; Manuel, A.

    2012-12-01

    Sensor technology is rapidly advancing, enabling smaller and cheaper instruments to monitor Earth's environment. It is expected that many more kinds and quantities of networked environmental sensors will be deployed in coming years. Knowledge of each instrument's command protocol is required to operate and acquire data from the network. Making sense of these data streams to create an integrated picture of environmental conditions requires that each instrument's data and metadata be accurately processed and that "suspect" data be flagged. Use of standards to operate an instrument and retrieve and describe its data generally simplifies instrument software development, integration, operation and data processing. The Open Geospatial Consortium (OGC) PUCK protocol enables instruments that describe themselves in a standard way. OGC PUCK defines a small "data sheet" that describes key instrument characteristics, and a standard protocol to retrieve the data sheet from the device itself. Data sheet fields include a universal serial number that is unique across all PUCK-compliant instruments. Other fields identify the instrument manufacturer and model. In addition to the data sheet, the instrument may also provide a "PUCK payload" which can contain additional descriptive information (e.g. a SensorML document or IEEE 1451 TEDS), as well as actual instrument "driver" code. Computers on the sensor network can use PUCK protocol to retrieve this information from installed instruments and utilize it appropriately, e.g. to automatically identify, configure and operate the instruments, and acquire and process their data. The protocol is defined for instruments with an RS232 or Ethernet interface. OGC members recently voted to adopt PUCK as a component of the OGC's Sensor Web Enablement (SWE) standards. The protocol is also supported by a consortium of hydrographic instrument manufacturers and has been implemented by several of them (https://sites.google.com/site/soscsite/). 
Thus far

  20. Observations of accreting pulsars

    NASA Technical Reports Server (NTRS)

    Prince, Thomas A.; Bildsten, Lars; Chakrabarty, Deepto; Wilson, Robert B.; Finger, Mark H.

    1994-01-01

    We discuss recent observations of accreting binary pulsars with the all-sky BATSE instrument on the Compton Gamma Ray Observatory. BATSE has detected and studied nearly half of the known accreting pulsar systems. Continuous timing studies over a two-year period have yielded accurate orbital parameters for 9 of these systems, as well as new insights into long-term accretion torque histories.

  1. Using scale dependent variation in soil properties to describe soil landscape relationships through DSM

    NASA Astrophysics Data System (ADS)

    Corstanje, Ronald; Mayr, Thomas

    2016-04-01

    DSM formalizes the relationship between soil forming factors and the landscape in which soils are formed, and aims to capture and model the intrinsic spatial variability naturally observed in soils. Covariates, the landscape factors recognized as governing soil formation, vary at different scales, and their spatial variation at some scales may be more strongly correlated with soil than at others. Soil forming factors have different domains with distinctive scales; for example, geology operates at a coarser scale than land use. By understanding the quantitative relationships between soil and soil forming factors, and their scale dependency, we can start to determine the importance of landscape-level processes in the formation and observed variation of soils. Three study areas, covered by detailed reconnaissance soil survey, were identified in the Republic of Ireland. Their different pedological and geomorphological characteristics allowed us to test scale-dependent behavior across the spectrum of conditions present in the Irish landscape. We considered three approaches: i) an empirical diagnostic tool in which DSM was applied across a range of scales (20 to 260 m²), ii) the application of wavelets to decompose the DEMs into a series of independent components at varying scales, which were then used in DSM, and finally iii) a multiscale, window-based geostatistical approach. Applied as diagnostics, we found that wavelets and window-based multiscale geostatistics were effective in identifying the main scales of interaction of the key soil landscape factors (e.g. terrain, geology, land use) and in partitioning the landscape accordingly; we were able to accurately reproduce the observed spatial variation in soils.
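The window-based multiscale idea can be illustrated with a numpy-only sketch: compute the mean within-window variance of a surface (standing in for a DEM covariate) at several window sizes, to see which scales carry most of the variation. This is purely illustrative of scale decomposition, not the study's geostatistical implementation:

```python
import numpy as np

def window_variance(surface, w):
    """Mean within-window variance of a 2D array over non-overlapping w x w windows."""
    h = (surface.shape[0] // w) * w
    k = (surface.shape[1] // w) * w
    blocks = surface[:h, :k].reshape(h // w, w, k // w, w).swapaxes(1, 2)
    return blocks.reshape(-1, w * w).var(axis=1).mean()

# Synthetic surface: a broad linear trend plus fine-scale noise
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
dem = 100.0 * x + rng.normal(0.0, 1.0, x.shape)

profile = {w: window_variance(dem, w) for w in (4, 16, 64)}
```

Small windows see mostly the fine-scale noise, while large windows also capture the broad trend, so the variance profile grows with window size; comparing such profiles for soil and covariates is one way to spot shared dominant scales.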

  2. An algorithm to detect and communicate the differences in computational models describing biological systems

    PubMed Central

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-01-01

    Motivation: Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model’s development over time. Results: Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models’ encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model’s history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. Availability and implementation: The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. Contact: martin.scharm@uni-rostock.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26490504
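The core of difference detection between XML-encoded model versions can be sketched as a comparison of element signatures. This is a toy illustration of the idea only, not the BiVeS algorithm, which additionally tracks network structure and mathematical expressions:

```python
import xml.etree.ElementTree as ET

def element_signatures(xml_text):
    """Set of (tag, id-or-name) pairs for every element in an XML document."""
    root = ET.fromstring(xml_text)
    sigs = set()
    for el in root.iter():
        ident = el.get("id") or el.get("name") or ""
        sigs.add((el.tag, ident))
    return sigs

def diff_models(old_xml, new_xml):
    """Report elements added to or removed from the newer model version."""
    old, new = element_signatures(old_xml), element_signatures(new_xml)
    return {"added": new - old, "removed": old - new}

# Two hypothetical versions of a minimal SBML-like model
v1 = '<model><species id="A"/><species id="B"/></model>'
v2 = '<model><species id="A"/><species id="C"/></model>'
delta = diff_models(v1, v2)
```

Matching elements by stable identifiers rather than by document position is what lets a diff survive reordering, one reason identifier-aware algorithms outperform plain text diffs on model files.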

  3. Describing Sequencing Results of Structural Chromosome Rearrangements with a Suggested Next-Generation Cytogenetic Nomenclature

    PubMed Central

    Ordulu, Zehra; Wong, Kristen E.; Currall, Benjamin B.; Ivanov, Andrew R.; Pereira, Shahrin; Althari, Sara; Gusella, James F.; Talkowski, Michael E.; Morton, Cynthia C.

    2014-01-01

    With recent rapid advances in genomic technologies, precise delineation of structural chromosome rearrangements at the nucleotide level is becoming increasingly feasible. In this era of “next-generation cytogenetics” (i.e., an integration of traditional cytogenetic techniques and next-generation sequencing), a consensus nomenclature is essential for accurate communication and data sharing. Currently, nomenclature for describing the sequencing data of these aberrations is lacking. Herein, we present a system called Next-Gen Cytogenetic Nomenclature, which is concordant with the International System for Human Cytogenetic Nomenclature (2013). This system starts with the alignment of rearrangement sequences by BLAT or BLAST (alignment tools) and arrives at a concise and detailed description of chromosomal changes. To facilitate usage and implementation of this nomenclature, we are developing a program designated BLA(S)T Output Sequence Tool of Nomenclature (BOSToN), a demonstrative version of which is accessible online. A standardized characterization of structural chromosomal rearrangements is essential both for research analyses and for application in the clinical setting. PMID:24746958

  4. Can an ab initio three-body virial equation describe the mercury gas phase?

    PubMed

    Wiebke, J; Wormit, M; Hellmann, R; Pahl, E; Schwerdtfeger, P

    2014-03-27

    We report a sixth-order ab initio virial equation of state (EOS) for mercury. The virial coefficients were determined in the temperature range from 500 to 7750 K using a three-body approximation to the N-body interaction potential. The underlying two-body and three-body potentials were fitted to highly accurate Coupled-Cluster interaction energies of Hg2 (Pahl, E.; Figgen, D.; Thierfelder, C.; Peterson, K. A.; Calvo, F.; Schwerdtfeger, P. J. Chem. Phys. 2010, 132, 114301-1) and equilateral-triangular configurations of Hg3. We find the virial coefficients of order four and higher to be negative and to have large absolute values over the entire temperature range considered. The validity of our three-body, sixth-order EOS seems to be limited to small densities of about 1.5 g cm(-3) and somewhat higher densities at higher temperatures. Termwise analysis and comparison to experimental gas-phase data suggest a small convergence radius of the virial EOS itself as well as a failure of the three-body interaction model (i.e., poor convergence of the many-body expansion for mercury). We conjecture that the nth-order term of the virial EOS is to be evaluated from the full n-body interaction potential for a quantitative picture. Consequently, an ab initio three-body virial equation cannot describe the mercury gas phase. PMID:24547987
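A virial EOS truncated at finite order expresses pressure as a density series, p = ρRT(1 + B2ρ + B3ρ² + ...). A minimal sketch of evaluating such a series (the coefficients below are placeholders, not the paper's ab initio values for mercury):

```python
def virial_pressure(rho, T, B, R=8.314):
    """Pressure (Pa) from a truncated virial series.
    rho: molar density (mol m^-3); T: temperature (K);
    B: coefficients [B2, B3, ...] in matching molar-volume units."""
    series = 1.0
    for n, Bn in enumerate(B, start=2):
        series += Bn * rho ** (n - 1)
    return rho * R * T * series

# With all higher coefficients zero the series reduces to the ideal-gas law
p_ideal = virial_pressure(40.0, 600.0, [0.0, 0.0, 0.0, 0.0, 0.0])
# A negative B2 (placeholder value) lowers the pressure below ideal
p_real = virial_pressure(40.0, 600.0, [-1.0e-4, 0.0, 0.0, 0.0, 0.0])
```

The paper's observation that the fourth- and higher-order coefficients are large and negative implies exactly this kind of term-by-term analysis: each added term should shrink, and when it does not, the series' convergence radius has been exceeded.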

  5. Monte Carlo package for simulating radiographic images of realistic anthropomorphic phantoms described by triangle meshes

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Kyprianou, Iacovos; Badano, Aldo; Sempau, Josep; Myers, Kyle J.

    2007-03-01

    X-ray imaging system optimization increases the benefit-to-cost ratio by reducing the radiation dose to the patient while maximizing image quality. We present a new simulation tool for the generation of realistic medical x-ray images for assessment and optimization of complete imaging systems. The Monte Carlo code simulates radiation transport physics using the subroutine package PENELOPE, which accurately simulates the transport of electrons and photons within the typical medical imaging energy range. The new code implements a novel object-oriented geometry package that allows simulations with homogeneous objects of arbitrary shape described by triangle meshes. The flexibility of this code, which uses the industry-standard PLY input-file format, allows the use of detailed anatomical models developed using computer-aided design tools applied to segmented CT and MRI data. The use of triangle meshes greatly simplifies the ray-tracing algorithm without reducing the generality of the code, since most surface models can be tessellated into triangles while retaining their geometric details. Our algorithm incorporates an octree spatial data structure to sort the triangles and accelerate the simulation, reaching execution speeds comparable to the original quadric geometry model of PENELOPE. Coronary angiograms were simulated using a tessellated version of the NURBS-based Cardiac-Torso (NCAT) phantom. The phantom models 330 objects comprising, in total, 5 million triangles. The dose received by each organ and the contribution of the different scattering processes to the final image were studied in detail.
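The kernel of any triangle-mesh ray-tracing geometry package is the ray-triangle intersection test. A minimal Möller-Trumbore sketch (illustrative only; the code described above additionally sorts triangles into an octree so that only nearby candidates are tested):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore test: distance along the ray to the triangle, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:               # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv          # distance along the (unit) direction
    return t if t > eps else None

# Unit triangle in the z = 1 plane, probed by a ray along +z from the origin
tri = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
hit = ray_triangle(np.zeros(3), np.array([0.0, 0.0, 1.0]), *tri)
```

Per-intersection cost like this is why spatial sorting matters: with millions of triangles per phantom, culling candidates with an octree is what keeps the simulation speed comparable to quadric geometries.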

  6. The usefulness of higher-order constitutive relations for describing the Knudsen layer.

    SciTech Connect

    Gallis, Michail A.; Lockerby, Duncan A.; Reese, Jason M.

    2005-03-01

    The Knudsen layer is an important rarefaction phenomenon in gas flows in and around microdevices. Its accurate and efficient modeling is of critical importance in the design of such systems and in predicting their performance. In this paper we investigate the potential that higher-order continuum equations may have to model the Knudsen layer, and compare their predictions to high-accuracy DSMC (direct simulation Monte Carlo) data, as well as a standard result from kinetic theory. We find that, for a benchmark case, the most common higher-order continuum equation sets (Grad's 13 moment, Burnett, and super-Burnett equations) cannot capture the Knudsen layer. Variants of these equation families have, however, been proposed and some of them can qualitatively describe the Knudsen layer structure. To make quantitative comparisons, we obtain additional boundary conditions (needed for unique solutions to the higher-order equations) from kinetic theory. However, we find the quantitative agreement with kinetic theory and DSMC data is only slight.

  7. A method for analysis of phenotypic change for phenotypes described by high-dimensional data.

    PubMed

    Collyer, M L; Sekora, D J; Adams, D C

    2015-10-01

    The analysis of phenotypic change is important for several evolutionary biology disciplines, including phenotypic plasticity, evolutionary developmental biology, morphological evolution, physiological evolution, evolutionary ecology and behavioral evolution. It is common for researchers in these disciplines to work with multivariate phenotypic data. When phenotypic variables exceed the number of research subjects--data called 'high-dimensional data'--researchers are confronted with analytical challenges. Parametric tests that require high observation to variable ratios present a paradox for researchers, as eliminating variables potentially reduces effect sizes for comparative analyses, yet test statistics require more observations than variables. This problem is exacerbated with data that describe 'multidimensional' phenotypes, whereby a description of phenotype requires high-dimensional data. For example, landmark-based geometric morphometric data use the Cartesian coordinates of (potentially) many anatomical landmarks to describe organismal shape. Collectively such shape variables describe organism shape, although the analysis of each variable, independently, offers little benefit for addressing biological questions. Here we present a nonparametric method of evaluating effect size that is not constrained by the number of phenotypic variables, and motivate its use with example analyses of phenotypic change using geometric morphometric data. Our examples contrast different characterizations of body shape for a desert fish species, associated with measuring and comparing sexual dimorphism between two populations. We demonstrate that using more phenotypic variables can increase effect sizes, and allow for stronger inferences. PMID:25204302

  8. How accurate are the weather forecasts for Bierun (southern Poland)?

    NASA Astrophysics Data System (ADS)

    Gawor, J.

    2012-04-01

    Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It joins natural curiosity about everyday weather and scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been taken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research the automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models, i.e. COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by Naval Research Laboratory Monterey, USA; it runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland and COSMO (The Consortium for Small-scale Modelling) used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for two considered time ranges? 3) Which precipitation threshold is the most predictable? 
4) Why
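The verification statistics mentioned above are straightforward to compute. A minimal sketch of the bias for continuous variables and of the hit rate and false alarm ratio from a 2x2 contingency table (all counts and values below are made up for illustration):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Hit rate and false alarm ratio from a 2x2 precipitation contingency table."""
    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    return hit_rate, false_alarm_ratio

def mean_error(forecast, observed):
    """Bias of a continuous forecast (e.g. air temperature): mean of forecast minus observed."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

# Hypothetical daily precipitation verification for one threshold
h, far = verification_scores(hits=42, misses=8, false_alarms=18, correct_negatives=332)
# Hypothetical temperature forecasts vs. station observations (deg C)
bias = mean_error([1.5, 3.0, -2.0], [1.0, 3.5, -2.5])
```

Comparing these scores between the Day 1 and Day 2 prediction periods, and across precipitation thresholds, is exactly the kind of discussion question the activity raises.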

  9. Accurate Sound Velocity Measurement in Ocean Near-Surface Layer

    NASA Astrophysics Data System (ADS)

    Lizarralde, D.; Xu, B. L.

    2015-12-01

    Accurate sound velocity measurement is essential in oceanography because sound is the only wave that can propagate in sea water. Because it is difficult to measure, sound velocity is often not measured directly but instead calculated from water temperature, salinity, and depth, which are much easier to obtain. This research develops a new method to directly measure the sound velocity in the ocean's near-surface layer using multi-channel seismic (MCS) hydrophones. The system consists of a device that emits a sound pulse and a long cable with hundreds of hydrophones to record the sound. The distance between the source and each receiver is the offset; the time it takes the pulse to arrive at each receiver is the travel time. Errors in measuring offset and travel time would limit the accuracy of a velocity calculated from a single offset and travel time. However, by analyzing the direct arrival signal from hundreds of receivers, the velocity can be determined from the slope of a straight line in the travel time-offset graph. Errors in distance and time measurement only shift the line up or down and do not affect the slope. This research uses MCS data from survey MGL1408, obtained from the Marine Geoscience Data System and processed with Seismic Unix. The sound velocity can be directly measured to an accuracy of better than 1 m/s. The included graph shows the directly measured velocity versus the calculated velocity along 100 km across the Mid-Atlantic continental margin. The directly measured velocity agrees well with the velocity computed from temperature and salinity. In addition, fine variations in the sound velocity can be observed that are hardly visible in the calculated velocity. Using this methodology, both large-area acquisition and fine resolution can be achieved. This directly measured sound velocity will be a new and powerful tool in oceanography.
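The slope idea can be sketched by regressing direct-arrival travel time on offset: the velocity is the inverse of the fitted slope, and any constant offset or timing error moves only the intercept. The geometry and numbers below are synthetic, not the MGL1408 data:

```python
import numpy as np

def direct_arrival_velocity(offsets_m, times_s):
    """Sound speed (m/s) from a least-squares fit of travel time vs. offset."""
    slope, _intercept = np.polyfit(offsets_m, times_s, 1)
    return 1.0 / slope

# Hypothetical streamer: receivers every 12.5 m from 150 m to 6 km offset
offsets = np.arange(150.0, 6000.0, 12.5)
# True velocity 1500 m/s, plus a constant 20 ms timing error on every trace
times = offsets / 1500.0 + 0.020

v = direct_arrival_velocity(offsets, times)
```

The 20 ms systematic error is absorbed entirely by the intercept, illustrating why the multi-receiver fit is so much more robust than a single offset/time pair.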

  10. A fast and accurate computational approach to protein ionization

    PubMed Central

    Spassov, Velin Z.; Yan, Lisa

    2008-01-01

    We report a very fast and accurate physics-based method to calculate pH-dependent electrostatic effects in protein molecules and to predict the pK values of individual sites of titration. In addition, a CHARMm-based algorithm is included to construct and refine the spatial coordinates of all hydrogen atoms at a given pH. The present method combines electrostatic energy calculations based on the Generalized Born approximation with an iterative mobile clustering approach to calculate the equilibria of proton binding to multiple titration sites in protein molecules. The use of the GBIM (Generalized Born with Implicit Membrane) CHARMm module makes it possible to model not only water-soluble proteins but membrane proteins as well. The method includes a novel algorithm for preliminary refinement of hydrogen coordinates. Another difference from existing approaches is that, instead of monopeptides, a set of relaxed pentapeptide structures are used as model compounds. Tests on a set of 24 proteins demonstrate the high accuracy of the method. On average, the RMSD between predicted and experimental pK values is close to 0.5 pK units on this data set, and the accuracy is achieved at very low computational cost. The pH-dependent assignment of hydrogen atoms also shows very good agreement with protonation states and hydrogen-bond network observed in neutron-diffraction structures. The method is implemented as a computational protocol in Accelrys Discovery Studio and provides a fast and easy way to study the effect of pH on many important mechanisms such as enzyme catalysis, ligand binding, protein–protein interactions, and protein stability. PMID:18714088

  11. Accurate source location from P waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.

    2015-12-01

Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (> 100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with the 3D strain Green's tensor database method to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by the 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-squares misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as a posterior error estimation. We find that the scattered waves are mainly due to topography in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data are used, the 'best' solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust with the ranges of random noise in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.

  12. Accurate source location from waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
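The grid search over a least-squares misfit can be illustrated with a deliberately simplified stand-in: trial sources on a 2-D grid are compared against observed arrival times in a homogeneous medium. The station geometry, velocity, and use of travel times (rather than full P and P-coda waveforms from a strain Green's tensor database) are assumptions for illustration only.

```python
import numpy as np

# Simplified grid-search location sketch: choose the trial source whose
# predicted arrivals minimize the least-squares misfit to the observations.
rng = np.random.default_rng(1)
stations = rng.uniform(-10e3, 10e3, size=(8, 2))   # station x, y in metres
v = 5000.0                                         # assumed P velocity, m/s
true_src = np.array([1200.0, -800.0])
t_obs = np.linalg.norm(stations - true_src, axis=1) / v

xs = np.arange(-2000.0, 2001.0, 100.0)             # search grid, 100 m spacing
ys = np.arange(-2000.0, 2001.0, 100.0)
best, best_misfit = None, np.inf
for x in xs:
    for y in ys:
        t_pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
        misfit = np.sum((t_obs - t_pred) ** 2)     # least-squares misfit
        if misfit < best_misfit:
            best, best_misfit = (x, y), misfit
```

`best` lands on the grid node nearest the true source; the study's method replaces the travel-time misfit with a waveform misfit and adds a confidence interval around the minimum.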

  13. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    SciTech Connect

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  14. Position observations of comet Hyakutake.

    NASA Astrophysics Data System (ADS)

    Wu, Guangjie; Ji, Kaifan

On March 16, 1996, the authors used a newly developed CCD camera attached to the 1-meter telescope at Yunnan Observatory to take photometric observations of comet Hyakutake. The positions have been measured accurately. The observed images clearly show that the cometary coma is very large and basically symmetric, with a peach shape.

  15. Accurate GPS Time-Linked data Acquisition System (ATLAS II) user's manual.

    SciTech Connect

    Jones, Perry L.; Zayas, Jose R.; Ortiz-Moyet, Juan

    2004-02-01

    The Accurate Time-Linked data Acquisition System (ATLAS II) is a small, lightweight, time-synchronized, robust data acquisition system that is capable of acquiring simultaneous long-term time-series data from both a wind turbine rotor and ground-based instrumentation. This document is a user's manual for the ATLAS II hardware and software. It describes the hardware and software components of ATLAS II, and explains how to install and execute the software.

  16. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  17. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, Charles V.; Killian, E. Wayne; Grafwallner, Ervin G.; Kynaston, Ronnie L.; Johnson, Larry O.; Randolph, Peter D.

    1996-01-01

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector.

  18. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, C.V.; Killian, E.W.; Grafwallner, E.G.; Kynaston, R.L.; Johnson, L.O.; Randolph, P.D.

    1996-09-03

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector. 7 figs.

  19. An adaptive, formally second order accurate version of the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.

    2007-04-01

    Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves

  20. A numerical model of the fracture healing process that describes tissue development and revascularisation.

    PubMed

    Simon, U; Augat, P; Utz, M; Claes, L

    2011-01-01

    A dynamic model was developed to simulate complex interactions of mechanical stability, revascularisation and tissue differentiation in secondary fracture healing. Unlike previous models, blood perfusion was included as a spatio-temporal state variable to simulate the revascularisation process. A 2D, axisymmetrical finite element model described fracture callus mechanics. Fuzzy logic rules described the following biological processes: angiogenesis, intramembranous ossification, chondrogenesis, cartilage calcification and endochondral ossification, all of which depended on local strain state and local blood perfusion. In order to evaluate how the predicted revascularisation depended on the mechanical environment, we simulated two different healing cases according to two groups of transverse metatarsal osteotomies in sheep with different axial stability. The model predicted slower revascularisation and delayed bony bridging for the less stable case, which corresponded well to the experimental observations. A revascularisation sensitivity analysis demonstrated the potential of the model to account for different conditions regarding the blood supply. PMID:21086207

  1. History and progress on accurate measurements of the Planck constant

    NASA Astrophysics Data System (ADS)

    Steiner, Richard

    2013-01-01

The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10⁻³⁴ J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N_A. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred-year-old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10⁸ from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved

  2. History and progress on accurate measurements of the Planck constant.

    PubMed

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10(-34) J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N(A). As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10(8) from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the

  3. Stochastic Oscillations of General Relativistic Disks Described by a Fractional Langevin Equation with Fractional Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Zhi-Yun, Wang; Pei-Jie, Chen

    2016-06-01

A generalized Langevin equation driven by fractional Brownian motion is used to describe the vertical oscillations of general relativistic disks. By means of numerical calculation, the displacements, velocities and luminosities of oscillating disks are explicitly obtained for different Hurst exponents H. The results show that as H increases, the energies and luminosities of the oscillating disk are enhanced, and the spectral slope at high frequencies of the power spectral density of the disk luminosity is also increased. This could explain the observational features related to the Intra Day Variability of BL Lac objects.

  4. Merging quantum-chemistry with B-splines to describe molecular photoionization

    NASA Astrophysics Data System (ADS)

    Argenti, L.; Marante, C.; Klinker, M.; Corral, I.; Gonzalez, J.; Martin, F.

    2016-05-01

    Theoretical description of observables in attosecond pump-probe experiments requires a good representation of the system's ionization continuum. For polyelectronic atoms and molecules, however, this is still a challenge, due to the complicated short-range structure of correlated electronic wavefunctions. Whereas quantum chemistry packages (QCP) implementing sophisticated methods to compute bound electronic molecular states are well established, comparable tools for the continuum are not widely available yet. To tackle this problem, we have developed a new approach that, by means of a hybrid Gaussian-B-spline basis, interfaces existing QCPs with close-coupling scattering methods. To illustrate the viability of this approach, we report results for the multichannel ionization of the helium atom and of the hydrogen molecule that are in excellent agreement with existing accurate benchmarks. These findings, together with the flexibility of QCPs, make of this approach a good candidate for the theoretical study of the ionization of poly-electronic systems. FP7/ERC Grant XCHEM 290853.

  5. Observing System Simulation Experiments

    NASA Technical Reports Server (NTRS)

    Prive, Nikki

    2015-01-01

This presentation gives an overview of Observing System Simulation Experiments (OSSEs). The components of an OSSE are described, along with discussion of the process for validating, calibrating, and performing experiments.

  6. Accurate body composition measures from whole-body silhouettes

    PubMed Central

    Xie, Bowen; Avila, Jesus I.; Ng, Bennett K.; Fan, Bo; Loo, Victoria; Gilsanz, Vicente; Hangartner, Thomas; Kalkwarf, Heidi J.; Lappe, Joan; Oberfield, Sharon; Winer, Karen; Zemel, Babette; Shepherd, John A.

    2015-01-01

Purpose: Obesity and its consequences, such as diabetes, are global health issues that burden about 171 × 10⁶ adult individuals worldwide. Fat mass index (FMI, kg/m²), fat-free mass index (FFMI, kg/m²), and percent fat mass may be useful to evaluate under- and overnutrition and muscle development in a clinical or research environment. This proof-of-concept study tested whether frontal whole-body silhouettes could be used to accurately measure body composition parameters using active shape modeling (ASM) techniques. Methods: Binary shape images (silhouettes) were generated from the skin outline of dual-energy x-ray absorptiometry (DXA) whole-body scans of 200 healthy children of ages from 6 to 16 yr. The silhouette shape variation from the average was described using an ASM, which computed principal components for unique modes of shape. Predictive models were derived from the modes for FMI, FFMI, and percent fat using stepwise linear regression. The models were compared to simple models using demographics alone [age, sex, height, weight, and body mass index z-scores (BMIZ)]. Results: The authors found that 95% of the shape variation of the sampled population could be explained using 26 modes. In most cases, the body composition variables could be predicted similarly between demographics-only and shape-only models. However, the combination of shape with demographics improved all estimates of boys and girls compared to the demographics-only model. The best prediction models for FMI, FFMI, and percent fat agreed with the actual measures with R2 adj. (the coefficient of determination adjusted for the number of parameters used in the model equation) values of 0.86, 0.95, and 0.75 for boys and 0.90, 0.89, and 0.69 for girls, respectively. Conclusions: Whole-body silhouettes in children may be useful to derive estimates of body composition including FMI, FFMI, and percent fat. These results support the feasibility of measuring body composition variables from simple
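The ASM idea in the study above, principal-component "modes" of silhouette shape feeding a linear predictive model, can be sketched on synthetic stand-in data. The landmark count, the 95% mode-selection threshold, and the target variable below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

# Synthetic stand-in for silhouette landmarks: 200 subjects, 20 (x, y) points.
rng = np.random.default_rng(2)
n, d = 200, 40
shapes = rng.normal(size=(n, d))
X = shapes - shapes.mean(axis=0)                # centre on the mean shape

# Principal components of shape variation play the role of the ASM "modes"
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(explained, 0.95)) + 1   # modes covering 95% of variance
scores = X @ Vt[:k].T                           # per-subject mode scores

# Linear model of a body-composition target (stand-in for FMI) on the scores
fmi = shapes @ rng.normal(size=d) + rng.normal(scale=0.1, size=n)
coef, *_ = np.linalg.lstsq(np.c_[np.ones(n), scores], fmi, rcond=None)
```

The study additionally used stepwise selection over the modes and compared against demographics-only models; this sketch keeps only the mode-decomposition-plus-regression core.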

  7. Quantitative Proteome Analysis of Human Plasma Following in vivo Lipopolysaccharide Administration using 16O/18O Labeling and the Accurate Mass and Time Tag Approach

    PubMed Central

    Qian, Wei-Jun; Monroe, Matthew E.; Liu, Tao; Jacobs, Jon M.; Anderson, Gordon A.; Shen, Yufeng; Moore, Ronald J.; Anderson, David J.; Zhang, Rui; Calvano, Steve E.; Lowry, Stephen F.; Xiao, Wenzhong; Moldawer, Lyle L.; Davis, Ronald W.; Tompkins, Ronald G.; Camp, David G.; Smith, Richard D.

    2007-01-01

Identification of novel diagnostic or therapeutic biomarkers from human blood plasma would benefit significantly from quantitative measurements of the proteome constituents over a range of physiological conditions. Herein we describe an initial demonstration of proteome-wide quantitative analysis of human plasma. The approach utilizes post-digestion trypsin-catalyzed 16O/18O peptide labeling, two-dimensional liquid chromatography (LC)-Fourier transform ion cyclotron resonance (FTICR) mass spectrometry, and the accurate mass and time (AMT) tag strategy to identify and quantify peptides/proteins from complex samples. A peptide accurate mass and LC-elution time AMT tag database was initially generated using tandem mass spectrometry (MS/MS) following extensive multidimensional LC separations to provide the basis for subsequent peptide identifications. The AMT tag database contains >8,000 putative identified peptides, providing 938 confident plasma protein identifications. The quantitative approach was applied without depletion of high-abundance proteins for comparative analyses of plasma samples from an individual prior to and 9 h after lipopolysaccharide (LPS) administration. Accurate quantification of changes in protein abundance was demonstrated by both 1:1 labeling of control plasma and the comparison between the plasma samples following LPS administration. A total of 429 distinct plasma proteins were quantified from the comparative analyses, and the protein abundances for 25 proteins, including several known inflammatory response mediators, were observed to change significantly following LPS administration. PMID:15753121

  8. Observing Double Stars

    NASA Astrophysics Data System (ADS)

    Genet, Russell M.; Fulton, B. J.; Bianco, Federica B.; Martinez, John; Baxter, John; Brewer, Mark; Carro, Joseph; Collins, Sarah; Estrada, Chris; Johnson, Jolyon; Salam, Akash; Wallen, Vera; Warren, Naomi; Smith, Thomas C.; Armstrong, James D.; McGaughey, Steve; Pye, John; Mohanan, Kakkala; Church, Rebecca

    2012-05-01

    Double stars have been systematically observed since William Herschel initiated his program in 1779. In 1803 he reported that, to his surprise, many of the systems he had been observing for a quarter century were gravitationally bound binary stars. In 1830 the first binary orbital solution was obtained, leading eventually to the determination of stellar masses. Double star observations have been a prolific field, with observations and discoveries - often made by students and amateurs - routinely published in a number of specialized journals such as the Journal of Double Star Observations. All published double star observations from Herschel's to the present have been incorporated in the Washington Double Star Catalog. In addition to reviewing the history of visual double stars, we discuss four observational technologies and illustrate these with our own observational results from both California and Hawaii on telescopes ranging from small SCTs to the 2-meter Faulkes Telescope North on Haleakala. Two of these technologies are visual observations aimed primarily at published "hands-on" student science education, and CCD observations of both bright and very faint doubles. The other two are recent technologies that have launched a double star renaissance. These are lucky imaging and speckle interferometry, both of which can use electron-multiplying CCD cameras to allow short (30 ms or less) exposures that are read out at high speed with very low noise. Analysis of thousands of high speed exposures allows normal seeing limitations to be overcome so very close doubles can be accurately measured.
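The frame-selection core of lucky imaging mentioned above can be sketched as follows. The sharpness metric, the 1% keep fraction, and the synthetic frames are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

# Lucky-imaging sketch: rank many short exposures by a cheap sharpness metric
# and stack only the best few percent, so moments of good seeing dominate.
rng = np.random.default_rng(3)
frames = rng.random((1000, 32, 32))        # stand-in for 30 ms exposures

def sharpness(img):
    # Peak intensity is a common inexpensive metric; gradient energy works too.
    return img.max()

scores = np.array([sharpness(f) for f in frames])
keep = np.argsort(scores)[-int(0.01 * len(frames)):]   # best 1% of frames
stacked = frames[keep].mean(axis=0)                    # shift-and-add omitted
```

A real pipeline would also register each kept frame on the brightest speckle before averaging; speckle interferometry instead works on the power spectra of the raw short exposures.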

  9. Nonexposure accurate location K-anonymity algorithm in LBS.

    PubMed

    Jia, Jinying; Zhang, Fengli

    2014-01-01

This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not need all the users to report their locations all the time, and can generate a smaller ASR. PMID:24605060
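A minimal sketch of the grid-ID idea, not the paper's algorithms: users report only the ID of the grid cell they occupy, and the anonymizer grows a block of cells around the querier until it covers at least K reported users, so accurate coordinates never leave the device. The cell size and square-expansion strategy are assumptions for illustration.

```python
# Hypothetical grid-ID cloaking sketch.
CELL = 100.0  # metres per grid cell (assumed)

def cell_id(x, y):
    """Map an accurate coordinate to the ID of its grid cell."""
    return (int(x // CELL), int(y // CELL))

def cloak(querier_cell, reported_cells, k):
    """Expand a square block of cells around the querier until >= k users.

    Assumes at least k users have reported cells; operates on IDs only.
    """
    cx, cy = querier_cell
    r = 0
    while True:
        block = {(cx + dx, cy + dy)
                 for dx in range(-r, r + 1) for dy in range(-r, r + 1)}
        if sum(1 for c in reported_cells if c in block) >= k:
            return block      # the anonymous spatial region (ASR), as cell IDs
        r += 1

users = [cell_id(*p) for p in [(120, 80), (450, 90), (130, 310), (900, 900)]]
asr = cloak(cell_id(120, 80), users, k=3)
```

The returned ASR is a set of cell IDs, which the LBS provider can translate to a rectangle without ever learning any user's exact position.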

  10. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not need all the users to report their locations all the time, and can generate a smaller ASR. PMID:24605060

  11. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
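The apportionment step can be sketched with invented factors (the values below are illustrative, not the Detroit TAFs): an annual-average daily traffic estimate is scaled by monthly, day-of-week, and hourly allocation factors to yield an hourly volume.

```python
# Hypothetical TAF-based apportionment sketch; all factor values are invented.
aadt = 24000.0                 # vehicles/day, annual-average daily traffic
taf_month = {"Jul": 1.08}      # July busier than the annual mean
taf_day = {"Wed": 1.05}        # mid-week busier than the weekly mean
taf_hour = {17: 0.075}         # share of the day's traffic in the 17:00 hour

def hourly_volume(month, day, hour):
    """Apportion the annual average to one hour via multiplicative TAFs."""
    return aadt * taf_month[month] * taf_day[day] * taf_hour[hour]

v = hourly_volume("Jul", "Wed", 17)   # vehicles expected in that hour
```

Separate factor tables would be kept for commercial vehicles and for Saturdays, Sundays, and holidays, per the separation the analysis found necessary.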

  12. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2013-05-01

Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and have been used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession, and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated far more accurately than with the latter alone. Under the assumption that the aquifer basis is sub-horizontal, the mean transit time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater transit time that can refine estimates obtained from tritium measurements. The approach is illustrated in a case study predicting the atrazine concentration trend in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the actual time of trend reversal and the rate of change agreed extremely well with the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was however poorer for the springs displaying the most gentle recessions, possibly indicating a stronger influence of continuous groundwater recharge during the summer months.
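The recession-to-transit-time link can be sketched under a linear-reservoir assumption, which is a simplification of the Boussinesq treatment in the paper: for an exponential recession Q(t) = Q0·exp(-a·t), the recession constant a is the slope of ln Q versus t, and 1/a sets a mean-transit-time scale for the saturated zone. The discharge series below is synthetic.

```python
import numpy as np

# Linear-reservoir recession sketch with synthetic dry-season discharge data.
t = np.arange(0.0, 120.0)                  # days since recession onset
a_true = 0.02                              # 1/day, assumed recession constant
q = 350.0 * np.exp(-a_true * t)            # spring discharge, arbitrary units

slope, _ = np.polyfit(t, np.log(q), 1)     # ln Q is linear in t
a_est = -slope
mean_transit_days = 1.0 / a_est            # ~50 days under these assumptions
```

Real recession limbs carry noise and summer recharge, which is exactly the complication the paper flags for the springs with the most gentle recessions.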

  13. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    PubMed Central

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-01-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflect vehicle use, congestion and safety issues, and traffic activity is a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need for separate TAFs for total and commercial vehicles, and for weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates. PMID:25844042
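
    The apportionment step that TAF-based models perform is simple multiplicative scaling. A hedged sketch with made-up factor values (the study's fitted TAFs are not reproduced here):

```python
# Apportion an annual average daily traffic (AADT) estimate into an hourly
# volume using multiplicative temporal allocation factors (TAFs). The factor
# values below are illustrative, not those fitted from the Detroit stations.
def hourly_volume(aadt, month_taf, day_taf, hour_taf):
    """Hourly volume = AADT scaled by month-of-year, day-of-week and
    hour-of-day factors (each dimensionless, averaging ~1 over its period),
    divided by 24 to convert a daily total to an hourly rate."""
    return aadt * month_taf * day_taf * hour_taf / 24.0

# e.g. 40,000 veh/day AADT, a typical summer weekday, morning-peak hour
print(round(hourly_volume(40_000, month_taf=1.05, day_taf=1.10, hour_taf=1.80), 1))  # 3465.0
```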

  14. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas with significant petrochemical exploration, drilling, transport, processing, or refining. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high-quality, quantitative measurements of methane fluxes in these different environments have not been available, owing both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from tens to millions of square meters) and are heterogeneous emitters, i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emission rate is given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer.
We present detailed methane flux
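
    The far-field ratio calculation at the heart of the tracer dilution method can be sketched as follows. The function name and numbers are illustrative, and the background-subtraction and molar-mass corrections used in practice are omitted for brevity:

```python
def methane_flux(tracer_release_rate, c_methane, c_tracer):
    """Far-field tracer dilution estimate: once the methane and tracer plumes
    are well mixed, Q_CH4 = Q_tracer * (C_CH4 / C_tracer), using downwind
    concentration enhancements. Units of the release rate carry through."""
    return tracer_release_rate * (c_methane / c_tracer)

# e.g. tracer released at 0.5 kg/h; downwind enhancements of 80 ppb methane
# against 10 ppb tracer (illustrative numbers) -> 4.0 kg/h
print(methane_flux(0.5, 80.0, 10.0))  # 4.0
```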

  15. A simple framework to describe the regulation of gene expression in prokaryotes.

    PubMed

    Alves, Filipa; Dilão, Rui

    2005-05-01

    Based on the bimolecular mass action law and the derived mass conservation laws, we propose a mathematical framework in order to describe the regulation of gene expression in prokaryotes. It is shown that the derived models have all the qualitative properties of the activation and inhibition regulatory mechanisms observed in experiments. The basic construction considers genes as templates for protein production, where regulation processes result from activators or repressors connecting to DNA binding sites. All the parameters in the models have a straightforward biological meaning. After describing the general properties of the basic mechanisms of positive and negative gene regulation, we apply this framework to the self-regulation of the trp operon and to the genetic switch involved in the regulation of the lac operon. One of the consequences of this approach is the existence of conserved quantities depending on the initial conditions that tune bifurcations of fixed points. This leads naturally to a simple explanation of threshold effects as observed in some experiments. PMID:15948632
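
    The kind of bimolecular mass-action model the framework builds on can be illustrated with a single repressor-operator binding reaction; the rate constants and concentrations below are hypothetical, not taken from the paper:

```python
# Mass-action sketch of repressor-operator binding: R + O <-> C with on-rate
# k_on and off-rate k_off. The conservation law O + C = O_tot is built in;
# transcription is taken as proportional to the free-operator fraction.
k_on, k_off = 1.0, 0.5        # illustrative units: 1/(nM s) and 1/s
o_tot, r = 1.0, 2.0           # total operator and (excess) repressor, nM

o, dt = o_tot, 1e-3
for _ in range(200_000):      # forward-Euler integration to steady state
    c = o_tot - o             # conservation: bound complex
    o += dt * (-k_on * o * r + k_off * c)

kd = k_off / k_on             # analytic check: free fraction = Kd / (Kd + R)
print(round(o / o_tot, 3), round(kd / (kd + r), 3))  # 0.2 0.2
```

    The conserved total O_tot acting as a tunable parameter is exactly the kind of initial-condition-dependent quantity the abstract points to as the source of threshold effects.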

  16. Development of a Composite Non-Electrostatic Surface Complexation Model Describing Plutonium Sorption to Aluminosilicates

    SciTech Connect

    Powell, B A; Kersting, A; Zavarin, M; Zhao, P

    2008-10-28

    Due to their ubiquity in nature and chemical reactivity, aluminosilicate minerals play an important role in retarding actinide subsurface migration. However, very few studies have examined Pu interaction with clay minerals in sufficient detail to produce a credible mechanistic model of its behavior. In this work, Pu(IV) and Pu(V) interactions with silica, gibbsite (Aloxide), and Na-montmorillonite (smectite clay) were examined as a function of time and pH. Sorption of Pu(IV) and Pu(V) to gibbsite and silica increased with pH (4 to 10). The Pu(V) sorption edge shifted to lower pH values over time and approached that of Pu(IV). This behavior is apparently due to surface mediated reduction of Pu(V) to Pu(IV). Surface complexation constants describing Pu(IV)/Pu(V) sorption to aluminol and silanol groups were developed from the silica and gibbsite sorption experiments and applied to the montmorillonite dataset. The model provided an acceptable fit to the montmorillonite sorption data for Pu(V). In order to accurately predict Pu(IV) sorption to montmorillonite, the model required inclusion of ion exchange. The objective of this work is to measure the sorption of Pu(IV) and Pu(V) to silica, gibbsite, and smectite (montmorillonite). Aluminosilicate minerals are ubiquitous at the Nevada National Security Site and improving our understanding of Pu sorption to aluminosilicates (smectite clays in particular) is essential to the accurate prediction of Pu transport rates. These data will improve the mechanistic approach for modeling the hydrologic source term (HST) and provide sorption Kd parameters for use in CAU models. In both alluvium and tuff, aluminosilicates have been found to play a dominant role in the radionuclide retardation because their abundance is typically more than an order of magnitude greater than other potential sorbing minerals such as iron and manganese oxides (e.g. Vaniman et al., 1996). 
The sorption database used in recent HST models (Carle et al., 2006

  17. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to correct the fiber refractive index to allow accurate fiber length measurement.
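
    The underlying conversion from measured delay to length is L = c·Δt / n_g. A minimal sketch, assuming one-way propagation and a typical group index for standard single-mode fiber near 1550 nm (not the calibrated value from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def fiber_length(delta_t_s, group_index=1.4682):
    """Time-of-flight length: L = c * dt / n_g for one-way propagation of a
    modulated pulse. The default group index is a typical value for standard
    single-mode fiber near 1550 nm, used here only for illustration."""
    return C * delta_t_s / group_index

# a measured one-way delay of 48.97 us corresponds to roughly 10 km of fiber
print(round(fiber_length(48.97e-6) / 1000, 2))  # 10.0 (km)
```

    This is also why the paper's final point matters: any error in the assumed refractive index maps linearly into the reported length.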

  18. Thermoluminescence systems with two or more glow peaks described by anomalous kinetic parameters

    SciTech Connect

    Levy, P.W.

    1983-01-01

    The usual first and second order TL kinetic expressions are based on a number of assumptions, including the usually unstated assumption that charges released from one type of trap, giving rise to one glow peak, are not retrapped on other types of traps, associated with other glow peaks. Equations have been developed describing TL systems in which charges released from one type of trap may be retrapped in other types of traps. Called interactive kinetic equations, they are quite simple in form but must be studied by numerical methods. In particular, glow curves computed from the interactive kinetic equations have been regarded as data and analyzed by fitting them to the usual first and second order kinetic expressions. All of the anomalous features described above are reproduced. For example, the computed glow peaks are usually well fitted by the first and second order expressions over their upper 60 to 80% but not in the wings. This explains why the usual analysis methods, especially those utilizing peak temperature, full width, etc., appear to describe such peaks. Unrealistic kinetic parameters are often obtained, however. Furthermore, the computed glow curves often reproduce the observed dependence on dose.
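
    For contrast with the interactive equations, the standard non-interactive first-order (Randall-Wilkins) glow peak can be evaluated numerically for a linear heating ramp; the trap parameters below are illustrative, not fitted to any measured curve:

```python
import numpy as np

# First-order (Randall-Wilkins) glow-peak shape for a linear heating ramp:
#   I(T) ~ s * exp(-E/kT) * exp(-(s/beta) * integral_T0^T exp(-E/kT') dT')
# Illustrative parameters; real fits would vary E and s per trap type.
K_B = 8.617e-5                 # Boltzmann constant, eV/K
E, s, beta = 1.0, 1e12, 1.0    # trap depth (eV), frequency factor (1/s), ramp (K/s)

T = np.linspace(300.0, 500.0, 2001)
boltz = s * np.exp(-E / (K_B * T))
integral = np.cumsum(boltz) * (T[1] - T[0]) / beta   # rectangle-rule integral
intensity = boltz * np.exp(-integral)

print(round(T[np.argmax(intensity)], 1), "K peak temperature")
```

    Fitting curves like this one to data generated by the interactive equations is exactly the exercise the abstract describes: the fit looks good over the upper part of the peak yet returns distorted E and s values.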

  19. Exact solutions of magnetohydrodynamics for describing different structural disturbances in solar wind

    NASA Astrophysics Data System (ADS)

    Grib, S. A.; Leora, S. N.

    2016-03-01

    We use analytical methods of magnetohydrodynamics to describe the behavior of cosmic plasma. This approach makes it possible to describe different structural fields of disturbances in solar wind: shock waves, direction discontinuities, magnetic clouds and magnetic holes, and their interaction with each other and with the Earth's magnetosphere. We note that the wave problems of solar-terrestrial physics can be efficiently solved by the methods designed for solving classical problems of mathematical physics. We find that the generalized Riemann solution particularly simplifies the consideration of secondary waves in the magnetosheath and makes it possible to describe in detail the classical solutions of boundary value problems. We consider the appearance of a fast compression wave in the Earth's magnetosheath, which is reflected from the magnetosphere and can overturn nonlinearly to generate a back shock wave. We propose a new mechanism for the formation of a plateau with protons of increased density and a magnetic field trough in the magnetosheath due to slow secondary shock waves. Most of our findings are confirmed by direct observations conducted on spacecraft (WIND, ACE, Geotail, Voyager-2, SDO and others).

  20. Ligand-Induced Protein Responses and Mechanical Signal Propagation Described by Linear Response Theories

    PubMed Central

    Yang, Lee-Wei; Kitao, Akio; Huang, Bang-Chieh; Gō, Nobuhiro

    2014-01-01

    In this study, a general linear response theory (LRT) is formulated to describe time-dependent and -independent protein conformational changes upon CO binding with myoglobin. Using the theory, we are able to monitor protein relaxation in two stages. The slower relaxation is found to occur from 4.4 to 81.2 picoseconds, and the time constants characterized for a couple of aromatic residues agree with those observed by UV Resonance Raman (UVRR) spectrometry and time-resolved x-ray crystallography. The faster “early responses”, triggered as early as 400 femtoseconds, can be best described by the theory when impulse forces are used. The newly formulated theory describes the mechanical propagation following ligand binding as a function of time, space and the type of perturbation force. The “disseminators”, defined as the residues that, when perturbed, propagate signals throughout the molecule fastest among all residues in the protein, are found to be evolutionarily conserved, and mutations of these residues have been shown to largely change the CO rebinding kinetics in myoglobin. PMID:25229149

  1. A High-Order Accurate Parallel Solver for Maxwell's Equations on Overlapping Grids

    SciTech Connect

    Henshaw, W D

    2005-09-23

    A scheme for the solution of the time dependent Maxwell's equations on composite overlapping grids is described. The method uses high-order accurate approximations in space and time for Maxwell's equations written as a second-order vector wave equation. High-order accurate symmetric difference approximations to the generalized Laplace operator are constructed for curvilinear component grids. The modified equation approach is used to develop high-order accurate approximations that use only three time levels and have the same time-stepping restriction as the second-order scheme. Discrete boundary conditions for perfect electrical conductors and for material interfaces are developed and analyzed. The implementation is optimized for component grids that are Cartesian, resulting in a fast and efficient method. The solver runs on parallel machines with each component grid distributed across one or more processors. Numerical results in two and three dimensions are presented for the fourth-order accurate version of the method. These results demonstrate the accuracy and efficiency of the approach.
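
    The three-time-level structure the abstract refers to is easiest to see in the basic second-order scheme that the modified equation approach then extends. A 1-D sketch for the scalar wave equation u_tt = c²u_xx with fixed ends (a stand-in for the paper's vector wave equation on overlapping grids; grid sizes are illustrative):

```python
import numpy as np

# Three-time-level, second-order update  u^{n+1} = 2u^n - u^{n-1} + r^2 * D2 u^n
# for the 1-D wave equation with homogeneous Dirichlet (PEC-like) ends.
c, nx = 1.0, 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / c                       # CFL number 0.5
r2 = (c * dt / dx) ** 2

u_prev = np.sin(np.pi * x)              # u(x, 0) = sin(pi x), u_t(x, 0) = 0
# start-up step from a Taylor expansion: u(dt) = u0 + (c dt)^2/2 * u0_xx
u = u_prev + 0.5 * r2 * (np.roll(u_prev, -1) - 2 * u_prev + np.roll(u_prev, 1))
u[0] = u[-1] = 0.0

steps = 400
for _ in range(steps):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    u_next = 2 * u - u_prev + r2 * lap
    u_next[0] = u_next[-1] = 0.0
    u_prev, u = u, u_next

t = (steps + 1) * dt
exact = np.sin(np.pi * x) * np.cos(np.pi * c * t)
print("max error:", float(np.max(np.abs(u - exact))))
```

    The exact standing-wave solution gives a direct accuracy check; refining dx and dt together shows the second-order convergence that the fourth-order modified-equation variant improves on.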

  2. Accurate Cell Division in Bacteria: How Does a Bacterium Know Where its Middle Is?

    NASA Astrophysics Data System (ADS)

    Howard, Martin; Rutenberg, Andrew

    2004-03-01

    I will discuss the physical principles underlying the acquisition of accurate positional information in bacteria. A good application of these ideas is to the rod-shaped bacterium E. coli, which divides precisely at its cellular midplane. This positioning is controlled by the Min system of proteins. These proteins coherently oscillate from end to end of the bacterium. I will present a reaction-diffusion model that describes the diffusion of the Min proteins and their binding/unbinding from the cell membrane. The system possesses an instability that spontaneously generates the Min oscillations, which control accurate placement of the midcell division site. I will then discuss the role of fluctuations in protein dynamics, and investigate whether fluctuations set optimal protein concentration levels. Finally I will examine cell division in a different bacterium, B. subtilis, where different physical principles are used to regulate accurate cell division. See: Howard, Rutenberg, de Vet: Dynamic compartmentalization of bacteria: accurate division in E. coli. Phys. Rev. Lett. 87 278102 (2001). Howard, Rutenberg: Pattern formation inside bacteria: fluctuations due to the low copy number of proteins. Phys. Rev. Lett. 90 128102 (2003). Howard: A mechanism for polar protein localization in bacteria. J. Mol. Biol. 335 655-663 (2004).

  3. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle’s accurate position, azimuth and attitude rapidly is significant for vehicle based weapons’ combat effectiveness. In this paper, a new approach to acquiring a vehicle’s accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle’s accurate position, azimuth and attitude from the rough position and orientation provided by vehicle based navigation systems and no less than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS), thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm’s iterative initial value; consequently, it does not place high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high precision applications. This paper describes the system’s working procedure, presents a theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249
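
    The geometric core of such a fix can be illustrated in two dimensions: each measured azimuth to a known target constrains the observer's position to a line, and two or more targets give a least-squares solution. This is a hypothetical 2-D sketch (the paper's BODP method solves the full 3-D position, azimuth and attitude problem; target coordinates are made up):

```python
import numpy as np

# Each azimuth theta_i (measured from north) to a known target t_i puts the
# unknown position p on the line  cos(theta_i)*(t_ix - p_x) = sin(theta_i)*(t_iy - p_y),
# which is linear in p, so >= 2 targets give a least-squares position fix.
def position_from_azimuths(targets, azimuths):
    t = np.asarray(targets, dtype=float)
    th = np.asarray(azimuths, dtype=float)
    A = np.column_stack([np.cos(th), -np.sin(th)])
    b = np.cos(th) * t[:, 0] - np.sin(th) * t[:, 1]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

targets = [(500.0, 200.0), (100.0, 600.0), (500.0, 600.0)]
true_p = np.array([100.0, 200.0])
# simulate perfect azimuth measurements from the true position
az = [np.arctan2(tx - true_p[0], ty - true_p[1]) for tx, ty in targets]
print(np.round(position_from_azimuths(targets, az), 3))  # [100. 200.]
```

    In the real system, pitch angles and the rough INS solution enter as well, and the problem is solved iteratively rather than in one linear step.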

  4. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle's accurate position, azimuth and attitude rapidly is significant for vehicle based weapons' combat effectiveness. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth and attitude from the rough position and orientation provided by vehicle based navigation systems and no less than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS), thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not place high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high precision applications. This paper describes the system's working procedure, presents a theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249

  5. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of the second-order time accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  6. Fast and accurate line scanner based on white light interferometry

    NASA Astrophysics Data System (ADS)

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

    White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces.
We therefore propose an alternate geometry where the incident light is

  7. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of the laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvanometer scanner mirrors. The system improves resolution in the light axis (namely z) direction because of the confocal optics. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse and fine grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. Results of the measurements show that reduction values of the aperture differ from part to part due to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation under different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
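
    The spectral analysis step can be sketched with a synthetic profile sampled at the study's 2.5 μm spacing; the height data here are made up (a long-wavelength waviness plus fine texture), not CLSM measurements:

```python
import numpy as np

# FFT of a synthetic fracture-roughness profile sampled at 2.5 um spacing.
dx = 2.5e-6                          # sampling interval, m
x = np.arange(4096) * dx
# low-frequency waviness (1 mm wavelength) plus fine high-frequency texture
h = 5e-6 * np.sin(2 * np.pi * x / 1e-3) + 0.2e-6 * np.sin(2 * np.pi * x / 2e-5)

spectrum = np.abs(np.fft.rfft(h - h.mean()))
freqs = np.fft.rfftfreq(len(h), d=dx)        # spatial frequency, cycles/m
dominant = freqs[np.argmax(spectrum)]
print(round(1.0 / dominant * 1e3, 2), "mm wavelength dominates")  # 1.02 mm
```

    As in the study, the amplitude spectrum is dominated by the low-frequency component even though the high-frequency texture is present everywhere in the profile.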

  8. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1992-01-01

    The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second order accurate in the smooth parts of the solution, except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second or third order accuracy, are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state of the art methods.
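
    The MUSCL construction being generalized here can be sketched in a few lines. This is a simplified minmod-limited first step of the scheme family, not Huynh's upwind-monotone method itself; grid and CFL values are illustrative:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: the smaller-magnitude argument when signs agree, else 0."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, cfl):
    """One forward-Euler step of MUSCL-type upwind advection (speed > 0) with
    a minmod-limited piecewise-linear reconstruction on a periodic grid."""
    slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    u_face = u + 0.5 * slope          # upwind interface value at i + 1/2
    flux = cfl * u_face
    return u - (flux - np.roll(flux, 1))

u = np.zeros(50)
u[10:20] = 1.0                        # square pulse
u_new = muscl_step(u, cfl=0.5)
print(float(u_new.min()), float(u_new.max()))  # 0.0 1.0 -- no new extrema
```

    The limiter zeroing the slope at extrema and discontinuities is precisely what keeps the scheme oscillation-free and what drops the accuracy to first order there, the behavior the paper's upwind-monotone schemes are designed to improve.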

  9. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

    This paper derives accurate equations for the normal and shear force as well as bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.

  10. Must Kohn-Sham oscillator strengths be accurate at threshold?

    SciTech Connect

    Yang Zenghui; Burke, Kieron; Faassen, Meta van

    2009-09-21

    The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.

  11. Determination of accurate dissociation limits and interatomic interactions at large internuclear distances

    NASA Astrophysics Data System (ADS)

    Stwalley, W. C.; Verma, K. K.; Rajaei-Rizi, A.; Bahns, J. T.; Harding, D. R.

    This paper illustrates (using the molecules LiH, Li2 and Na2) how laser-induced fluorescence can be used to greatly expand the range of observed vibrational levels in ground electronic states. This expanded vibrational range leads to the determination of virtually the full well of the potential energy curve. This also leads to improved determination of the dissociation limit and serves as a severe test for highly accurate ab initio calculations now available for many small molecules.

  12. Determination of accurate dissociation limits and interatomic interactions at large internuclear distances

    NASA Technical Reports Server (NTRS)

    Stwalley, W. C.; Verma, K. K.; Rajaei-Rizi, A.; Bahns, J. T.; Harding, D. R.

    1982-01-01

    This paper illustrates (using the molecules LiH, Li2 and Na2) how laser-induced fluorescence can be used to greatly expand the range of observed vibrational levels in ground electronic states. This expanded vibrational range leads to the determination of virtually the full well of the potential energy curve. This also leads to improved determination of the dissociation limit and serves as a severe test for highly accurate ab initio calculations now available for many small molecules.

  13. The Concerned Observer Experiment.

    ERIC Educational Resources Information Center

    Rabiger, Michael

    1991-01-01

    Describes a classroom experiment--the "concerned observer" experiment--for production students that dramatizes basic film language by relating it to several levels of human observation. Details the experiment's three levels, and concludes that film language mimics wide-ranging states of human emotion and ideological persuasion. (PRA)

  14. Generating Accurate Urban Area Maps from Nighttime Satellite (DMSP/OLS) Data

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc; Lawrence, William; Elvidge, Christopher

    2000-01-01

    There has been increasing interest in the international research community in using the nighttime-acquired "city lights" data sets collected by the US Defense Meteorological Satellite Program's Operational Linescan System to study issues related to urbanization. Many researchers are interested in using these data to estimate human demographic parameters over large areas and then characterize the interactions between urban development, natural ecosystems, and other aspects of the human enterprise. Many of these attempts rely on an ability to accurately identify urbanized area. However, beyond the simple determination of the loci of human activity, using these data to generate accurate estimates of urbanized area can be problematic. Sensor blooming and registration error can cause large overestimates of urban land based on a simple measure of lit area from the raw data. We discuss these issues, show results of an attempt to model historical urban growth in Egypt, and then describe a few basic processing techniques that use geo-spatial analysis to threshold the DMSP data to accurately estimate urbanized areas. Algorithm results are shown for the United States, and an application using the data to estimate the impact of urban sprawl on sustainable agriculture in the US and China is described.
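
    The basic thresholding operation underlying such processing can be sketched as follows. The grid is synthetic, the threshold is arbitrary, and the per-pixel area is a rough illustrative value (real analyses derive the threshold from geospatial ground truth precisely to counter blooming):

```python
import numpy as np

PIXEL_KM2 = 0.86   # rough area of one ~30 arc-second cell near the equator (illustrative)

def urban_area_km2(dn_grid, threshold):
    """Count pixels at or above a digital-number threshold and convert to area."""
    return float((dn_grid >= threshold).sum()) * PIXEL_KM2

rng = np.random.default_rng(0)
dn = rng.integers(0, 64, size=(100, 100))   # synthetic DN grid, 0-63 range
print(round(urban_area_km2(dn, threshold=50), 1), "km^2 lit above threshold")
```

    Raising the threshold shrinks the mapped urban footprint, which is why an unthresholded lit-area count overstates urban land when blooming spreads light into surrounding pixels.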

  15. Earth Observation

    NASA Technical Reports Server (NTRS)

    1994-01-01

    For pipeline companies, mapping, facilities inventory, pipe inspection, environmental reporting, etc., are a monumental task. An Automated Mapping/Facilities Management/Geographic Information System (AM/FM/GIS) is the solution. However, this is costly and time consuming. James W. Sewall Company, an AM/FM/GIS consulting firm, proposed an EOCAP project to Stennis Space Center (SSC) to develop a computerized system for storage and retrieval of digital aerial photography. This would provide its customer, Algonquin Gas Transmission Company, with an accurate inventory of rights-of-way locations and pipeline surroundings. The project took four years to complete, and an important byproduct was SSC's Digital Aerial Rights-of-Way Monitoring System (DARMS). DARMS saves substantial time and money. EOCAP enabled Sewall to develop new products and expand its customer base. Algonquin now manages regulatory requirements more efficiently and accurately. EOCAP provides government co-funding to encourage private investment in and broader use of NASA remote sensing technology. Because changes on Earth's surface are accelerating, planners and resource managers must assess the consequences of change as quickly and accurately as possible. Pacific Meridian Resources and NASA's Stennis Space Center (SSC) developed a system for monitoring changes in land cover and use, which incorporated the latest change detection technologies. The goal of this EOCAP project was to tailor existing technologies to a system that could be commercialized. Landsat imagery enabled Pacific Meridian to identify areas that had sustained substantial vegetation loss. The project was successful, and Pacific Meridian's annual revenues have substantially increased. EOCAP provides government co-funding to encourage private investment in and broader use of NASA remote sensing technology.

  16. Monitoring circuit accurately measures movement of solenoid valve

    NASA Technical Reports Server (NTRS)

    Gillett, J. D.

    1966-01-01

    A monitoring circuit is used to accurately measure the travel of a solenoid-operated valve in a control system powered by direct current. This system is currently in operation with a 28-vdc power system used for control of fluids in liquid rocket motor test facilities.

  17. Second-order accurate nonoscillatory schemes for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1989-01-01

    Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is nonincreasing in time.
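    A common way to obtain a second-order accurate nonoscillatory scheme is to reconstruct limited slopes with the minmod function. The sketch below, for linear advection (u_t + u_x = 0 with periodic boundaries and CFL number c = dt/dx <= 1), is an illustrative MUSCL-type example of the technique, not necessarily the schemes of this report:

    ```python
    import numpy as np

    def minmod(a, b):
        """Return the smaller-magnitude argument when signs agree, else 0;
        this limiting prevents new extrema (oscillations) from forming."""
        return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def advect_step(u, c):
        """One step of a second-order limited upwind scheme for u_t + u_x = 0."""
        # Limited slope in each cell from the two one-sided differences
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        # Upwind interface value at i+1/2 (wave speed is positive)
        flux = u + 0.5 * (1.0 - c) * s
        # Conservative update: flux difference telescopes, so mass is conserved
        return u - c * (flux - np.roll(flux, 1))

    # Advect a step profile: the scheme should keep it within [0, 1]
    u0 = np.where(np.arange(20) < 10, 1.0, 0.0)
    u = u0.copy()
    for _ in range(5):
        u = advect_step(u, 0.5)
    ```

    Because the update is in conservation form, the discrete total of u is preserved exactly, and the minmod limiter keeps the step profile free of overshoots.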

  18. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  19. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    ERIC Educational Resources Information Center

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  20. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…