Sample records for timing correcting quantity

  1. Method and apparatus for providing pulse pile-up correction in charge quantizing radiation detection systems

    DOEpatents

    Britton, Jr., Charles L.; Wintenberg, Alan L.

    1993-01-01

    A radiation detection method and system for continuously correcting the quantization of detected charge during pulse pile-up conditions. Charge pulses from a radiation detector responsive to the energy of detected radiation events are converted, by means of a charge-sensitive preamplifier, to voltage pulses of predetermined shape whose peak amplitudes are proportional to the quantity of charge of each corresponding detected event. These peak amplitudes are sampled and stored sequentially in accordance with their respective times of occurrence. Based on the stored peak amplitudes and times of occurrence, a correction factor is generated which represents the fraction of a preceding pulse's influence on a following pulse's peak amplitude. This correction factor is subtracted from the following pulse's amplitude in a summing amplifier, whose output then represents the corrected charge-quantity measurement.
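    A minimal sketch of the correction idea described in this record, assuming an exponential shaping tail (the function names and the shape model are illustrative, not the patented circuit):

    ```python
    import numpy as np

    def corrected_amplitudes(peaks, times, tau):
        """Subtract each preceding pulse's residual tail from the next peak.

        peaks : measured peak amplitudes, ordered by time of occurrence
        times : corresponding times of occurrence
        tau   : decay constant of the (assumed exponential) pulse shaping
        The correction factor for pulse i is the fraction of pulse i-1
        still present at t_i, i.e. exp(-(t_i - t_{i-1}) / tau).
        """
        corrected = [peaks[0]]
        for i in range(1, len(peaks)):
            tail_fraction = np.exp(-(times[i] - times[i - 1]) / tau)
            corrected.append(peaks[i] - corrected[-1] * tail_fraction)
        return np.array(corrected)

    # Two pulses 1 microsecond apart with 2 microsecond shaping: the second
    # peak rides on ~61% of the first pulse's tail.
    print(corrected_amplitudes([1.0, 1.5], [0.0, 1.0e-6], 2.0e-6))
    ```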

  2. 75 FR 31332 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model EMB-120, -120ER...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-03

    ... correcting the fuel quantity indication system; as applicable. Compliance (f) You are responsible for having... correcting the fuel quantity indication system; as applicable. The MCAI does not provide a corrective action... unsafe condition as: It has been found that some fuel quantity probes may fail during the airplane life...

  3. Comment on ‘On the units radian and cycle for the quantity plane angle’

    NASA Astrophysics Data System (ADS)

    Leonard, B. P.

    2016-12-01

    In the referenced paper, Ian Mills addresses the confusion caused by the treatment of plane angle in the International System of Units (SI). As he points out, what the SI calls an ‘angle’ is not a dimensional physical quantity but, rather, the dimensionless numerical value of the angle when expressed in radians, thus creating widespread confusion regarding terminology and notation. For example, Mills shows that for the harmonic oscillator, if the conventional argument of the sinusoid represents an angle, it must be divided by a dimensional constant equal to one radian in order to correctly render it dimensionless, thereby greatly clarifying the notation. However, there is a problem with the author’s interpretation of frequency. Although, for uniform rotation, Mills correctly defines the revolution frequency as the number of complete revolutions, N, divided by the time interval, he takes the unit for N to be ‘cycle’ (which he defines as one revolution) rather than the correct unit: the number one. The unit for ‘frequency’ then appears to be ‘cycle per second’ (i.e. revolution per second), whereas it should be one per second, correctly called hertz. Thus Mills concludes that ‘frequency’ is the same physical quantity as angular velocity and calls for the ‘hertz’ to be redefined as 2π rad s⁻¹, a non-coherent derived unit for angular velocity. This misinterpretation of frequency corrupts the remainder of the author’s discussion of the examples considered. In my comment, I explain and correct these and related errors.
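    The unit bookkeeping at issue can be written out explicitly (a sketch of the standard SI relations under discussion, not Leonard's own notation):

    ```latex
    f = \frac{N}{\Delta t}, \qquad [N] = 1, \qquad [f] = \mathrm{s}^{-1} = \mathrm{Hz},
    \qquad \omega = 2\pi\,\mathrm{rad}\cdot f, \qquad [\omega] = \mathrm{rad\,s^{-1}},
    \qquad x(t) = A\,\sin\!\bigl((\omega t + \varphi_0)/(1\,\mathrm{rad})\bigr).
    ```

    Here the revolution count N carries the unit one, so f is in hertz, while the angular velocity ω is a distinct quantity in rad s⁻¹; in the oscillator argument, the angle is divided by the constant 1 rad to render it dimensionless.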

  4. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced sudden pixel sensitivity dropouts (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
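    A schematic of the pixel-level fit described above, assuming simple linear least squares per pixel (the array shapes and function are illustrative; this is not the Kepler pipeline code):

    ```python
    import numpy as np

    def fit_pixels_to_cbvs(pixel_ts, cbvs):
        """Fit each pixel's time series against co-trending basis vectors (CBVs).

        pixel_ts : (n_cadences, n_pixels) raw pixel time series
        cbvs     : (n_cadences, n_vectors) co-trending basis vectors
        Returns the per-pixel coefficients and the residual time series.
        """
        design = np.hstack([cbvs, np.ones((cbvs.shape[0], 1))])  # constant term
        coeffs, *_ = np.linalg.lstsq(design, pixel_ts, rcond=None)
        residuals = pixel_ts - design @ coeffs
        return coeffs, residuals

    # Synthetic check: 100 cadences, 5 pixels sharing 2 systematic trends.
    rng = np.random.default_rng(0)
    cbvs = rng.normal(size=(100, 2))
    pixels = cbvs @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(100, 5))
    coeffs, resid = fit_pixels_to_cbvs(pixels, cbvs)
    # Spatial maps of `coeffs` across the aperture can then be compared with
    # PRF spatial derivatives, as described in the abstract.
    ```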

  5. Loop corrections to primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Boran, Sibel; Kahya, E. O.

    2018-02-01

    We discuss quantum gravitational loop effects on observable quantities such as the curvature power spectrum and the primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for the ζζ correlator due to loop corrections. Then we investigate the effect of loop corrections on the primordial non-Gaussianity of the CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar-driven inflationary models.

  6. Fuel supply control method for internal combustion engines, with adaptability to various engines and controls therefor having different operating characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Otobe, Y.; Chikamatsu, M.

    1988-03-08

    A method of controlling the fuel supply to an internal combustion engine is described, wherein a quantity of fuel for supply to the engine is determined by correcting a basic value of the quantity of fuel determined as a function of at least one operating parameter of the engine by correction values dependent upon operating conditions of the engine and the determined quantity of fuel is supplied to the engine. The method comprises the steps of: (1) detecting a value of at least one predetermined operating parameter of the engine; (2) manually adjusting a single voltage creating means to set an output voltage therefrom to such a desired value as to compensate for deviation of the air/fuel ratio of a mixture supplied to the engine due to variations in operating characteristics of engines between different production lots or aging changes; (3) determining a value of the predetermined one correction value corresponding to the set desired value of output voltage of the single voltage creating means, and then modifying the thus determined value in response to the detected value of the predetermined at least one operating parameter of the engine during engine operation; and (4) correcting the basic value of the quantity of fuel by the value of the predetermined one correction value having the thus modified value, and the other correction values.
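    In outline, the claimed method multiplies a mapped base fuel quantity by several correction values, one of which is derived from the manually set trim voltage. A toy sketch, with the linear trim mapping and all names assumed purely for illustration:

    ```python
    def fuel_quantity(base_map, rpm, load, trim_voltage, coolant_corr, accel_corr):
        """Base fuel value from an rpm/load map, scaled by correction values.

        trim_voltage: output of the single manually adjusted voltage source;
        mapped here (illustratively) to a +/-10% air/fuel trim around 2.5 V.
        """
        ti_base = base_map[rpm][load]
        trim = 1.0 + 0.10 * (trim_voltage - 2.5) / 2.5   # assumed linear mapping
        return ti_base * trim * coolant_corr * accel_corr

    base_map = {2000: {0.5: 3.2}}   # injector open time in ms, illustrative
    print(fuel_quantity(base_map, 2000, 0.5,
                        trim_voltage=2.75, coolant_corr=1.05, accel_corr=1.0))
    ```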

  7. Input/output models for general aviation piston-prop aircraft fuel economy

    NASA Technical Reports Server (NTRS)

    Sweet, L. M.

    1982-01-01

    A fuel-efficient cruise performance model for general aviation piston-engine airplanes was tested. Models were derived for: (1) the standard atmosphere; (2) airframe-propeller-atmosphere cruise performance; and (3) naturally aspirated engine cruise performance. The compact cruise performance model uses corrected quantities, corrected performance plots, and algebraic equations; it maximizes R with or without constraints and appears suitable for airborne microprocessor implementation. The following hardware is recommended: ignition timing regulator, fuel-air mass ratio controller, microprocessor, sensors and displays.

  8. Finite coupling corrections to holographic predictions for hot QCD

    DOE PAGES

    Waeber, Sebastian; Schafer, Andreas; Vuorinen, Aleksi; ...

    2015-11-13

    Finite ’t Hooft coupling corrections to multiple physical observables in strongly coupled N=4 supersymmetric Yang-Mills plasma are examined, in an attempt to assess the stability of the expansion in inverse powers of the ’t Hooft coupling λ. Observables considered include thermodynamic quantities, transport coefficients, and quasinormal mode frequencies. Although large-λ expansions for quasinormal mode frequencies are notably less well behaved than the expansions of other quantities, we find that a partial resummation of higher-order corrections can significantly reduce the sensitivity of the results to the value of λ.

  9. Reliability of 3D laser-based anthropometry and comparison with classical anthropometry.

    PubMed

    Kuehnapfel, Andreas; Ahnert, Peter; Loeffler, Markus; Broda, Anja; Scholz, Markus

    2016-05-26

    Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in a short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as the indicator of agreement. BS was slightly more time-consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS, while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.
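    For two measurement series with means μx and μy, variances σx² and σy², and covariance σxy, the pairwise concordance correlation coefficient underlying the OCCC takes Lin's form (the overall coefficient generalizes this to more than two observers or repeated measurements):

    ```latex
    \rho_c \;=\; \frac{2\,\sigma_{xy}}{\sigma_x^{2} + \sigma_y^{2} + (\mu_x - \mu_y)^{2}}
    ```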

  10. Systolic VLSI Reed-Solomon Decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.

    1986-01-01

    Decoder for digital communications provides high-speed, pipelined Reed-Solomon (RS) error-correction decoding of data streams. Principal new feature of proposed decoder is modification of Euclid's greatest-common-divisor algorithm to avoid the need for time-consuming computations of inverses of certain Galois-field quantities. Decoder architecture suitable for implementation on very-large-scale integrated (VLSI) chips with negative-channel metal-oxide/silicon circuitry.
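    The inversion-avoiding idea can be illustrated on polynomials over GF(2⁴): instead of dividing by the leading coefficient (which requires a Galois-field inverse), cross-multiply both polynomials so the leading terms cancel. A toy sketch of that single step, not the patent's systolic array:

    ```python
    def gf16_mul(a, b, poly=0b10011):          # GF(2^4) with x^4 + x + 1
        """Carry-less multiply, reducing modulo the field polynomial."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0b10000:
                a ^= poly
            b >>= 1
        return r

    def eliminate_leading_term(a, b):
        """One inversionless division step on polynomials over GF(16).

        Polynomials are coefficient lists, highest degree first. Ordinary
        long division would compute lead(a)/lead(b); instead we form
        lead(b)*a - lead(a)*x^(deg a - deg b)*b, cancelling the leading
        term of a using only multiplications (subtraction is XOR here).
        """
        shift = len(a) - len(b)
        b_shifted = b + [0] * shift
        scaled_a = [gf16_mul(b[0], c) for c in a]
        scaled_b = [gf16_mul(a[0], c) for c in b_shifted]
        out = [x ^ y for x, y in zip(scaled_a, scaled_b)]
        return out[1:]                          # leading term is now zero

    # Degree drops from 2 to 1 without ever inverting a field element.
    print(eliminate_leading_term([3, 5, 6], [7, 2]))
    ```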

  11. Trust Based Evaluation of Wikipedia's Contributors

    NASA Astrophysics Data System (ADS)

    Krupa, Yann; Vercouter, Laurent; Hübner, Jomi Fred; Herzig, Andreas

    Wikipedia is an encyclopedia whose content anybody can change. Some users, self-proclaimed "patrollers", regularly check recent changes in order to delete or correct those which damage articles' integrity. The huge quantity of updates leads some articles to remain polluted for a certain time before being corrected. In this work, we show how a multiagent trust model can help patrollers in their task of monitoring Wikipedia. To direct the patrollers' verification towards suspicious contributors, our work relies on a formalisation of Castelfranchi & Falcone's social trust theory to assist them by representing their trust model in a cognitive way.

  12. Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation

    DOE PAGES

    Burgess, C. P.; Holman, R.; Tasinato, G.

    2016-01-26

    Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems, with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift and so is IR finite as well. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H⁴ at late times and so does not generate a dramatic gravitational back-reaction.

  13. Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgess, C. P.; Holman, R.; Tasinato, G.

    Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems, with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift and so is IR finite as well. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H⁴ at late times and so does not generate a dramatic gravitational back-reaction.

  14. Empirical Storm-Time Correction to the International Reference Ionosphere Model E-Region Electron and Ion Density Parameterizations Using Observations from TIMED/SABER

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Winick, Jeremy R.; Russell, James M., III; Mlynczak, Martin G.; Evans, David S.; Bilitza, Dieter; Xu, Xiaojing

    2007-01-01

    The response of the ionospheric E-region to solar-geomagnetic storms can be characterized using observations of infrared 4.3 micrometer emission. In particular, we utilize nighttime TIMED/SABER measurements of broadband 4.3 micrometer limb emission and derive a new data product, the NO+(v) volume emission rate, which is our primary observation-based quantity for developing an empirical storm-time correction to the IRI E-region electron density. In this paper we describe our E-region proxy and outline our strategy for developing the empirical storm model. In our initial studies, we analyzed a six-day storm period during the Halloween 2003 event. The results of this analysis are promising and suggest that the ap-index is a viable candidate to use as a magnetic driver for our model.

  15. Altimeter waveform software design

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.; Miller, L. S.; Brown, G. S.

    1977-01-01

    Techniques are described for preprocessing raw return waveform data from the GEOS-3 radar altimeter. Topics discussed include: (1) general altimeter data preprocessing to be done at the GEOS-3 Data Processing Center to correct altimeter waveform data for temperature calibrations, to convert between engineering and final data units, and to convert telemetered parameter quantities to more appropriate final data distribution values; (2) time "tagging" of altimeter return waveform data quantities to compensate for various delays, misalignments and calculational intervals; (3) data processing procedures for use in estimating spacecraft attitude from altimeter waveform sampling gates; and (4) feasibility of use of a ground-based reflector or transponder to obtain in-flight calibration information on GEOS-3 altimeter performance.

  16. Weighting factors for radiation quality: how to unite the two current concepts.

    PubMed

    Kellerer, Albrecht M

    2004-01-01

    The quality factor, Q(L), used to be the universal weighting factor to account for radiation quality, until--in its 1991 Recommendations--the ICRP established a dichotomy between 'computable' and 'measurable' quantities. The new concept of the radiation weighting factor, w_R, was introduced for use with the 'computable' quantities, such as the effective dose, E. At the same time, the application of Q(L) was restricted to 'measurable' quantities, such as the operational quantities ambient dose equivalent or personal dose equivalent. The result has been a dual system of incoherent dosimetric quantities. The most conspicuous inconsistency resulted for neutrons, for which the new concept of w_R had been primarily designed. While its definition requires an accounting for the gamma rays produced by neutron capture in the human body, this effect is not adequately reflected in the numerical values of w_R, which are now suitable for mice, but are--at energies of the incident neutrons below 1 MeV--conspicuously too large for man. A recent report (ICRP Publication 92) has developed a proposal to correct the current imbalance and to define a linkage between the concepts Q(L) and w_R. The proposal is here considered within a broader assessment of the rationale that led to the current dual system of dosimetric quantities.

  17. Generalized algebraic scene-based nonuniformity correction algorithm.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2005-02-01

    A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
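    A much-simplified one-dimensional version of the algebraic idea, assuming a known one-pixel horizontal shift between the frame pair (the published algorithm handles arbitrary one- and two-dimensional translations and error accumulation):

    ```python
    import numpy as np

    def estimate_bias_from_pair(frame_a, frame_b):
        """Estimate the FPA bias pattern from two frames of the same scene
        related by a one-pixel horizontal shift.

        frame_a[:, x+1] and frame_b[:, x] view the same scene sample, so
        their difference isolates b[:, x+1] - b[:, x]; a cumulative sum
        recovers the bias pattern up to one unknown offset per row.
        """
        diff = frame_a[:, 1:] - frame_b[:, :-1]     # b[:, x+1] - b[:, x]
        zeros = np.zeros((frame_a.shape[0], 1))
        return np.cumsum(np.hstack([zeros, diff]), axis=1)
    ```

    Subtracting the estimated bias field from subsequent frames then removes the fixed-pattern nonuniformity without any blackbody or shutter reference.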

  18. Quantity Discrimination in Wolves (Canis lupus).

    PubMed

    Utrata, Ewelina; Virányi, Zsófia; Range, Friederike

    2012-01-01

    Quantity discrimination has been studied extensively in different non-human animal species. In the current study, we tested 11 hand-raised wolves (Canis lupus) in a two-way choice task. We placed a number of food items (one to four) sequentially into two opaque cans and asked the wolves to choose the larger amount. Moreover, we conducted two additional control conditions to rule out non-numerical properties of the presentation that the animals might have used to make the correct choice. Our results showed that wolves are able to make quantitative judgments at the group, but also at the individual level even when alternative strategies such as paying attention to the surface area or time and total amount are ruled out. In contrast to previous canine studies on dogs (Canis familiaris) and coyotes (Canis latrans), our wolves' performance did not improve with decreasing ratio, referred to as Weber's law. However, further studies using larger quantities than we used in the current set-up are still needed to determine whether and when wolves' quantity discrimination conforms to Weber's law.

  19. Quantity Discrimination in Wolves (Canis lupus)

    PubMed Central

    Utrata, Ewelina; Virányi, Zsófia; Range, Friederike

    2012-01-01

    Quantity discrimination has been studied extensively in different non-human animal species. In the current study, we tested 11 hand-raised wolves (Canis lupus) in a two-way choice task. We placed a number of food items (one to four) sequentially into two opaque cans and asked the wolves to choose the larger amount. Moreover, we conducted two additional control conditions to rule out non-numerical properties of the presentation that the animals might have used to make the correct choice. Our results showed that wolves are able to make quantitative judgments at the group, but also at the individual level even when alternative strategies such as paying attention to the surface area or time and total amount are ruled out. In contrast to previous canine studies on dogs (Canis familiaris) and coyotes (Canis latrans), our wolves’ performance did not improve with decreasing ratio, referred to as Weber’s law. However, further studies using larger quantities than we used in the current set-up are still needed to determine whether and when wolves’ quantity discrimination conforms to Weber’s law. PMID:23181044

  20. Expressions for IAU 2000 precession quantities

    NASA Astrophysics Data System (ADS)

    Capitaine, N.; Wallace, P. T.; Chapront, J.

    2003-12-01

    A new precession-nutation model for the Celestial Intermediate Pole (CIP) was adopted by the IAU in 2000 (Resolution B1.6). The model, designated IAU 2000A, includes a nutation series for a non-rigid Earth and corrections for the precession rates in longitude and obliquity. The model also specifies numerical values for the pole offsets at J2000.0 between the mean equatorial frame and the Geocentric Celestial Reference System (GCRS). In this paper, we discuss precession models consistent with IAU 2000A precession-nutation (i.e. MHB 2000, provided by Mathews et al. 2002) and we provide a range of expressions that implement them. The final precession model, designated P03, is a possible replacement for the precession component of IAU 2000A, offering improved dynamical consistency and a better basis for future improvement. As a preliminary step, we present our expressions for the currently used precession quantities ζ_A, θ_A, z_A, in agreement with the MHB corrections to the precession rates, that appear in the IERS Conventions 2000. We then discuss a more sophisticated method for improving the precession model of the equator in order that it be compliant with the IAU 2000A model. In contrast to the first method, which is based on corrections to the t terms of the developments for the precession quantities in longitude and obliquity, this method also uses corrections to their higher-degree terms. It is essential that this be used in conjunction with an improved model for the ecliptic precession, which is expected, given the known discrepancies in the IAU 1976 expressions, to contribute in a significant way to these higher-degree terms. With this aim in view, we have developed new expressions for the motion of the ecliptic with respect to the fixed ecliptic using the developments from Simon et al. (1994) and Williams (1994) and with improved constants fitted to the most recent numerical planetary ephemerides. We have then used these new expressions for the ecliptic together with the MHB corrections to precession rates to solve the precession equations, providing a new solution for the precession of the equator that is dynamically consistent and compliant with IAU 2000. A number of perturbing effects have first been removed from the MHB estimates in order to get the physical quantities needed in the equations as integration constants. The equations have then been solved in a similar way to Lieske et al. (1977) and Williams (1994), based on similar theoretical expressions for the contributions to precession rates, revised by using MHB values. Once improved expressions have been obtained for the precession of the ecliptic and the equator, we discuss the most suitable precession quantities to be considered in order to be based on the minimum number of variables and to be the best adapted to the most recent models and observations. Finally we provide developments for these quantities, denoted the P03 solution, including a revised Sidereal Time expression.

  1. A Gas Dynamics Method Based on The Spectral Deferred Corrections (SDC) Time Integration Technique and The Piecewise Parabolic Method (PPM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samet Y. Kadioglu

    2011-12-01

    We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge-averaged quantities, which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al. in [17]. However, [17] is problematic when it is applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows, and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that the fourth order of accuracy in both space and time has been achieved when the flow is smooth. Results also demonstrate the shock-capturing ability of the method.
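    For the time-integration half of the scheme, a generic explicit SDC step on an ODE shows the structure: a low-order provisional solution followed by correction sweeps that use spectral quadrature of the residual (a sketch of textbook SDC, not the paper's coupled PPM solver):

    ```python
    import numpy as np

    def lagrange_integration_matrix(nodes):
        """S[m, j] = integral of the j-th Lagrange basis polynomial over
        [nodes[m], nodes[m+1]]."""
        M = len(nodes)
        S = np.zeros((M - 1, M))
        for j in range(M):
            others = np.delete(nodes, j)
            coeffs = np.poly(others) / np.prod(nodes[j] - others)
            antideriv = np.polyint(coeffs)
            for m in range(M - 1):
                S[m, j] = np.polyval(antideriv, nodes[m + 1]) - np.polyval(antideriv, nodes[m])
        return S

    def sdc_step(f, t0, y0, dt, n_sweeps=3):
        """One SDC step for y' = f(t, y) on [t0, t0+dt] with 3 Lobatto nodes.

        Each correction sweep raises the formal order by one, up to the
        order of the underlying quadrature (fourth order here)."""
        tau = np.array([0.0, 0.5, 1.0])            # Gauss-Lobatto nodes on [0, 1]
        t = t0 + dt * tau
        S = dt * lagrange_integration_matrix(tau)
        M = len(t)

        # Provisional solution: forward Euler between nodes.
        y = np.zeros(M)
        y[0] = y0
        for m in range(M - 1):
            y[m + 1] = y[m] + (t[m + 1] - t[m]) * f(t[m], y[m])

        # Correction sweeps using the spectral integral of the old solution.
        for _ in range(n_sweeps):
            fk = np.array([f(t[m], y[m]) for m in range(M)])
            y_new = np.zeros(M)
            y_new[0] = y0
            for m in range(M - 1):
                y_new[m + 1] = (y_new[m]
                                + (t[m + 1] - t[m]) * (f(t[m], y_new[m]) - fk[m])
                                + S[m] @ fk)
            y = y_new
        return y[-1]

    # Integrate y' = -y from t=0 to 1 and compare with exp(-1).
    y_end = 1.0
    for i in range(10):
        y_end = sdc_step(lambda t, y: -y, 0.1 * i, y_end, 0.1)
    print(y_end, np.exp(-1.0))
    ```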

  2. Driven Langevin systems: fluctuation theorems and faithful dynamics

    NASA Astrophysics Data System (ADS)

    Sivak, David; Chodera, John; Crooks, Gavin

    2014-03-01

    Stochastic differential equations of motion (e.g., Langevin dynamics) provide a popular framework for simulating molecular systems. Any computational algorithm must discretize these equations, yet the resulting finite time step integration schemes suffer from several practical shortcomings. We show how any finite time step Langevin integrator can be thought of as a driven, nonequilibrium physical process. Amended by an appropriate work-like quantity (the shadow work), nonequilibrium fluctuation theorems can characterize or correct for the errors introduced by the use of finite time steps. We also quantify, for the first time, the magnitude of deviations between the sampled stationary distribution and the desired equilibrium distribution for equilibrium Langevin simulations of solvated systems of varying size. We further show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
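    As a concrete example of the kind of finite-time-step integrator being analyzed, here is a generic BAOAB-splitting Langevin step (one common splitting, not necessarily the particular one the authors recommend):

    ```python
    import numpy as np

    def baoab_step(x, v, force, dt, m=1.0, gamma=1.0, kT=1.0,
                   rng=np.random.default_rng()):
        """One BAOAB step: B = half kick, A = half drift, O = exact
        Ornstein-Uhlenbeck velocity update (where thermostat noise enters).
        Finite-dt discretization error is what a shadow-work analysis
        quantifies and fluctuation theorems can correct for."""
        v += 0.5 * dt * force(x) / m                        # B
        x += 0.5 * dt * v                                   # A
        c = np.exp(-gamma * dt)                             # O
        v = c * v + np.sqrt(kT / m * (1.0 - c * c)) * rng.normal()
        x += 0.5 * dt * v                                   # A
        v += 0.5 * dt * force(x) / m                        # B
        return x, v

    # Harmonic well: the sampled position variance should approach kT/k,
    # with a discretization bias that shrinks as dt decreases.
    x, v, k = 0.0, 0.0, 4.0
    samples = []
    for _ in range(200_000):
        x, v = baoab_step(x, v, lambda q: -k * q, dt=0.05)
        samples.append(x)
    print(np.var(samples), 1.0 / k)
    ```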

  3. Impacts of Satellite Orbit and Clock on Real-Time GPS Point and Relative Positioning.

    PubMed

    Shi, Junbo; Wang, Gaojing; Han, Xianquan; Guo, Jiming

    2017-06-12

    Satellite orbit and clock corrections are always treated as known quantities in GPS positioning models. Therefore, any error in the satellite orbit and clock products will probably cause significant consequences for GPS positioning, especially for real-time applications. Currently three types of satellite products have been made available for real-time positioning, including the broadcast ephemeris, the International GNSS Service (IGS) predicted ultra-rapid product, and the real-time product. In this study, these three predicted/real-time satellite orbit and clock products are first evaluated with respect to the post-mission IGS final product, which demonstrates cm to m level orbit accuracies and sub-ns to ns level clock accuracies. Impacts of real-time satellite orbit and clock products on GPS point and relative positioning are then investigated using the P3 and GAMIT software packages, respectively. Numerical results show that the real-time satellite clock corrections affect the point positioning more significantly than the orbit corrections. On the contrary, only the real-time orbit corrections impact the relative positioning. Compared with the positioning solution using the IGS final product with the nominal orbit accuracy of ~2.5 cm, the real-time broadcast ephemeris with ~2 m orbit accuracy provided <2 cm relative positioning error for baselines no longer than 216 km. As for the baselines ranging from 574 to 2982 km, the cm-dm level positioning error was identified for the relative positioning solution using the broadcast ephemeris. The real-time product could result in <5 mm relative positioning accuracy for baselines within 2982 km, slightly better than the predicted ultra-rapid product.

  4. Corrigendum to "New Approaches to Inferences for Steep-Sided Domes on Venus" [J. Volcanol. Geotherm. Res. 319 (2016) 93-105]

    NASA Technical Reports Server (NTRS)

    Quick, Lynnae C.; Glaze, Lori S.; Baloga, Stephen M.; Stofan, Ellen R.

    2017-01-01

    A typographical error contained in Quick et al. (2016) indicates the incorrect units for the value of the combined quantity (r_o h_o^3) that is the basis of Figs. 5, 6, and 7, and Tables 2 and 3. Using the values of r_o and h_o provided in Table 2, it can easily be shown that the combined quantity is correctly stated as r_o h_o^3 = 0.617 km^4. As correctly stated in Quick et al. (2016), the combined quantity r_o h_o^3 determines the family of curves shown in Fig. 5. The derivation of this relationship is shown in the corrigendum for completeness. Note that all results as reported in Quick et al. (2016) remain unchanged.

  5. Time scales in the context of general relativity.

    PubMed

    Guinot, Bernard

    2011-10-28

    Towards 1967, the accuracy of caesium frequency standards reached such a level that relativistic effects could not be ignored any more. Corrections began to be applied for the gravitational frequency shift and for distant time comparisons. However, these corrections were not applied within an explicit theoretical framework. Only in 1991 did the International Astronomical Union provide metrics (then improved in 2000) for a definition of space-time coordinates in reference systems centred at the barycentre of the Solar System and at the centre of mass of the Earth. In these systems, the temporal coordinates (coordinate times) can be realized on the basis of one of them, International Atomic Time (TAI), which is itself a realized time scale. The definition and the role of TAI in this context will be recalled. There remain controversies regarding the name to be given to the unit of coordinate times and to other quantities appearing in the theory. However, the idea that astrometry and celestial mechanics should adopt the usual metrological rules is progressing, together with the use of the International System of Units, among astronomers.

  6. Correcting a Widespread Error concerning the Angular Velocity of a Rotating Rigid Body.

    ERIC Educational Resources Information Center

    Leubner, C.

    1981-01-01

    Since many texts use an incorrect argument in obtaining the instantaneous velocity of a rotating body, a correct and concise derivation of this quantity for a rather general case is given. (Author/SK)

  7. DSCOVR EPIC L2 Atmospheric Correction (MAIAC) Data Release Announcement

    Atmospheric Science Data Center

    2018-06-22

    ... several atmospheric quantities including cloud mask and aerosol optical depth (AOD) required for atmospheric correction. The parameters ... is a useful complementary dataset to MODIS and VIIRS global aerosol products.   Information about the DSCOVR EPIC Atmospheric ...

  8. 49 CFR 385.17 - Change to safety rating based upon corrective actions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... in CMVs or placardable quantities of hazardous materials. (2) Within 45 days for all other motor... under subpart J of this part based on corrective action. [65 FR 50935, Aug. 22, 2000, as amended at 72...

  9. 49 CFR 385.17 - Change to safety rating based upon corrective actions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... in CMVs or placardable quantities of hazardous materials. (2) Within 45 days for all other motor... under subpart J of this part based on corrective action. [65 FR 50935, Aug. 22, 2000, as amended at 72...

  10. Gravitational radiation quadrupole formula is valid for gravitationally interacting systems

    NASA Technical Reports Server (NTRS)

    Walker, M.; Will, C. M.

    1980-01-01

    An argument is presented for the validity of the quadrupole formula for gravitational radiation energy loss in the far field of nearly Newtonian (e.g., binary stellar) systems. This argument differs from earlier ones in that it determines beforehand the formal accuracy of approximation required to describe gravitationally self-interacting systems, uses the corresponding approximate equation of motion explicitly, and evaluates the appropriate asymptotic quantities by matching along the correct space-time light cones.

  11. OBT analysis method using polyethylene beads for limited quantities of animal tissue.

    PubMed

    Kim, S B; Stuart, M

    2015-08-01

    This study presents a polyethylene beads method for OBT determination in animal tissues and animal products for cases where the amount of water recovered by combustion is limited by sample size or quantity. In the method, the amount of water recovered after combustion is enhanced by adding tritium-free polyethylene beads to the sample prior to combustion in an oxygen bomb. The method reduces process time by allowing the combustion water to be easily collected with a pipette. Sufficient water recovery was achieved using the polyethylene beads method when 2 g of dry animal tissue or animal product were combusted with 2 g of polyethylene beads. Correction factors, which account for the dilution due to the combustion water of the beads, are provided for beef, chicken, pork, fish and clams, as well as egg, milk and cheese. The method was tested by comparing its OBT results with those of the conventional method using animal samples collected on the Chalk River Laboratories (CRL) site. The results determined that the polyethylene beads method added no more than 25% uncertainty when appropriate correction factors are used. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
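    The dilution correction implied by the method is a simple mass balance over the combustion water (an assumed form for illustration; the paper tabulates specific factors per tissue and product type):

    ```python
    def obt_dilution_correction(water_from_sample_g, water_from_beads_g):
        """Correction factor for tritium activity diluted by tritium-free
        combustion water from the polyethylene beads.

        Multiplying the measured activity concentration by this factor
        recovers the undiluted sample value (assumed simple mass balance).
        """
        total = water_from_sample_g + water_from_beads_g
        return total / water_from_sample_g

    # e.g. 1.2 g of water recovered from tissue plus 2.6 g from the beads:
    print(obt_dilution_correction(1.2, 2.6))   # ~3.17x
    ```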

  12. Formation of hydrogen peroxide in the silver reductor: A micro-analytical method for iron

    USGS Publications Warehouse

    Fryling, C.F.; Tooley, F.V.

    1936-01-01

    1. An attempt to determine small quantities of iron by reduction with silver followed by titration with ceric sulfate revealed an error attributable to the formation of hydrogen peroxide in the reductor. 2. By conducting the reduction in an atmosphere of hydrogen, thereby decreasing the reductor correction, and applying a correction for the indicator, it was possible to determine quantities of iron of the order of 1.5 mg with a high degree of accuracy. 3. The method was found to be relatively rapid and not to require the use of large platinum dishes, thus possessing advantages of practical value.

  13. Time-dependent observables in heavy ion collisions. Part II. In search of pressure isotropization in the φ⁴ theory

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.; Wu, Bin

    2018-03-01

    To understand the dynamics of thermalization in heavy ion collisions in the perturbative framework, it is essential to first find corrections to the free-streaming classical gluon fields of the McLerran-Venugopalan model. The corrections that lead to deviations from free streaming (and that dominate at late proper time) would provide evidence for the onset of isotropization (and, possibly, thermalization) of the produced medium. To find such corrections we calculate the late-time two-point Green function and the energy-momentum tensor due to a single 2 → 2 scattering process involving two classical fields. To make the calculation tractable we employ the scalar φ⁴ theory instead of QCD. We compare our exact diagrammatic results for these quantities to those in kinetic theory and find disagreement between the two. The disagreement is in the dependence on the proper time τ and, for the case of the two-point function, is also in the dependence on the space-time rapidity η: the exact diagrammatic calculation is, in fact, consistent with the free-streaming scenario. Kinetic theory predicts a build-up of longitudinal pressure, which, however, is not observed in the exact calculation. We conclude that we find no evidence for the beginning of the transition from the free-streaming classical fields to the kinetic theory description of the produced matter after a single 2 → 2 rescattering.

  14. The idiosyncratic nature of confidence

    PubMed Central

    Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

    2017-01-01

    Confidence is the ‘feeling of knowing’ that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence. PMID:29152591

  15. Simplified solution for osculating Keplerian parameter corrections of GEO satellites for intersatellite optical link

    NASA Astrophysics Data System (ADS)

    Yılmaz, Umit C.; Cavdar, Ismail H.

    2015-04-01

    In intersatellite optical communication, the Pointing, Acquisition and Tracking (PAT) phase is one of the important phases that needs to be completed successfully before initiating communication. In this paper, we focused on correcting the possible errors on the Geostationary Earth Orbit (GEO) by using azimuth and elevation errors measured on the Low Earth Orbit (LEO) to GEO optical link during the PAT phase. To minimise the PAT duration, a simplified correction of longitude and inclination errors of the GEO satellite's osculating Keplerian parameters has been suggested. A simulation has been done considering the beaconless tracking and spiral-scanning technique. As a result, starting from the second day, we are able to reduce the uncertainty cone of the GEO satellite by about 200 μrad, if the values are larger than that quantity. The first day of the LEO-GEO links has been used to determine the parameters. Thanks to the corrections, the locking time onto the GEO satellite has been reduced, and more data can be transmitted to the GEO satellite.

  16. Achievement motivation and memory: achievement goals differentially influence immediate and delayed remember-know recognition memory.

    PubMed

    Murayama, Kou; Elliot, Andrew J

    2011-10-01

    Little research has been conducted on achievement motivation and memory and, more specifically, on achievement goals and memory. In the present research, the authors conducted two experiments designed to examine the influence of mastery-approach and performance-approach goals on immediate and delayed remember-know recognition memory. The experiments revealed differential effects for achievement goals over time: Performance-approach goals showed higher correct remember responding on an immediate recognition test, whereas mastery-approach goals showed higher correct remember responding on a delayed recognition test. Achievement goals had no influence on overall recognition memory and no consistent influence on know responding across experiments. These findings indicate that it is important to consider quality, not just quantity, in both motivation and memory, when studying relations between these constructs.

  17. SpcAudace: Spectroscopic processing and analysis package of Audela software

    NASA Astrophysics Data System (ADS)

    Mauclaire, Benjamin

    2017-11-01

    SpcAudace processes long-slit spectra with automated pipelines and performs astrophysical analysis of the resulting data. These powerful pipelines do all the required steps in one pass: standard preprocessing, masking of bad pixels, geometric corrections, registration, optimized spectrum extraction, wavelength calibration, and instrumental response computation and correction. Both high- and low-resolution long-slit spectra are managed for stellar and non-stellar targets. Many types of publication-quality figures can be easily produced: pdf and png plots or annotated time series plots. Astrophysical quantities can be derived from individual spectra or from large sets of spectra with advanced functions: from line profile characteristics to equivalent width and periodogram. More than 300 documented functions are available and can be used in Tcl scripts for automation. SpcAudace is based on the Audela open source software.

  18. COMPARISON OF EXPERIMENTS TO CFD MODELS FOR MIXING USING DUAL OPPOSING JETS IN TANKS WITH AND WITHOUT INTERNAL OBSTRUCTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leishear, R.; Poirier, M.; Lee, S.

    2012-06-26

    This paper documents testing methods, statistical data analysis, and a comparison of experimental results to CFD models for blending of fluids, which were blended using a single pump designed with dual opposing nozzles in an eight-foot-diameter tank. Overall, this research presents new findings in the field of mixing research. Specifically, blending processes were clearly shown to have random, chaotic effects, where possible causal factors such as turbulence, pump fluctuations, and eddies required future evaluation. CFD models were shown to provide reasonable estimates for the average blending times, but large variations -- or scatter -- occurred for blending times during similar tests. Using this experimental blending time data, the chaotic nature of blending was demonstrated, and the variability of blending times with respect to average blending times was shown to increase with system complexity. Prior to this research, the variation in blending times caused discrepancies between CFD models and experiments. This research addressed this discrepancy and determined statistical correction factors that can be applied to CFD models, thereby quantifying techniques to permit the application of CFD models to complex systems, such as blending. These blending time correction factors for CFD models are comparable to safety factors used in structural design, and compensate for variability that cannot be theoretically calculated. To determine these correction factors, research was performed to investigate blending, using a pump with dual opposing jets which re-circulate fluids in the tank to promote blending when fluids are added to the tank. In all, eighty-five tests were performed, both in a tank without internal obstructions and in a tank with vertical obstructions similar to a tube bank in a heat exchanger. These obstructions provided scale models of vertical cooling coils below the liquid surface for a full-scale, liquid radioactive waste storage tank. Also, different jet diameters and different horizontal orientations of the jets were investigated with respect to blending. Two types of blending tests were performed. The first set of eighty-one tests blended small quantities of tracer fluids into solution. Data from these tests were statistically evaluated to determine blending times for the addition of tracer solution to tanks, and blending times were successfully compared to Computational Fluid Dynamics (CFD) models. The second set of four tests blended bulk quantities of solutions of different density and viscosity. For example, in one test a quarter tank of water was added to three quarters of a tank of a more viscous salt solution. In this case, the blending process was noted to significantly change due to stratification of fluids, and blending times increased substantially. However, CFD models for stratification and the variability of blending times for different-density fluids were not pursued, and further research is recommended in the area of blending bulk quantities of fluids. All in all, testing showed that CFD models can be effectively applied if statistically validated through experimental testing, but in the absence of experimental validation CFD models can be extremely misleading as a basis for design and operation decisions.
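    One plausible form of such a statistical correction factor, bounding the experimentally observed scatter relative to the CFD estimate (an assumed form for illustration; the paper derives its factors from its own statistical analysis):

    ```python
    import numpy as np

    def blending_time_correction_factor(measured_times, cfd_time, k=2.0):
        """Safety-factor-style multiplier for a CFD blending-time estimate.

        Bounds the observed scatter by mean + k*sigma of the experimental
        blending times, then expresses that bound relative to the CFD value,
        so that corrected_time = factor * cfd_time covers the scatter.
        """
        bound = np.mean(measured_times) + k * np.std(measured_times, ddof=1)
        return bound / cfd_time

    print(blending_time_correction_factor([310.0, 420.0, 365.0, 480.0], 350.0))
    ```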

  19. A Predictive Safety Management System Software Package Based on the Continuous Hazard Tracking and Failure Prediction Methodology

    NASA Technical Reports Server (NTRS)

    Quintana, Rolando

    2003-01-01

    The goal of this research was to integrate a previously validated and reliable safety model, called the Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a predictive safety management information system (PSMIS). This means that the theory or principles of the CHTFPM were incorporated in a software package; hence, the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies as well as to facilitate the handling of enormous quantities of information in this type of study. The CHTFPM theory encompasses the philosophy of looking at the concept of safety engineering from a new perspective: from a proactive, rather than a reactive, viewpoint. That is, corrective measures are taken before a problem occurs instead of after it has happened. The CHTFPM is thus a predictive safety methodology because it foresees or anticipates accidents, system failures and unacceptable risks; therefore, corrective action can be taken in order to prevent all these unwanted issues. Consequently, safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.

  20. MRS proof-of-concept on atmospheric corrections. Atmospheric corrections using an orbital pointable imaging system

    NASA Technical Reports Server (NTRS)

    Slater, P. N. (Principal Investigator)

    1980-01-01

    The feasibility of using a pointable imager to determine atmospheric parameters was studied. In particular, the determination of the atmospheric extinction coefficient and the path radiance, the two quantities that have to be known in order to correct spectral signatures for atmospheric effects, was simulated. The study included the consideration of the geometry of ground irradiance and observation conditions for a pointable imager in a LANDSAT orbit as a function of time of year. A simulation study was conducted on the sensitivity of scene classification accuracy to changes in atmospheric condition. A two-wavelength and a nonlinear regression method for determining the required atmospheric parameters were investigated. The results indicate the feasibility of using a pointable imaging system (1) for the determination of the atmospheric parameters required to improve classification accuracies in urban-rural transition zones and to apply in studies of bi-directional reflectance distribution function data and polarization effects; and (2) for the determination of the spectral reflectances of ground features.
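    In its simplest single-scattering form, the correction the simulation addresses inverts the at-sensor radiance equation for surface reflectance once those two atmospheric quantities are known (a textbook sketch with generic symbols, not the report's notation):

    ```latex
    L_{\mathrm{sensor}}(\lambda) \;=\; L_{\mathrm{path}}(\lambda)
      \;+\; \frac{\rho(\lambda)\,E_g(\lambda)}{\pi}\, e^{-\tau(\lambda)/\cos\theta_v}
    \qquad\Longrightarrow\qquad
    \rho(\lambda) \;=\; \frac{\pi\,\bigl(L_{\mathrm{sensor}} - L_{\mathrm{path}}\bigr)}
      {E_g\, e^{-\tau/\cos\theta_v}}
    ```

    where τ is the extinction optical depth, L_path the path radiance, E_g the ground irradiance, and θ_v the view angle; τ and L_path are exactly the two quantities the pointable imager is meant to determine.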

  1. Wealth and price distribution by diffusive approximation in a repeated prediction market

    NASA Astrophysics Data System (ADS)

    Bottazzi, Giulio; Giachini, Daniele

    2017-04-01

    The approximate agents' wealth and price invariant densities of a repeated prediction market model are derived using the Fokker-Planck equation of the associated continuous-time jump process. We show that the approximation obtained from the evolution of the log-wealth difference can be reliably exploited to compute all the quantities of interest in all the acceptable parameter space. When the risk aversion of the trader is high enough, we are able to derive an explicit closed-form solution for the price distribution which is asymptotically correct.

  2. Detection and quantification of intraperitoneal fluid using electrical impedance tomography.

    PubMed

    Sadleir, R J; Fox, R A

    2001-04-01

    A prototype electrical impedance tomography system was evaluated prior to its use for the detection of intraperitoneal bleeding, with the assistance of patients undergoing continuous ambulatory peritoneal dialysis (CAPD). The system was sensitive enough to detect small amounts of dialysis fluid appearing in subtractive images over short time periods. Uniform sensitivity to blood appearing anywhere within the abdominal cavity was produced using a post-reconstructive filter that corrected for changes in apparent resistivity of anomalies with their radial position. The image parameter used as an indication of fluid quantity, the resistivity index, varied approximately linearly with the quantity of fluid added. A test of the system's response to the introduction of conductive fluid out of the electrode plane (when a blood-equivalent fluid was added to the stomach) found that the sensitivity of the system was about half that observed in the electrode plane. Breathing artifacts were found to upset quantitative monitoring of intraperitoneal bleeding, but only on time scales short compared with the fluid administration rate. Longer term breathing changes, such as those due to variations in the functional residual capacity of the lungs, should ultimately limit the sensitivity over long time periods.

  3. Brief note on Ashtekar-Magnon-Das conserved quantities in quadratic curvature theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pang Yi

    2011-04-15

    In this note, we correct a mistake in the mass formula in [N. Okuyama and J. i. Koga, Phys. Rev. D 71, 084009 (2005)], which generalizes the Ashtekar-Magnon-Das method to incorporate extended gravities with quadratic curvature terms. The corrected mass formula confirms that the black hole masses for recently discovered critical gravities vanish.

  4. Pressure beneath the Surface of a Fluid: Measuring the Correct Depth

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2013-01-01

    Systematic errors can cause measurements to deviate from the actual value of the quantity being measured. Faulty equipment (such as a meterstick that is not marked correctly), inaccurate calibration of measuring devices (such as a scale to measure mass that has not been properly zeroed), and improper use of equipment by the experimenter (such as…

  5. Quantum corrections in thermal states of fermions on anti-de Sitter space-time

    NASA Astrophysics Data System (ADS)

    Ambruş, Victor E.; Winstanley, Elizabeth

    2017-12-01

    We study the energy density and pressure of a relativistic thermal gas of massless fermions on four-dimensional Minkowski and anti-de Sitter space-times using relativistic kinetic theory. The corresponding quantum field theory quantities are given by components of the renormalized expectation value of the stress-energy tensor operator acting on a thermal state. On Minkowski space-time, the renormalized vacuum expectation value of the stress-energy tensor is by definition zero, while on anti-de Sitter space-time the vacuum contribution to this expectation value is in general nonzero. We compare the properties of the vacuum and thermal expectation values of the energy density and pressure for massless fermions and discuss the circumstances in which the thermal contribution dominates over the vacuum one.

  6. Evaluation of an autoclave resistant anatomic nose model for the testing of nasal swabs

    PubMed Central

    Bartolitius, Lennart; Warnke, Philipp; Ottl, Peter; Podbielski, Andreas

    2014-01-01

    A nose model that allows for the comparison of different modes of sample acquisition as well as of nasal swab systems concerning their suitability to detect defined quantities of intranasal microorganisms, and further for training procedures of medical staff, was evaluated. Based on an imprint of a human nose, a model made of a silicone elastomer was formed. Autoclave stability was assessed. Using an inoculation suspension containing Staphylococcus aureus and Staphylococcus epidermidis, the model was compared with standardized glass plate inoculations. Effects of inoculation time, mode of sampling, and sample storage time were assessed. The model was stable to 20 autoclaving cycles. There were no differences regarding the optimum coverage from the nose and from glass plates. Optimum sampling time was 1 h after inoculation. Storage time after sampling was of minor relevance for the recovery. Rotating the swab around its own axis while circling the nasal cavity resulted in best sampling results. The suitability of the assessed nose model for the comparison of sampling strategies and systems was confirmed. Without disadvantages in comparison with sampling from standardized glass plates, the model allows for the assessment of a correct sampling technique due to its anatomically correct shape. PMID:25215192

  7. Evaluation of an autoclave resistant anatomic nose model for the testing of nasal swabs.

    PubMed

    Bartolitius, Lennart; Frickmann, Hagen; Warnke, Philipp; Ottl, Peter; Podbielski, Andreas

    2014-09-01

    A nose model that allows for the comparison of different modes of sample acquisition as well as of nasal swab systems concerning their suitability to detect defined quantities of intranasal microorganisms, and further for training procedures of medical staff, was evaluated. Based on an imprint of a human nose, a model made of a silicone elastomer was formed. Autoclave stability was assessed. Using an inoculation suspension containing Staphylococcus aureus and Staphylococcus epidermidis, the model was compared with standardized glass plate inoculations. Effects of inoculation time, mode of sampling, and sample storage time were assessed. The model was stable to 20 autoclaving cycles. There were no differences regarding the optimum coverage from the nose and from glass plates. Optimum sampling time was 1 h after inoculation. Storage time after sampling was of minor relevance for the recovery. Rotating the swab around its own axis while circling the nasal cavity resulted in best sampling results. The suitability of the assessed nose model for the comparison of sampling strategies and systems was confirmed. Without disadvantages in comparison with sampling from standardized glass plates, the model allows for the assessment of a correct sampling technique due to its anatomically correct shape.

  8. A method for quantitative analysis of standard and high-throughput qPCR expression data based on input sample quantity.

    PubMed

    Adamski, Mateusz G; Gumann, Patryk; Baird, Alison E

    2014-01-01

    Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reaction (qPCR) has become the gold standard for quantifying gene expression. Microfluidic, next-generation high-throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing the sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard and high-throughput qPCR. This technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and, with the use of a universal reference sample (commercial complementary DNA (cDNA)), permits the normalization of results between different batches and between different instruments, regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency-corrected analysis. When compared to other commonly used algorithms, the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes, where cell counts are readily available.
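
    To make the normalization concrete, here is a minimal sketch of an efficiency-corrected comparison against a universal reference sample, normalized to input quantity rather than a control gene. This illustrates the general idea only, not the authors' published algorithm; the function name, parameter names, and numerical values are assumptions.

    ```python
    # Illustrative sketch only (not the paper's code): efficiency-corrected
    # expression relative to a universal reference cDNA, normalized to the
    # input sample quantity (cell count) rather than a control gene.
    def relative_expression(cq_sample, cq_reference, efficiency,
                            cells_sample, cells_reference):
        # An amplification efficiency near 2.0 doubles product each cycle, so
        # the fold difference between wells is efficiency ** (Cq_ref - Cq_sample).
        fold = efficiency ** (cq_reference - cq_sample)
        # Normalize by input quantity so batches and instruments are comparable.
        return fold * (cells_reference / cells_sample)

    # Example: the sample crosses threshold 2 cycles earlier than the reference.
    print(relative_expression(cq_sample=24.0, cq_reference=26.0,
                              efficiency=1.95, cells_sample=1.0e4,
                              cells_reference=1.0e4))  # ~3.8-fold
    ```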

  9. Scalar and tensor perturbations in loop quantum cosmology: high-order corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Tao; Wang, Anzhong; Wu, Qiang

    2015-10-01

    Loop quantum cosmology (LQC) provides promising resolutions to the trans-Planckian issue and initial singularity arising in the inflationary models of general relativity. In general, due to different quantization approaches, LQC involves two types of quantum corrections, the holonomy and inverse-volume, to both the cosmological background evolution and the perturbations. In this paper, using the third-order uniform asymptotic approximations, we derive explicitly the observational quantities of the slow-roll inflation in the framework of LQC with these quantum corrections. We calculate the power spectra, spectral indices, and running of the spectral indices for both scalar and tensor perturbations, whereby the tensor-to-scalar ratio is obtained. We expand all the observables at the time when the inflationary mode crosses the Hubble horizon. As the upper error bounds for the uniform asymptotic approximation at the third order are ≲ 0.15%, these results represent the most accurate results obtained so far in the literature. It is also shown that with the inverse-volume corrections, both scalar and tensor spectra exhibit a deviation from the usual shape at large scales. Then, using the Planck, BAO and SN data we obtain new constraints on quantum gravitational effects from LQC corrections, and find that such effects could be within the detection of the forthcoming experiments.

  10. Correction Approach for Delta Function Convolution Model Fitting of Fluorescence Decay Data in the Case of a Monoexponential Reference Fluorophore.

    PubMed

    Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris

    2015-09-01

    A correction is proposed to the Delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (τ_R^-1 − τ^-1)^-1, where τ_R and τ are the fluorescence lifetimes of the reference and the specimen, respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.
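
    As a quick numerical illustration of the sampling-interval criterion above (all values are assumed, not taken from the paper):

    ```python
    # Compare the sampling interval dt with |(1/tau_R - 1/tau)^-1|; when this
    # ratio is not negligible, the residual reference component biases the fit.
    tau_R = 4.0   # reference fluorophore lifetime, ns (assumed)
    tau = 1.0     # specimen lifetime, ns (assumed)
    dt = 0.05     # temporal sampling interval, ns (assumed)

    critical = 1.0 / (1.0 / tau_R - 1.0 / tau)   # = -4/3 ns for these values
    print(f"dt / |critical| = {dt / abs(critical):.3f}")  # about 0.04 here
    ```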

  11. Next Generation MODTRAN for Improved Atmospheric Correction of Spectral Imagery

    DTIC Science & Technology

    2016-01-29

    DoD operational and research sensor and data processing systems, particularly those involving the removal of atmospheric effects, commonly referred... atmospheric correction process. Given the ever increasing capabilities of spectral sensors to quickly generate enormous quantities of data, combined...

  12. Microstructurally-sensitive fatigue crack nucleation in Ni-based single and oligo crystals

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Jiang, Jun; Dunne, Fionn P. E.

    2017-09-01

    An integrated experimental, characterisation and computational crystal plasticity study of cyclic plastic beam loading has been carried out for nickel single-crystal (CMSX4) and oligocrystal (MAR002) alloys in order to assess quantitatively the mechanistic drivers for fatigue crack nucleation. The experimentally validated modelling provides knowledge of key microstructural quantities (accumulated slip, stress and GND density) at experimentally observed fatigue crack nucleation sites, and it is shown that while each of these quantities is potentially important in crack nucleation, none of them in its own right is sufficient to be predictive. However, the local (elastic) stored energy density, measured over a length scale determined by the density of SSDs and GNDs, has been shown to predict crack nucleation sites in the single-crystal and oligocrystal tests. In addition, once primary nucleated cracks develop and are represented in the crystal model using XFEM, the stored energy correctly identifies where secondary fatigue cracks are observed to nucleate in experiments. This (Griffith-Stroh type) quantity also correctly differentiates and explains intergranular and transgranular fatigue crack nucleation.

  13. Fluid therapy in mature cattle.

    PubMed

    Roussel, Allen J

    2014-07-01

    Fluid therapy for mature cattle differs from that for calves because the common conditions that result in dehydration, and the metabolic derangements that accompany these conditions, are different. The veterinarian needs to know which problem exists, what to administer to correct the problem, in what quantity, by what route, and at what rate. Mature cattle more frequently suffer from alkalosis; therefore, acidifying solutions containing K+ and Cl- in concentrations greater than those of plasma are frequently indicated. The rumen provides a large-capacity reservoir into which oral rehydration solutions may be administered, which can save time and money. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Improved Line Tracing Methods for Removal of Bad Streaks Noise in CCD Line Array Image—A Case Study with GF-1 Images

    PubMed Central

    Wang, Bo; Bao, Jianwei; Wang, Shikui; Wang, Houjun; Sheng, Qinghong

    2017-01-01

    Remote sensing images provide tremendous quantities of large-scale information. Noise artifacts (stripes), however, make the images unsuitable for visualization and batch processing. An effective restoration method makes images ready for further analysis. In this paper, a new method is proposed to correct the stripes and bad abnormal pixels in charge-coupled device (CCD) linear array images. The method involves a line tracing step, limiting the location of noise to a rectangular region, and corrects abnormal pixels with the Lagrange polynomial algorithm. The proposed detection and restoration method was applied to Gaofen-1 satellite (GF-1) images, and its performance was evaluated by omission ratio and false detection ratio, which reached 0.6% and 0%, respectively. The method saved 55.9% of processing time compared with the traditional method. PMID:28441754
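
    A minimal sketch of the bad-pixel repair step, assuming a simple Lagrange interpolation over valid neighbours in the same image row; this is an illustration under stated assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def lagrange_repair(xs, ys, x_bad):
        """Evaluate the Lagrange polynomial through (xs, ys) at x_bad."""
        value = 0.0
        for i, xi in enumerate(xs):
            term = ys[i]
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x_bad - xj) / (xi - xj)
            value += term
        return value

    row = np.array([10.0, 12.0, 0.0, 15.0, 16.0])  # column 2 is a bad-streak pixel
    good = [0, 1, 3, 4]                            # valid neighbour columns
    print(lagrange_repair(good, row[good], 2))     # interpolated replacement value
    ```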

  15. DSCOVR_EPIC_L2_MAIAC_01

    Atmospheric Science Data Center

    2018-06-25

    ... several atmospheric quantities including cloud mask and aerosol optical depth (AOD) required for atmospheric correction. The parameters ... Project Title: DSCOVR. Discipline: Aerosol, Clouds. Version: V1. Level: L2.

  16. Efficient star formation in the spiral arms of M51

    NASA Technical Reports Server (NTRS)

    Lord, Steven D.; Young, Judith S.

    1990-01-01

    The molecular, neutral, and ionized hydrogen distributions in the Sbc galaxy M51 (NGC 5194) are compared. To estimate H2 surface densities, observations of the CO (J = 1-0) transition were made in 60 positions out to a radius of 155 arcsec. Extinction-corrected H-alpha intensities were used to compute the detailed massive star formation rates (MSFRs) in the disk. Estimates of the gas surface density, the MSFR, and the ratio of these quantities, MSFR/sigma(p), were then examined. The spiral arms were found to exhibit an excess gas density, measuring between 1.4 and 1.6 times the interarm values at 45 arcsec resolution. The total (arm and interarm) gas content and massive star formation rates in concentric annuli in the disk of M51 were computed. The two quantities fall off together with radius, yielding a relatively constant MSFR/sigma(p) with radius. This behavior is not explained by current models of star formation in galactic disks.

  17. Improving Robotic Assembly of Planar High Energy Density Targets

    NASA Astrophysics Data System (ADS)

    Dudt, D.; Carlson, L.; Alexander, N.; Boehm, K.

    2016-10-01

    Increased quantities of planar assemblies for high energy density targets are needed with higher shot rates being implemented at facilities such as the National Ignition Facility and the Matter in Extreme Conditions station of the Linac Coherent Light Source. To meet this growing demand, robotics are used to reduce assembly time. This project studies how machine vision and force feedback systems can be used to improve the quantity and quality of planar target assemblies. Vision-guided robotics can identify and locate parts, reducing laborious manual loading of parts into precision pallets and associated teaching of locations. On-board automated inspection can measure part pickup offsets to correct part drop-off placement into target assemblies. Force feedback systems can detect pickup locations and apply consistent force to produce more uniform glue bond thickness, thus improving the performance of the targets. System designs and performance evaluations will be presented. Work supported in part by the US DOE under the Science Undergraduate Laboratory Internships Program (SULI) and ICF Target Fabrication DE-NA0001808.

  18. A Correlation Between the Intrinsic Brightness and Average Decay Rate of Gamma-Ray Burst X-Ray Afterglow Light Curves

    NASA Technical Reports Server (NTRS)

    Racusin, J. L.; Oates, S. R.; De Pasquale, M.; Kocevski, D.

    2016-01-01

    We present a correlation between the average temporal decay rate (α_X,avg, measured at t > 200 s) and the early-time luminosity (L_X,200s) of X-ray afterglows of gamma-ray bursts as observed by the Swift X-ray Telescope. Both quantities are measured relative to a rest-frame time of 200 s after the gamma-ray trigger. The luminosity–average decay correlation does not depend on specific temporal behavior and contains one scale-independent quantity, minimizing the role of selection effects. This is a complementary correlation to that discovered by Oates et al. in the optical light curves observed by the Swift Ultraviolet Optical Telescope. The correlation indicates that, on average, more luminous X-ray afterglows decay faster than less luminous ones, indicating some relative mechanism for energy dissipation. The X-ray and optical correlations are entirely consistent once corrections are applied and contamination is removed. We explore the possible biases introduced by different light-curve morphologies and observational selection effects, and how either geometrical effects or intrinsic properties of the central engine and jet could explain the observed correlation.

  19. The free energy of a reaction coordinate at multiple constraints: a concise formulation

    NASA Astrophysics Data System (ADS)

    Schlitter, Jürgen; Klähn, Marco

    The free energy as a function of the reaction coordinate (rc) is the key quantity for the computation of equilibrium and kinetic properties. When it is considered as the potential of mean force, the problem is the calculation of the mean force at given values of the rc. We reinvestigate the PMCF (potential of mean constraint force) method, which applies a constraint to the rc and computes the mean force as the mean negative constraint force plus a metric tensor correction. The latter accounts for the constraint imposed on the rc and for possible artefacts due to multiple constraints on other variables, which for practical reasons are often used in numerical simulations. Two main results are obtained that are of theoretical and practical interest. First, the correction term is given a very concise and simple shape, which facilitates its interpretation and evaluation. Secondly, a theorem describes various rcs and possible combinations with constraints that can be used without introducing any correction to the constraint force. The results facilitate the computation of free energy by molecular dynamics simulations.
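
    In outline, the PMCF relations described above take the following form; this is a hedged restatement with assumed symbols (f_c the constraint force, Δ_metric the metric-tensor correction), not notation taken from the paper.

    ```latex
    % Potential of mean force along the reaction coordinate \xi: the mean force
    % is minus the free-energy gradient, estimated in the constrained ensemble
    % as the mean negative constraint force plus a metric-tensor correction.
    -\frac{dF}{d\xi} \;=\; \langle f \rangle_{\xi}
    \;=\; \langle -f_c \rangle_{\xi} \;+\; \Delta_{\mathrm{metric}}(\xi)
    ```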

  20. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    PubMed

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power, a directly measured quantity in contrast to the spectral radiance in the Beer-Lambert law, are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
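
    For reference, the uncorrected single-scattering law that the paper's transmission functions generalize is plain exponential attenuation; a toy illustration with assumed values follows.

    ```python
    import math

    def beer_lambert_transmission(extinction, path_length):
        """Transmittance exp(-tau) with optical depth tau = extinction * path."""
        return math.exp(-extinction * path_length)

    # Optical depth tau = 1 attenuates the beam to ~37%; multiple scattering
    # into an open detector makes the received power exceed this prediction.
    print(beer_lambert_transmission(extinction=0.5, path_length=2.0))
    ```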

  1. Numerical investigation of finite-volume effects for the HVP

    NASA Astrophysics Data System (ADS)

    Boyle, Peter; Gülpers, Vera; Harrison, James; Jüttner, Andreas; Portelli, Antonin; Sachrajda, Christopher

    2018-03-01

    It is important to correct for finite-volume (FV) effects in the presence of QED, since these effects are typically large due to the long range of the electromagnetic interaction. We recently made the first lattice calculation of electromagnetic corrections to the hadronic vacuum polarisation (HVP). For the HVP, an analytical derivation of FV corrections involves a two-loop calculation which has not yet been carried out. We instead calculate the universal FV corrections numerically, using lattice scalar QED as an effective theory. We show that this method gives agreement with known analytical results for scalar mass FV effects, before applying it to calculate FV corrections for the HVP. This method for numerical calculation of FV effects is also widely applicable to quantities beyond the HVP.

  2. Quantile Mapping Bias correction for daily precipitation over Vietnam in a regional climate model

    NASA Astrophysics Data System (ADS)

    Trinh, L. T.; Matsumoto, J.; Ngo-Duc, T.

    2017-12-01

    In the past decades, Regional Climate Models (RCMs) have developed significantly, allowing climate simulations to be conducted at higher resolution. However, RCMs often contain biases when compared with observations; therefore, statistical correction methods are commonly employed to reduce the model biases. In this study, outputs of the Regional Climate Model (RegCM) version 4.3 driven by the CNRM-CM5 global products were evaluated with and without the Quantile Mapping (QM) bias correction method. The model domain covered the area from 90°E to 145°E and from 15°S to 40°N with a horizontal resolution of 25 km. The QM bias correction was implemented using the Vietnam Gridded precipitation dataset (VnGP) and the outputs of the RegCM historical run for the period 1986-1995, and then validated for the period 1996-2005. Based on statistical measures of spatial correlation and intensity distribution, the QM method showed a significant improvement in rainfall compared with the uncorrected output. The improvements in both time and space were recognized in all seasons and all climatic sub-regions of Vietnam. Moreover, not only the rainfall amount but also extreme indices such as R10mm, R20mm, R50mm, CDD, CWD, R95pTOT, and R99pTOT were much better represented after the correction. The results suggest that the QM correction method should be used in practice for projections of future precipitation over Vietnam.
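
    A minimal empirical quantile-mapping sketch, assuming the standard CDF-matching form of the method; the paper's exact implementation is not reproduced here, and the names and synthetic data are illustrative.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_values):
        """Map each model value to the observed value at the same quantile."""
        model_sorted = np.sort(model_hist)
        obs_sorted = np.sort(obs_hist)
        # Empirical quantile of each value within the historical model CDF.
        q = np.searchsorted(model_sorted, model_values) / len(model_sorted)
        q = np.clip(q, 0.0, 1.0 - 1e-9)
        # Read off the observed value at the same quantile (inverse obs CDF).
        return obs_sorted[(q * len(obs_sorted)).astype(int)]

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 5.0, 3650)    # synthetic "observed" daily rainfall
    model = rng.gamma(2.0, 7.0, 3650)  # synthetic model rainfall, wet-biased
    print(quantile_map(model, obs, model[:5]))  # bias-corrected values
    ```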

  3. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Favalli, Andrea

    2017-10-01

    Neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates (singles, doubles, and triples) are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  4. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates (singles, doubles, and triples) are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  5. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    DOE PAGES

    Croft, Stephen; Favalli, Andrea

    2017-07-16

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates (singles, doubles, and triples) are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  6. Analysis of Solar Astrolabe Measurements during 20 Years

    NASA Astrophysics Data System (ADS)

    Poppe, P. C. R.; Leister, N. V.; Laclare, F.; Delmas, C.

    1998-11-01

    Recent observations of the Sun made between 1974 and 1995 at two observatories were examined to determine the constant and/or linear terms to the equinox and equator of the FK5 reference frame, the mean obliquity of the ecliptic, the mean longitude of the Sun, the mean eccentricity of the Earth's orbit, and the mean longitude of perihelion. The VSOP82 theory was used to reduce the data. The global solution of the weighted least-squares adjustment shows that the equinox of the FK5 requires a correction of +0.072" +/- 0.005" at the mean epoch 1987.24. The FK5 and dynamical equinox agree closely at J2000.0 (-0.040" +/- 0.020"), but an anomalous negative secular variation with respect to the dynamical equinox was detected: -0.881" +/- 0.116" century^-1. The FK5 equator requires a correction of +0.088" +/- 0.016", and there is no indication of a time rate of change. The corrections to the mean longitude of the Sun (-0.020" +/- 0.010") and to the mean obliquity of the ecliptic (-0.041" +/- 0.016") do appear to be statistically significant, although only marginally. The time rates of change for these quantities are not significant on the system to which the observations are referred. In spite of the short time span used in this analysis, the strong correlation between constant and linear terms was completely eliminated with the complete covering of the orbit by the data sets of both sites.

  7. Determining Titan's spin state from Cassini RADAR images

    USGS Publications Warehouse

    Stiles, B.W.; Kirk, R.L.; Lorenz, R.D.; Hensley, S.; Lee, E.; Ostro, S.J.; Allison, M.D.; Callahan, P.S.; Gim, Y.; Iess, L.; Del Marmo, P.P.; Hamilton, G.; Johnson, W.T.K.; West, R.D.

    2008-01-01

    For some 19 areas of Titan's surface, the Cassini RADAR instrument has obtained synthetic aperture radar (SAR) images during two different flybys. The time interval between flybys varies from several weeks to two years. We have used the apparent misregistration (by 10-30 km) of features between separate flybys to construct a refined model of Titan's spin state, estimating six parameters: north pole right ascension and declination, spin rate, and these quantities' first time derivatives. We determine a pole location with right ascension of 39.48 degrees and declination of 83.43 degrees, corresponding to a 0.3 degree obliquity. We determine the spin rate to be 22.5781 deg day^-1, or 0.001 deg day^-1 faster than the synchronous spin rate. Our estimated corrections to the pole and spin rate exceed their corresponding standard errors by factors of 80 and 8, respectively. We also found that the rate of change of the pole right ascension is -30 deg century^-1, ten times faster than the rate of change of right ascension for the orbit normal. The spin rate is increasing at a rate of 0.05 deg day^-1 per century. We observed no significant change in pole declination over the period for which we have data. Applying our pole correction reduces the feature misregistration from tens of km to 3 km. Applying the spin rate and derivative corrections further reduces the misregistration to 1.2 km. © 2008. The American Astronomical Society. All rights reserved.

  8. Optical radiation measurements: instrumentation and sources of error.

    PubMed

    Landry, R J; Andersen, F A

    1982-07-01

    Accurate measurement of optical radiation is required when sources of this radiation are used in biological research. The most difficult measurements of broadband noncoherent optical radiations usually must be performed by a highly trained specialist using sophisticated, complex, and expensive instruments. Presentation of the results of such measurement requires correct use of quantities and units with which many biological researchers are unfamiliar. The measurement process, physical quantities and units, measurement systems with instruments, and sources of error and uncertainties associated with optical radiation measurements are reviewed.

  9. Determination of the fast-neutron-induced fission cross-section of 242Pu at nELBE

    NASA Astrophysics Data System (ADS)

    Kögler, Toni; Beyer, Roland; Junghans, Arnd R.; Schwengner, Ronald; Wagner, Andreas

    2018-03-01

    The fast-neutron-induced fission cross section of 242Pu was determined in the energy range of 0.5 MeV to 10 MeV at the neutron time-of-flight facility nELBE. Using a parallel-plate fission ionization chamber, this quantity was measured relative to 235U(n,f). The number of target nuclei was determined by measuring the spontaneous fission rate of 242Pu. An MCNP 6 neutron transport simulation was used to correct the relative cross section for neutron scattering. The determined results are in good agreement with current experimental and evaluated data sets.
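
    The relative measurement implies the standard ratio form below; this is a hedged restatement with assumed symbols (C are measured fission count rates, N are numbers of target nuclei), not an equation quoted from the paper.

    ```latex
    % Ratio method relative to the 235U(n,f) standard: the unknown cross
    % section follows from the count-rate ratio and the target nuclide numbers.
    \sigma_f^{\,242\mathrm{Pu}}(E) \;=\; \sigma_f^{\,235\mathrm{U}}(E)\,
    \frac{C^{\,242\mathrm{Pu}}(E)}{C^{\,235\mathrm{U}}(E)}\,
    \frac{N^{\,235\mathrm{U}}}{N^{\,242\mathrm{Pu}}}
    ```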

  10. Thermodynamic instability of topological black holes in Gauss-Bonnet gravity with a generalized electrodynamics

    NASA Astrophysics Data System (ADS)

    Hendi, S. H.; Panahiyan, S.

    2014-12-01

    Motivated by the string corrections on the gravity and electrodynamics sides, we consider a quadratic Maxwell invariant term as a correction of the Maxwell Lagrangian to obtain exact solutions of higher dimensional topological black holes in Gauss-Bonnet gravity. We first investigate the asymptotically flat solutions and obtain conserved and thermodynamic quantities which satisfy the first law of thermodynamics. We also analyze the thermodynamic stability of the solutions by calculating the heat capacity and the Hessian matrix. Then, we focus on horizon-flat solutions with an anti-de Sitter (AdS) asymptote and produce a rotating spacetime with a suitable transformation. In addition, we calculate the conserved and thermodynamic quantities for asymptotically AdS black branes which satisfy the first law of thermodynamics. Finally, we apply a thermodynamic instability criterion to investigate the effects of nonlinear electrodynamics in the canonical and grand canonical ensembles.

  11. PDT dose dosimetry for Photofrin-mediated pleural photodynamic therapy (pPDT)

    NASA Astrophysics Data System (ADS)

    Ong, Yi Hong; Kim, Michele M.; Finlay, Jarod C.; Dimofte, Andreea; Singhal, Sunil; Glatstein, Eli; Cengel, Keith A.; Zhu, Timothy C.

    2018-01-01

    Photosensitizer fluorescence excited by photodynamic therapy (PDT) treatment light can be used to monitor the in vivo concentration of the photosensitizer and its photobleaching. The temporal integral of the product of in vivo photosensitizer concentration and light fluence is called PDT dose, which is an important dosimetry quantity for PDT. However, the detected photosensitizer fluorescence may be distorted by variations in the absorption and scattering of both excitation and fluorescence light in tissue. Therefore, correction of the measured fluorescence for distortion due to variable optical properties is required for absolute quantification of photosensitizer concentration. In this study, we have developed a four-channel PDT dose dosimetry system to simultaneously acquire light dosimetry and photosensitizer fluorescence data. We measured PDT dose at four sites in the pleural cavity during pleural PDT. We have determined an empirical optical property correction function using Monte Carlo simulations of fluorescence for a range of physiologically relevant tissue optical properties. Parameters of the optical property correction function for Photofrin fluorescence were determined experimentally using tissue-simulating phantoms. In vivo measurements of photosensitizer fluorescence showed negligible photobleaching of Photofrin during the PDT treatment, but large intra- and inter-patient heterogeneities of in vivo Photofrin concentration were observed. PDT doses delivered to 22 sites in the pleural cavity of 8 patients varied by a factor of 2.9 within patients and by a factor of 8.3 between patients.

  12. The Hartree product and the description of local and global quantities in atomic systems: A study within Kohn-Sham theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garza, Jorge; Nichols, Jeffrey A.; Dixon, David A.

    2000-01-15

    The Hartree product is analyzed in the context of Kohn-Sham theory. The differential equations that emerge from this theory are solved with the optimized effective potential using the Krieger, Li, and Iafrate approximation, in order to get a local potential as required by the ordinary Kohn-Sham procedure. Because the diagonal terms of the exact exchange energy are included in Hartree theory, it is self-interaction free and the exchange potential has the proper asymptotic behavior. We have examined the impact of this correct asymptotic behavior on local and global properties using this simple model to approximate the exchange energy. Local quantities, such as the exchange potential and the average local electrostatic potential, are used to examine whether the shell structure in an atom is revealed by this theory. Global quantities, such as the highest occupied orbital energy (related to the ionization potential) and the exchange energy, are also calculated. These quantities are contrasted with those obtained from calculations with the local density approximation, the generalized gradient approximation, and the self-interaction correction approach proposed by Perdew and Zunger. We conclude that the main characteristics of an atomic system are preserved with the Hartree theory. In particular, the behavior of the exchange potential obtained in this theory is similar to that obtained within other Kohn-Sham approximations. © 2000 American Institute of Physics.

  13. Influence of Misalignment on High-Order Aberration Correction for Normal Human Eyes

    NASA Astrophysics Data System (ADS)

    Zhao, Hao-Xin; Xu, Bing; Xue, Li-Xia; Dai, Yun; Liu, Qian; Rao, Xue-Jun

    2008-04-01

    Although a compensation device can correct the aberrations of human eyes, the effect is degraded by misalignment of the device, especially for high-order aberration correction. We calculate the positioning tolerance of a correction device for high-order aberrations, that is, the range of misalignment within which the correction remains better than correcting only the low-order aberrations (defocus and astigmatism). With a fixed misalignment within the positioning tolerance, we calculate the residual wavefront rms aberration when the first 6 to the first 35 terms are corrected together with the 3rd-5th terms, and the combined first 13 terms are also studied under the same quantity of misalignment. However, the correction of high-order aberrations does not improve as more high-order terms are included under some misalignment; moreover, some simple combinations of terms can achieve results similar to those of complex combinations. These results suggest that it is unnecessary to correct too many high-order terms, which would in any case be difficult to accomplish in practice, and they give confidence for correcting high-order aberrations outside the laboratory.

  14. 49 CFR 385.17 - Change to safety rating based upon corrective actions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... in CMVs or placardable quantities of hazardous materials. (2) Within 45 days for all other motor..., it shall remain in effect during the period of any administrative review. [65 FR 50935, Aug. 22, 2000...

  15. 40 CFR 98.434 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Contained in Pre-Charged Equipment or Closed-Cell Foams § 98.434 Monitoring and QA/QC requirements. (a) For... equipment or closed-cell foam in the correct quantities (metric tons) and units (kg per piece of equipment...

  16. 40 CFR 98.434 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Contained in Pre-Charged Equipment or Closed-Cell Foams § 98.434 Monitoring and QA/QC requirements. (a) For... equipment or closed-cell foam in the correct quantities (metric tons) and units (kg per piece of equipment...

  17. 40 CFR 98.434 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Contained in Pre-Charged Equipment or Closed-Cell Foams § 98.434 Monitoring and QA/QC requirements. (a) For... equipment or closed-cell foam in the correct quantities and units. [74 FR 56374, Oct. 30, 2009, as amended...

  18. On patterns and re-use in bioinformatics databases.

    PubMed

    Bell, Michael J; Lord, Phillip

    2017-09-01

    As the quantity of data being deposited into biological databases continues to increase, it becomes ever more vital to develop methods that enable us to understand these data and ensure that the knowledge is correct. It is widely held that data percolate between different databases, which causes particular concerns for data correctness; if this percolation occurs, incorrect data in one database may eventually affect many others while, conversely, corrections in one database may fail to percolate to others. In this paper, we test this widely held belief by directly looking for sentence reuse both within and between databases. Further, we investigate patterns of how sentences are reused over time. Finally, we consider the limitations of this form of analysis and the implications that this may have for bioinformatics database design. We show that reuse of annotation is common within many different databases, and that there is also a detectable level of reuse between databases. In addition, we show that there are patterns of reuse that have previously been shown to be associated with percolation errors. Analytical software is available on request. phillip.lord@newcastle.ac.uk. © The Author(s) 2017. Published by Oxford University Press.

  19. On patterns and re-use in bioinformatics databases

    PubMed Central

    Bell, Michael J.; Lord, Phillip

    2017-01-01

    Motivation: As the quantity of data being deposited into biological databases continues to increase, it becomes ever more vital to develop methods that enable us to understand these data and ensure that the knowledge is correct. It is widely held that data percolate between different databases, which causes particular concerns for data correctness; if this percolation occurs, incorrect data in one database may eventually affect many others while, conversely, corrections in one database may fail to percolate to others. In this paper, we test this widely held belief by directly looking for sentence reuse both within and between databases. Further, we investigate patterns of how sentences are reused over time. Finally, we consider the limitations of this form of analysis and the implications that this may have for bioinformatics database design. Results: We show that reuse of annotation is common within many different databases, and that there is also a detectable level of reuse between databases. In addition, we show that there are patterns of reuse that have previously been shown to be associated with percolation errors. Availability and implementation: Analytical software is available on request. Contact: phillip.lord@newcastle.ac.uk PMID:28525546

  20. A positivity-preserving, implicit defect-correction multigrid method for turbulent combustion

    NASA Astrophysics Data System (ADS)

    Wasserman, M.; Mor-Yossef, Y.; Greenberg, J. B.

    2016-07-01

    A novel, robust multigrid method for the simulation of turbulent and chemically reacting flows is developed. A survey of previous attempts at implementing multigrid for the problems at hand indicated extensive use of artificial stabilization to overcome numerical instability arising from non-linearity of turbulence and chemistry model source-terms, small-scale physics of combustion, and loss of positivity. These issues are addressed in the current work. The highly stiff Reynolds-averaged Navier-Stokes (RANS) equations, coupled with turbulence and finite-rate chemical kinetics models, are integrated in time using the unconditionally positive-convergent (UPC) implicit method. The scheme is successfully extended in this work for use with chemical kinetics models, in a fully-coupled multigrid (FC-MG) framework. To tackle the degraded performance of multigrid methods for chemically reacting flows, two major modifications are introduced with respect to the basic, Full Approximation Storage (FAS) approach. First, a novel prolongation operator that is based on logarithmic variables is proposed to prevent loss of positivity due to coarse-grid corrections. Together with the extended UPC implicit scheme, the positivity-preserving prolongation operator guarantees unconditional positivity of turbulence quantities and species mass fractions throughout the multigrid cycle. Second, to improve the coarse-grid-correction obtained in localized regions of high chemical activity, a modified defect correction procedure is devised, and successfully applied for the first time to simulate turbulent, combusting flows. The proposed modifications to the standard multigrid algorithm create a well-rounded and robust numerical method that provides accelerated convergence, while unconditionally preserving the positivity of model equation variables. Numerical simulations of various flows involving premixed combustion demonstrate that the proposed MG method increases the efficiency by a factor of up to eight times with respect to an equivalent single-grid method, and by two times with respect to an artificially-stabilized MG method.
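
    A minimal 1-D sketch of the positivity-preserving idea: transfer the coarse-grid correction in logarithmic variables, so the fine-grid update becomes a strictly positive multiplicative factor. This is an illustration under assumed simplifications, not the paper's scheme.

    ```python
    import numpy as np

    def prolongate_linear(coarse):
        """Linear interpolation from a coarse 1-D grid to a 2x finer grid."""
        fine = np.empty(2 * len(coarse) - 1)
        fine[0::2] = coarse
        fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:])
        return fine

    def log_prolongate_correction(q_coarse_old, q_coarse_new):
        """Prolongate the correction in log space; exp() keeps it positive."""
        log_correction = np.log(q_coarse_new) - np.log(q_coarse_old)
        return np.exp(prolongate_linear(log_correction))

    q_old = np.array([1e-3, 2e-3, 4e-3])  # coarse turbulence quantity, pre-solve
    q_new = np.array([5e-4, 3e-3, 4e-3])  # after the coarse-grid solve
    factor = log_prolongate_correction(q_old, q_new)
    print(factor)  # multiplicative fine-grid correction, always > 0
    ```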

  1. The effects of quantity and depth of processing on children's time perception.

    PubMed

    Arlin, M

    1986-08-01

    Two experiments were conducted to investigate the effects of quantity and depth of processing on children's time perception. These experiments tested the appropriateness of two adult time-perception models (attentional and storage size) for younger ages. Children were given stimulus sets of equal time which varied by level of processing (deep/shallow) and quantity (list length). In the first experiment, 28 children in Grade 6 reproduced presentation times of various quantities of pictures under deep (living/nonliving categorization) or shallow (repeating label) conditions. Students also compared pairs of durations. In the second experiment, 128 children in Grades K, 2, 4, and 6 reproduced presentation times under similar conditions with three or six pictures and with deep or shallow processing requirements. Deep processing led to decreased estimation of time. Higher quantity led to increased estimation of time. Comparative judgments were influenced by quantity. The interaction between age and depth of processing was significant. Older children were more affected by depth differences than were younger children. Results were interpreted as supporting different aspects of each adult model as explanations of children's time perception. The processing effect supported the attentional model and the quantity effect supported the storage size model.

  2. Operational support for Upper Atmosphere Research Satellite (UARS) attitude sensors

    NASA Technical Reports Server (NTRS)

    Lee, M.; Garber, A.; Lambertson, M.; Raina, P.; Underwood, S.; Woodruff, C.

    1994-01-01

    The Upper Atmosphere Research Satellite (UARS) has several sensors that can provide observations for attitude determination: star trackers, Sun sensors (gimbaled as well as fixed), magnetometers, Earth sensors, and gyroscopes. The accuracy of these observations is important for mission success. Analysts on the Flight Dynamics Facility (FDF) UARS Attitude task monitor these data to evaluate the performance of the sensors, taking corrective action when appropriate. Monitoring activities range from examining the data during real-time passes to constructing long-term trend plots. Increasing residuals (differences) between the observed and expected quantities are a prime indicator of sensor problems. Residual increases may be due to alignment shifts and/or degradation in sensor output. Residuals from star tracker data revealed an anomalous behavior that contributes to attitude errors. Compensating for this behavior has significantly reduced the attitude errors. This paper discusses the methods used by the FDF UARS attitude task for maintenance of the attitude sensors, including short- and long-term monitoring, trend analysis, and calibration methods, and presents the results obtained through corrective action.

  3. Atomic electron energies including relativistic effects and quantum electrodynamic corrections

    NASA Technical Reports Server (NTRS)

    Aoyagi, M.; Chen, M. H.; Crasemann, B.; Huang, K. N.; Mark, H.

    1977-01-01

    Atomic electron energies have been calculated relativistically. Hartree-Fock-Slater wave functions served as zeroth-order eigenfunctions to compute the expectation of the total Hamiltonian. A first-order correction to the local approximation was thus included. Quantum-electrodynamic corrections were made. For all orbitals in all atoms with 2 ≤ Z ≤ 106, the following quantities are listed: total energies, electron kinetic energies, electron-nucleus potential energies, electron-electron potential energies consisting of electrostatic and Breit interaction (magnetic and retardation) terms, and vacuum polarization energies. These results will serve for detailed comparison of calculations based on other approaches. The magnitude of quantum electrodynamic corrections is exhibited quantitatively for each state.

  4. Measuring protein-bound glutathione (PSSG): Critical correction for cytosolic glutathione species

    USDA-ARS?s Scientific Manuscript database

    Introduction: Protein glutathionylation is gaining recognition as an important posttranslational protein modification. The common first step in measuring protein glutathionylation is the denaturation and precipitation of protein away from soluble, millimolar quantities of glutathione (GSH) and glut...

  5. Concentration of stresses and strains in a notched cylinder of a viscoplastic material under harmonic loading

    NASA Astrophysics Data System (ADS)

    Zhuk, Ya A.; Senchenkov, I. K.

    1999-02-01

    Certain aspects of the correct definitions of stress and strain concentration factors for elastic-viscoplastic solids under cyclic loading are discussed. Problems concerning the harmonic kinematic excitation of cylindrical specimens with a lateral V-notch are examined. The behavior of the material of a cylinder is modeled using generalized flow theory. An approximate model based on the concept of complex moduli is used for comparison. Invariant characteristics such as stress and strain intensities and maximum principal stress and strain are chosen as constitutive quantities for concentration-factor definitions. The behavior of time-varying factors is investigated. Concentration factors calculated in terms of the amplitudes of the constitutive quantities are used as representative characteristics over the cycle of vibration. The dependences of the concentration factors on the loads are also studied. The accuracy of Neuber's and Birger's formulas is evaluated. The solution of the problem in the approximate formulation agrees with its solution in the exact formulation. The possibilities of the approximate model for estimating low-cycle fatigue are evaluated.

  6. Markov state models from short non-equilibrium simulations—Analysis and correction of estimation bias

    NASA Astrophysics Data System (ADS)

    Nüske, Feliks; Wu, Hao; Prinz, Jan-Hendrik; Wehmeyer, Christoph; Clementi, Cecilia; Noé, Frank

    2017-03-01

    Many state-of-the-art methods for the thermodynamic and kinetic characterization of large and complex biomolecular systems by simulation rely on ensemble approaches, where data from large numbers of relatively short trajectories are integrated. In this context, Markov state models (MSMs) are extremely popular because they can be used to compute stationary quantities and long-time kinetics from ensembles of short simulations, provided that these short simulations are in "local equilibrium" within the MSM states. However, in the 15 years since the inception of MSMs, it has been controversially discussed and not yet answered how deviations from local equilibrium can be detected, whether these deviations induce a practical bias in MSM estimation, and how to correct for them. In this paper, we address these issues: we systematically analyze the estimation of MSMs from short non-equilibrium simulations, and we provide an expression for the error between unbiased transition probabilities and the expected estimate from many short simulations. We show that the unbiased MSM estimate can be obtained even from relatively short non-equilibrium simulations in the limit of long lag times and good discretization. Further, we exploit observable operator model (OOM) theory to derive an unbiased estimator for the MSM transition matrix that corrects for the effect of starting out of equilibrium, even when short lag times are used. Finally, we show how the OOM framework can be used to estimate the exact eigenvalues or relaxation time scales of the system without estimating an MSM transition matrix, which allows us to practically assess the discretization quality of the MSM. Applications to model systems and molecular dynamics simulation data of alanine dipeptide are included for illustration. The improved MSM estimator is implemented in PyEMMA version 2.3.
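
    For orientation, the standard maximum-likelihood MSM estimator whose bias is analyzed above can be sketched as a row-normalized transition-count matrix; the names and data below are illustrative, and this is the plain estimator, not the OOM-corrected one.

    ```python
    import numpy as np

    def estimate_msm(dtrajs, n_states, lag):
        """Row-normalized transition counts at a given lag time."""
        counts = np.zeros((n_states, n_states))
        for traj in dtrajs:
            for t in range(len(traj) - lag):
                counts[traj[t], traj[t + lag]] += 1.0
        rows = counts.sum(axis=1, keepdims=True)
        # Rows without counts are left as zeros rather than dividing by zero.
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    dtrajs = [np.array([0, 0, 1, 1, 0, 2, 2, 1]), np.array([2, 2, 2, 1, 0, 0])]
    print(estimate_msm(dtrajs, n_states=3, lag=1))
    ```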

  7. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chain management, and numerous studies on inventory control have lately brought forth methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for multi-product, single-supplier situations involving the chemical raw materials of textile industries in Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy is used; it has been suggested that the indirect grouping strategy outperforms direct grouping when the major ordering cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is employed for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each item, so the replenishment cycle time for each product is T×ki (see the cost sketch below). Firstly, based on the data, a comparison between the currently prevailing (individual) process and RAND using actual demands shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected by using Holt's method; however, demand can only be forecast one or two months into the future because of the demand pattern of the industry under consideration. Evidently, applying RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
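
    A hedged sketch of the indirect-grouping cost model that RAND optimizes; the notation is assumed (a major ordering cost S is incurred each base cycle T, item i is replenished every ki·T with minor cost si, annual demand Di, and holding cost hi), and the figures are illustrative, not the paper's data.

    ```python
    import math

    def total_cost(T, k, S, s, D, h):
        """Annual ordering plus holding cost for base cycle T, multipliers k."""
        ordering = (S + sum(si / ki for si, ki in zip(s, k))) / T
        holding = 0.5 * T * sum(ki * hi * Di for ki, hi, Di in zip(k, h, D))
        return ordering + holding

    def best_T(k, S, s, D, h):
        """Optimal base cycle for fixed integer multipliers (set dC/dT = 0)."""
        return math.sqrt(2.0 * (S + sum(si / ki for si, ki in zip(s, k)))
                         / sum(ki * hi * Di for ki, hi, Di in zip(k, h, D)))

    S, s = 100.0, [10.0, 5.0, 8.0]                  # major/minor order costs
    D, h = [1200.0, 400.0, 900.0], [2.0, 1.5, 1.0]  # demands, holding costs
    k = [1, 2, 1]                                   # multipliers, e.g. from RAND
    T = best_T(k, S, s, D, h)
    print(T, total_cost(T, k, S, s, D, h))
    ```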

  8. Units for quantities of dimension one

    NASA Astrophysics Data System (ADS)

    Dybkaer, René

    2004-02-01

    All quantities of dimension one are said to have the SI coherent derived unit "one" with the symbol '1'. (Single quotation marks are used here sometimes to indicate a quote, name, term or symbol; double quotation marks flag a concept when necessary.) Conventionally, the term and symbol may not be combined with the SI prefixes (except for the special terms and symbols for one and 1: radian, rad, and steradian, sr). This restriction is understandable, but leads to correct yet impractical alternatives and ISO deprecated symbols such as ppm or in some cases redundant combinations of units, such as mg/kg. "Number of entities" is dimensionally independent of the current base quantities and should take its rightful place among them. The corresponding base unit is "one". A working definition is given. Other quantities of dimension one are derived as fraction, ratio, efficiency, relative quantity, relative increment or characteristic number and may also use the unit "one", whether considered to be base or derived. The special term 'uno' and symbol 'u' in either case are proposed, allowing combination with SI prefixes.

  9. A pressure consistent bridge correction of Kovalenko-Hirata closure in Ornstein-Zernike theory for Lennard-Jones fluids by apparently adjusting sigma parameter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebato, Yuki; Miyata, Tatsuhiko, E-mail: miyata.tatsuhiko.mf@ehime-u.ac.jp

    Ornstein-Zernike (OZ) integral equation theory is known to overestimate the excess internal energy, U^ex, pressure through the virial route, P_v, and excess chemical potential, μ^ex, for one-component Lennard-Jones (LJ) fluids under hypernetted chain (HNC) and Kovalenko-Hirata (KH) approximations. As one of the bridge correction methods to improve the precision of these thermodynamic quantities, it was shown in our previous paper that the method of apparently adjusting the σ parameter in the LJ potential is effective [T. Miyata and Y. Ebato, J. Mol. Liq. 217, 75 (2016)]. In our previous paper, we evaluated the actual variation in the σ parameter by using a fitting procedure to molecular dynamics (MD) results. In this article, we propose an alternative method to determine the actual variation in the σ parameter. The proposed method utilizes the condition that the virial and compressibility pressures coincide with each other. This method can correct OZ theory without a fitting procedure to MD results, and retains the form of the HNC and/or KH closure. We calculate the radial distribution function, pressure, excess internal energy, and excess chemical potential for one-component LJ fluids to check the performance of our proposed bridge function. We discuss the precision of these thermodynamic quantities by comparing with MD results. In addition, we also calculate a corrected gas-liquid coexistence curve based on a corrected KH-type closure and compare it with MD results.

  10. Self-force correction to geodetic spin precession in Kerr spacetime

    NASA Astrophysics Data System (ADS)

    Akcay, Sarp

    2017-08-01

    We present an expression for the gravitational self-force correction to the geodetic spin precession of a spinning compact object with small but non-negligible mass in a bound, equatorial orbit around a Kerr black hole. We consider only conservative backreaction effects due to the mass of the compact object (m_1), thus neglecting the effects of its spin s_1 on its motion; i.e., we impose s_1 ≪ G m_1^2/c and m_1 ≪ m_2, where m_2 is the mass parameter of the background Kerr spacetime. We encapsulate the correction to the spin precession in ψ, the ratio of the accumulated spin-precession angle to the total azimuthal angle over one radial orbit in the equatorial plane. Our formulation considers the gauge-invariant O(m_1) part of the correction to ψ, denoted by Δψ, and is a generalization of the results of Akcay et al. [Classical Quantum Gravity 34, 084001 (2017), 10.1088/1361-6382/aa61d6] to Kerr spacetime. Additionally, we compute the zero-eccentricity limit of Δψ and show that this quantity differs from the circular-orbit Δψ_circ by a gauge-invariant quantity containing the gravitational self-force correction to general relativistic periapsis advance in Kerr spacetime. Our result for Δψ is expressed in a manner that readily accommodates numerical/analytical self-force computations, e.g., in the radiation gauge, and paves the way for the computation of a new eccentric-orbit Kerr gauge invariant beyond the generalized redshift.
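
    In symbols, this can be restated as follows; Ψ_prec and Φ are assumed names for the accumulated spin-precession angle and the total azimuthal angle over one radial orbit, not notation quoted from the paper.

    ```latex
    % psi compares spin precession to azimuthal advance per radial orbit;
    % Delta psi is its gauge-invariant O(m_1) self-force correction.
    \psi \;=\; \frac{\Psi_{\mathrm{prec}}}{\Phi}\,, \qquad
    \psi \;=\; \psi^{(0)} \;+\; \Delta\psi \;+\; \mathcal{O}(m_1^2)
    ```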

  11. Twenty years of experience with particulate silicone in plastic surgery.

    PubMed

    Planas, J; del Cacho, C

    1992-01-01

    The use of particulate silicone in plastic surgery involves the introduction of solid silicone into the body. The silicone is in small pieces in order for it to adapt to the shape of the defect. This way large quantities can be introduced through small incisions. It is also possible to distribute the silicone particles from outside the skin to make the corrections more regular. This method has been very useful for correcting post-traumatic depressions in the face and all areas where the depression has a rigid back support. We consider it the treatment of choice for correcting the funnel chest deformity.

  12. Blockage corrections for three-dimensional-flow closed-throat wind tunnels, with consideration of the effect of compressibility

    NASA Technical Reports Server (NTRS)

    Herriot, John G

    1947-01-01

    Theoretical blockage corrections are presented for a body of revolution and for a three-dimensional unswept wing in a circular or rectangular wind tunnel. The theory takes account of the effects of the wake and of the compressibility of the fluid, and is based on the assumption that the dimensions of the model are small in comparison with those of the tunnel throat. Formulas are given for correcting a number of the quantities, such as dynamic pressure and Mach number, measured in wind-tunnel tests. The report presents a summary and unification of the existing literature on the subject.

  13. Refraction effects of atmosphere on geodetic measurements to celestial bodies

    NASA Technical Reports Server (NTRS)

    Joshi, C. S.

    1973-01-01

    The problem is considered of obtaining accurate values of refraction corrections for geodetic measurements of celestial bodies. The basic principles of optics governing the phenomenon of refraction are defined, and differential equations are derived for the refraction corrections. The corrections fall into two main categories: (1) refraction effects due to change in the direction of propagation, and (2) refraction effects mainly due to change in the velocity of propagation. The various assumptions made by earlier investigators are reviewed along with the basic principles of improved models designed by investigators of the twentieth century. The accuracy problem for various quantities is discussed, and the conclusions and recommendations are summarized.

  14. 49 CFR 385.17 - Change to safety rating based upon corrective actions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... in CMVs or placardable quantities of hazardous materials. (2) Within 45 days for all other motor.... [65 FR 50935, Aug. 22, 2000, as amended at 72 FR 36788, July 5, 2007; 75 FR 17241, Apr. 5, 2010; 77 FR...

  15. 49 CFR 385.17 - Change to safety rating based upon corrective actions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... in CMVs or placardable quantities of hazardous materials. (2) Within 45 days for all other motor.... [65 FR 50935, Aug. 22, 2000, as amended at 72 FR 36788, July 5, 2007; 75 FR 17241, Apr. 5, 2010; 77 FR...

  16. 7 CFR 1778.10 - Restrictions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... areas from submitting joint proposals for assistance under this part. Each entity applying for financial... Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE..., cost, and are not directly related to correcting the potable water quantity or quality problem. (4) Pay...

  17. 'Trafficking' or 'personal use': do people who regularly inject drugs understand Australian drug trafficking laws?

    PubMed

    Hughes, Caitlin E; Ritter, Alison; Cowdery, Nicholas; Sindicich, Natasha

    2014-11-01

    Legal thresholds for drug trafficking, over which possession of an illicit drug is deemed 'trafficking' as opposed to 'personal use', are employed in all Australian states and territories excepting Queensland. In this paper, we explore the extent to which people who regularly inject drugs understand such laws. Participants from the seven affected states/territories in the 2012 Illicit Drug Reporting System (n = 823) were asked about their legal knowledge of trafficking thresholds: whether, if arrested, quantity possessed would affect legal action taken; and the quantities of heroin, methamphetamine, cocaine and cannabis that would constitute an offence of supply. Data were compared against the actual laws to identify the accuracy of knowledge by drug type and state, and sociodemographics, use and purchasing patterns related to knowledge. Most Illicit Drug Reporting System participants (77%) correctly said that quantity possessed would affect charge received. However, only 55.8% nominated any specific quantity that would constitute an offence of supply, and of those 22.6% nominated a wrong quantity, namely a quantity that was larger than the actual quantity for supply (this varied by state and drug). People who regularly inject drugs have significant gaps in knowledge about Australian legal thresholds for drug trafficking, particularly regarding the actual threshold quantities. This suggests that there may be a need to improve education for this population. Necessity for accurate knowledge would also be lessened by better design of Australian drug trafficking laws. © 2014 Australasian Professional Society on Alcohol and other Drugs.

  18. How the great apes (Pan troglodytes, Pongo pygmaeus, Pan paniscus, Gorilla gorilla) perform on the reversed reward contingency task II: transfer to new quantities, long-term retention, and the impact of quantity ratios.

    PubMed

    Uher, Jana; Call, Josep

    2008-05-01

    We tested 6 chimpanzees (Pan troglodytes), 3 orangutans (Pongo pygmaeus), 4 bonobos (Pan paniscus), and 2 gorillas (Gorilla gorilla) in the reversed reward contingency task. Individuals were presented with pairs of quantities ranging between 0 and 6 food items. Prior to testing, some experienced apes had solved this task using 2 quantities while others were totally naïve. Experienced apes transferred their ability to multiple-novel pairs after 6 to 19 months had elapsed since their initial testing. Two out of 6 naïve apes (1 chimpanzee, 1 bonobo) solved the task--a proportion comparable to that of a previous study using 2 pairs of quantities. Their acquisition speed was also comparable to the successful subjects from that study. The ratio between quantities explained a large portion of the variance but affected naïve and experienced individuals differently. For smaller ratios, naïve individuals were well below 50% correct and experienced ones were well above 50%, yet both groups tended to converge toward 50% for larger ratios. Thus, some apes require no procedural modifications to overcome their strong bias for selecting the larger of 2 quantities. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  19. 27 CFR 30.1 - Gauging of distilled spirits.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... “Gauging Manual Embracing Instructions and Tables for Determining Quantity of Distilled Spirits by Proof... tables, together with their instructions, shall be used, wherever applicable, in making the necessary... distilled spirits contain dissolved solids, temperature correction of the hydrometer reading by the use of...

  20. Text Mining Metal-Organic Framework Papers.

    PubMed

    Park, Sanghoon; Kim, Baekjun; Choi, Sihoon; Boyd, Peter G; Smit, Berend; Kim, Jihan

    2018-02-26

    We have developed a simple text mining algorithm that allows us to identify the surface areas and pore volumes of metal-organic frameworks (MOFs) using manuscript html files as inputs. The algorithm searches for the common units (e.g., m²/g, cm³/g) associated with these two quantities to facilitate the search. On a sample set of over 200 MOFs, the algorithm identified 90% and 88.8% of the correct surface area and pore volume values, respectively. Further application to a test set of randomly chosen MOF html files yielded 73.2% and 85.1% accuracy for the two respective quantities. Most of the errors stem from unorthodox sentence structures that made it difficult to identify the correct data, as well as bolded notations of MOFs (e.g., 1a) that made it difficult to identify their real names. These types of tools will become useful for discovering structure-property relationships among MOFs, as well as for collecting large reference data sets.
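
    The unit-anchored search is easy to sketch with regular expressions: look for a number immediately followed by a surface-area unit (m²/g) or a pore-volume unit (cm³/g). This is a minimal sketch, assuming the html has already been stripped to plain text; the patterns are illustrative assumptions, not the authors' actual code.

```python
import re

# Tolerate spacing variants such as "m2/g", "m 2 /g", "cm3/g", "cm 3 /g".
SA_PATTERN = re.compile(r'(\d+(?:\.\d+)?)\s*m\s*2\s*/\s*g', re.IGNORECASE)
PV_PATTERN = re.compile(r'(\d+(?:\.\d+)?)\s*cm\s*3\s*/\s*g', re.IGNORECASE)

def extract_quantities(text):
    """Return candidate (surface_area, pore_volume) values found in text."""
    surface_areas = [float(m) for m in SA_PATTERN.findall(text)]
    pore_volumes = [float(m) for m in PV_PATTERN.findall(text)]
    return surface_areas, pore_volumes

sample = "The BET surface area of MOF-5 is 3800 m2/g with a pore volume of 1.55 cm3/g."
print(extract_quantities(sample))  # ([3800.0], [1.55])
```

    A real pipeline would still face the failure modes named in the abstract: values embedded in unusual sentence structures, and compound names such as bolded labels that a unit-anchored pattern cannot disambiguate.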

  1. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a general formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
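
    A hedged sketch of the optimal estimate described above: given an assumed signal covariance and a measurement-error variance, the weights that minimize the mean squared error of the temporal average follow from a single linear solve. The exponential correlation model and all numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def optimal_average(t_obs, y, t0, t1, signal_var=1.0, tau=10.0,
                    noise_var=0.2, n_grid=200):
    """MSE-minimizing linear estimate of the signal's average over [t0, t1]."""
    tg = np.linspace(t0, t1, n_grid)                      # grid defining the target average
    # Covariance between observations: signal part plus measurement noise.
    C = signal_var * np.exp(-np.abs(t_obs[:, None] - t_obs[None, :]) / tau)
    C += noise_var * np.eye(len(t_obs))
    # Covariance between each observation and the time-averaged signal.
    c = signal_var * np.exp(-np.abs(t_obs[:, None] - tg[None, :]) / tau).mean(axis=1)
    w = np.linalg.solve(C, c)                             # MSE-minimizing weights
    return w @ y

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 25))                   # irregular sampling times
y = np.sin(2 * np.pi * t / 40.0) + 0.3 * rng.standard_normal(25)
print(optimal_average(t, y, 0.0, 30.0))
```

    The composite average corresponds to the special case of equal weights on the observations inside the averaging window; the suboptimal estimates discussed above use the same linear solve but with approximate covariances.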

  2. What reaches the antenna? How to calibrate odor flux and ligand-receptor affinities.

    PubMed

    Andersson, Martin N; Schlyter, Fredrik; Hill, Sharon Rose; Dekker, Teun

    2012-06-01

    Physiological studies on olfaction frequently ignore the airborne quantities of stimuli reaching the sensory organ. We used a gas chromatography-calibrated photoionization detector to estimate quantities released from standard Pasteur pipette stimulus cartridges during repeated puffing of 27 compounds and verified how lack of quantification could obscure olfactory sensory neuron (OSN) affinities. Chemical structure of the stimulus, solvent, dose, storage condition, puff interval, and puff number all influenced airborne quantities. A model including boiling point and lipophilicity, but excluding vapor pressure, predicted airborne quantities from stimuli in paraffin oil on filter paper. We recorded OSN responses of Drosophila melanogaster, Ips typographus, and Culex quinquefasciatus, to known quantities of airborne stimuli. These demonstrate that inferred OSN tuning width, ligand affinity, and classification can be confounded and require stimulus quantification. Additionally, proper dose-response analysis shows that Drosophila AB3A OSNs are not promiscuous, but highly specific for ethyl hexanoate, with other earlier proposed ligands 10- to 10 000-fold less potent. Finally, we reanalyzed published Drosophila OSN data (DoOR) and demonstrate substantial shifts in affinities after compensation for quantity and puff number. We conclude that consistent experimental protocols are necessary for correct OSN classification and present some simple rules that make calibration, even retroactively, readily possible.

  3. Groundwater similarity across a watershed derived from time-warped and flow-corrected time series

    NASA Astrophysics Data System (ADS)

    Rinderer, M.; McGlynn, B. L.; van Meerveld, H. J.

    2017-05-01

    Information about catchment-scale groundwater dynamics is necessary to understand how catchments store and release water and why water quantity and quality vary in streams. However, groundwater level monitoring is often restricted to a limited number of sites. Knowledge of the factors that determine similarity between monitoring sites can be used to predict catchment-scale groundwater storage and the connectivity of different runoff source areas. We used distance-based and correlation-based similarity measures to quantify the spatial and temporal differences in shallow groundwater similarity for 51 monitoring sites in a Swiss prealpine catchment. The 41-month-long time series were preprocessed using Dynamic Time-Warping and a Flow-corrected Time Transformation to account for small timing differences and bias toward low-flow periods. The mean distance-based groundwater similarity was correlated to topographic indices, such as upslope contributing area, topographic wetness index, and local slope. Correlation-based similarity was less related to landscape position but instead revealed differences between seasons. Analysis of variance and partial Mantel tests showed that landscape position, represented by the topographic wetness index, explained 52% of the variability in mean distance-based groundwater similarity, while spatial distance, represented by the Euclidean distance, explained only 5%. The variability in distance-based similarity and correlation-based similarity between groundwater and streamflow time series was significantly larger for midslope locations than for other landscape positions. This suggests that groundwater dynamics at these midslope sites, which are important for understanding runoff source areas and hydrological connectivity at the catchment scale, are the most difficult to predict.
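
    For illustration, here is a generic textbook implementation of the Dynamic Time-Warping preprocessing named above, which absorbs small timing offsets between series before distance-based similarity is computed. It is not the authors' code, and practical refinements such as warping-window constraints are omitted.

```python
import numpy as np

def dtw_distance(a, b):
    """Accumulated-cost Dynamic Time Warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of the three admissible warp steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0.0, 6.0, 60))
y = np.sin(np.linspace(0.3, 6.3, 60))   # same signal, slightly shifted in time
print(dtw_distance(x, y))                # far smaller than the plain pointwise mismatch
```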

  4. Biogeosystem Technique as a method to correct the climate

    NASA Astrophysics Data System (ADS)

    Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana

    2017-04-01

    Climate change and the uncertainties of the biosphere are on the agenda. Correction of the climate drivers will make the climate and biosphere more predictable and certain. Direct sequestration of fossil industrial hydrocarbons and of the natural methane excess to reduce the greenhouse effect is a dangerous mistake. Most carbon now exists in the form of geological deposits, and further reduction of the carbon content of the biosphere and atmosphere leads to degradation of life. We propose the biological management of the greenhouse gases, changing the ratio of the biological and atmospheric phases of the carbon and water cycles. Biological correction of the carbon cycle is the obvious measure, because biological alterations of the Earth's climate have always been an important peculiarity of the planet's history. At the first stage of the Earth's climate correction algorithm we use a few leading principles, as follows: the more greenhouse gases in the atmosphere, the stronger the greenhouse effect; the more biological production of terrestrial ecosystems, the higher the biological sequestration of carbon dioxide from the atmosphere; the more fresh ionized active oxygen produced biologically, the higher the rate of methane and hydrogen sulfide oxidation in the atmosphere, water, and soil; the more carbon held in the form of live biological matter in soil and above-ground biomass, the less carbon in the atmosphere; the smaller the sink of carbon to water systems, the smaller the emission of greenhouse gases from water systems; the lower the rate of water consumption per unit of biological production, the lower the transpiration rate of water vapor as a greenhouse gas; the higher the intra-soil utilization of mortal biomass and of biological and mineral wastes for plant nutrition instead of their mineralization to greenhouse gases, the smaller the greenhouse effect; the more fossil industrial hydrocarbons are used, the higher the Earth's biomass can be; the higher the biomass on the Earth, the more ecologically safe food, raw material, and biofuel can be produced; and the less energy is consumed for climate correction, the better. The proposed algorithm was never discussed before, because most of its ingredients were previously unworkable. Now the possibility of executing the algorithm exists in the framework of our new scientific-technical branch - Biogeosystem Technique (BGT*). BGT* is a transcendental (non-imitating natural processes) approach to soil processing and to the regulation of energy, matter, and water fluxes and the biological productivity of the biosphere: intra-soil machining to provide a new highly productive dispersed soil system; intra-soil pulse continuous-discrete watering of plants to reduce the transpiration rate and the water consumption of plants by a factor of 5-20; and intra-soil environmentally safe return of matter during intra-soil milling processing and (or) intra-soil pulse continuous-discrete watering with nutrition. The following become possible: waste management; reducing the flow of nutrients to water systems; transformation of carbon and other organic and mineral substances in the soil into plant nutrition elements; less degradation of biological matter to greenhouse gases; increased biological sequestration of carbon dioxide by terrestrial photosynthesis; oxidation of methane and hydrogen sulfide by fresh, photosynthetically ionized, biologically active oxygen; and expansion of the active terrestrial site of the biosphere. A high biological product output of the biosphere will be gained. BGT* robotic systems are of low cost, energy, and material consumption. By BGT* methods the uncertainties of climate and biosphere will be reduced. Key words: Biogeosystem Technique, method to correct, climate

  5. Prognostic value of efficiently correcting nocturnal desaturations after one month of non-invasive ventilation in amyotrophic lateral sclerosis: a retrospective monocentre observational cohort study.

    PubMed

    Gonzalez-Bermejo, Jésus; Morelot-Panzini, Capucine; Arnol, Nathalie; Meininger, Vincent; Kraoua, Salah; Salachas, François; Similowski, Thomas

    2013-09-01

    NIV adherence ('quantity' of ventilation) has a prognostic impact in amyotrophic lateral sclerosis (ALS). We hypothesized that NIV effectiveness ('quality') could also have a similar impact. NIV effectiveness was evaluated in 82 patients within the first month (M1) and every three months thereafter (symptoms, arterial blood gases, and nocturnal pulse oxygen saturation, SpO2). Kaplan-Meier survival and risk factors for mortality one year after NIV initiation were evaluated. Forty patients were considered 'correctly ventilated' at M1 (Group 1: less than 5% of nocturnal oximetry time with SpO2 < 90%, denoted TS90), while 42 were not (Group 2). Both groups were comparable in terms of respiratory and neurological baseline characteristics. Survival was better in Group 1 (75% survival at 12 months) than in Group 2 (43% survival at 12 months, p = 0.002). In 12 Group 2 patients, corrective measures were efficient in correcting TS90 at six months; in this subgroup, one-year mortality was not different from that in Group 1. Multivariate analysis identified independent mortality risk factors, expectedly including bulbar involvement (HR = 4.31 (1.73-10.76), p = 0.002), 'rapid respiratory decline' (HR = 3.55 (1.29-9.75), p = 0.014), and vital capacity (HR = 0.97 (0.95-0.99), p = 0.010), but also inadequate ventilation in the first month (HR = 2.32 (1.09-4.94), p = 0.029). In conclusion, in ALS patients the effectiveness of NIV in correcting nocturnal desaturations is an independent prognostic factor.

  6. Hypergol Maintenance Facility Hazardous Waste South Staging Areas, SWMU 070 Corrective Measures Implementation

    NASA Technical Reports Server (NTRS)

    Miller, Ralinda R.

    2016-01-01

    This document presents the Corrective Measures Implementation (CMI) Year 10 Annual Report for implementation of corrective measures at the Hypergol Maintenance Facility (HMF) Hazardous Waste South Staging Areas at Kennedy Space Center, Florida. The work is being performed by Tetra Tech, Inc., for the National Aeronautics and Space Administration (NASA) under Indefinite Delivery Indefinite Quantity (IDIQ) NNK12CA15B, Task Order (TO) 07. Mr. Harry Plaza, P.E., of NASA's Environmental Assurance Branch is the Remediation Project Manager for John F. Kennedy Space Center. The Tetra Tech Program Manager is Mr. Mark Speranza, P.E., and the Tetra Tech Project Manager is Robert Simcik, P.E.

  7. Thermal properties of nuclear matter in a variational framework with relativistic corrections

    NASA Astrophysics Data System (ADS)

    Zaryouni, S.; Hassani, M.; Moshfegh, H. R.

    2014-01-01

    The properties of hot symmetric nuclear matter over a wide range of densities and temperatures are investigated by employing the AV14 potential within the lowest order constrained variational (LOCV) method, with the inclusion of a phenomenological three-body force as well as relativistic corrections. The relativistic corrections to the many-body kinetic energies, as well as the boost interaction corrections, are presented for a wide range of densities and temperatures. The free energy, pressure, incompressibility, and other thermodynamic quantities of symmetric nuclear matter are obtained and discussed. The critical temperature is found, and the liquid-gas phase transition is analyzed both with and without the inclusion of three-body forces and relativistic corrections in the LOCV approach. It is shown that the critical temperature is strongly affected by the three-body forces but does not depend on the relativistic corrections. Finally, the results obtained in the present study are compared with other many-body calculations and experimental predictions.

  8. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area, both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by the light and heavy rain rates. This is useful in estimating the fraction of rainfall contributed by the rain rates that go undetected by the radar. The results at high rain rates provide a cross-check on the usual attenuation correction methods that are applied at the highest resolution of the instrument.
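
    The ATI idea in the middle of this abstract can be made concrete: if the conditional rain-rate distribution is lognormal and the mean-to-standard-deviation ratio is known, the fraction of area above a threshold fixes the whole distribution, and hence the mean. The lognormal form and the numbers below are illustrative assumptions, not TRMM PR values.

```python
import numpy as np
from scipy.stats import norm

def mean_rain_from_fraction(frac_above, threshold, mean_to_std=2.0):
    """Recover the mean rain rate from the fraction of area above `threshold`."""
    s2 = np.log(1.0 + 1.0 / mean_to_std**2)      # lognormal shape fixed by the ratio
    s = np.sqrt(s2)
    # P(R > threshold) = 1 - Phi((ln threshold - mu) / s)  =>  solve for mu.
    mu = np.log(threshold) - s * norm.ppf(1.0 - frac_above)
    return np.exp(mu + 0.5 * s2)                 # mean of the lognormal

print(mean_rain_from_fraction(frac_above=0.10, threshold=5.0))  # ~3 mm/h, illustrative
```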

  9. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400

  10. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks.

    PubMed

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-11-30

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.
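
    A hedged numerical sketch of the key idea in the two records above: the interpolated atmospheric correction enters the adjustment as a pseudo-observation with its own estimated variance, rather than being fixed deterministically. The tiny two-parameter system, its geometry rows, and all numbers below are invented for illustration; this is not a PPP implementation.

```python
import numpy as np

# Unknowns x = (position offset, atmospheric delay), both in metres.
A_code = np.array([[1.0, 1.0],
                   [1.0, 0.8],
                   [1.0, 1.2]])           # geometry rows for three range observations
y_code = np.array([3.05, 2.83, 3.21])     # observed-minus-computed ranges
var_code = 0.10**2 * np.ones(3)

# Pseudo-observation: the interpolated atmospheric correction "observes" the
# delay parameter directly, weighted by its estimated (not assumed-zero) variance.
A_pseudo = np.array([[0.0, 1.0]])
y_pseudo = np.array([1.02])
var_pseudo = np.array([0.05**2])

A = np.vstack([A_code, A_pseudo])
y = np.concatenate([y_code, y_pseudo])
W = np.diag(1.0 / np.concatenate([var_code, var_pseudo]))

x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)   # weighted least squares
print(x_hat)
```

    Inflating `var_pseudo` reproduces the deterministic-correction case in the limit of zero variance, and the divergence between reference-station and user atmospheres in the limit of large variance.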

  11. Introducing Causality and Power into Family Therapy Theory: A Correction to the Systemic Paradigm.

    ERIC Educational Resources Information Center

    Fish, Vincent

    1990-01-01

    Proposes that concepts of causality and power are compatible with systemic paradigm based on cybernetics of Ashby rather than that of Bateson. Criticizes Bateson's repudiation of causality and power; addresses related Batesonian biases against "quantity" and "logic." Contrasts relevant aspects of Ashby's cybernetic theory with…

  12. 43 CFR 418.37 - Disincentives for lower efficiency.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... borrowed will be accounted for in the form of a deficit in Lahontan Reservoir storage. This deficit amount will be added to the actual Lahontan Reservoir storage quantity for the purpose of determining the... and other factors. This approach should allow the District to plan its operation to correct for any...

  13. 43 CFR 418.37 - Disincentives for lower efficiency.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... borrowed will be accounted for in the form of a deficit in Lahontan Reservoir storage. This deficit amount will be added to the actual Lahontan Reservoir storage quantity for the purpose of determining the... and other factors. This approach should allow the District to plan its operation to correct for any...

  14. 40 CFR 258.57 - Selection of remedy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-water quantity and quality; (iv) The potential damage to wildlife, crops, vegetation, and physical... MUNICIPAL SOLID WASTE LANDFILLS Ground-Water Monitoring and Corrective Action § 258.57 Selection of remedy... environment; (2) Attain the ground-water protection standard as specified pursuant to §§ 258.55 (h) or (i); (3...

  15. 40 CFR 258.57 - Selection of remedy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-water quantity and quality; (iv) The potential damage to wildlife, crops, vegetation, and physical... MUNICIPAL SOLID WASTE LANDFILLS Ground-Water Monitoring and Corrective Action § 258.57 Selection of remedy... environment; (2) Attain the ground-water protection standard as specified pursuant to §§ 258.55 (h) or (i); (3...

  16. 40 CFR 258.57 - Selection of remedy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-water quantity and quality; (iv) The potential damage to wildlife, crops, vegetation, and physical... MUNICIPAL SOLID WASTE LANDFILLS Ground-Water Monitoring and Corrective Action § 258.57 Selection of remedy... environment; (2) Attain the ground-water protection standard as specified pursuant to §§ 258.55 (h) or (i); (3...

  17. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site response, and instrument response. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize a stacking technique to obtain systematic source parameter estimates for a large quantity of events at the same time. This allows us to examine a large quantity of events systematically, facilitating the analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regional focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods, within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimates and the associated problems.
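
    The spectral-ratio step described above can be sketched as fitting the ratio of two omega-square source models, since the common path and site terms cancel between a co-located event pair. The Brune omega-square form is a standard textbook assumption, and the synthetic data below are illustrative; this is not this study's processing chain.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_ratio(f, moment_ratio, fc_big, fc_small):
    # Ratio of two Brune omega-square spectra; path/site terms have cancelled.
    return moment_ratio * (1.0 + (f / fc_small)**2) / (1.0 + (f / fc_big)**2)

f = np.logspace(-1, 1.5, 80)                          # 0.1-30 Hz
truth = brune_ratio(f, 100.0, 1.0, 8.0)               # synthetic "observed" ratio
rng = np.random.default_rng(1)
observed = truth * rng.lognormal(sigma=0.05, size=f.size)

popt, _ = curve_fit(brune_ratio, f, observed, p0=(50.0, 2.0, 5.0))
print(popt)   # recovered (moment ratio, large-event fc, small-event fc)
```

    The corner frequencies then feed standard stress-drop estimates; the choice of model form and of the fitting bandwidth is exactly the source of bias the abstract warns about.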

  18. Effect of formulation and baking conditions on the structure and development of non-enzymatic browning in biscuit models using images.

    PubMed

    Leiva-Valenzuela, Gabriel A; Quilaqueo, Marcela; Lagos, Daniela; Estay, Danilo; Pedreschi, Franco

    2018-04-01

    The aim of this research was to determine the effect of composition (dietary fiber = DF, fat = F, and gluten = G) and baking time on target microstructural parameters observed in images of potato- and wheat-starch biscuits. Microstructures were studied using a Scanning Electron Microscope (SEM). Non-enzymatic browning (NEB) was assessed using color image analysis. Texture and moisture analyses were performed to gain a better understanding of the baking process. Analysis of the images revealed that the starch granules retained their native form at the end of baking, suggesting their incomplete gelatinization. Granule size was similar at several different baking times, with an average equivalent diameter of 9 and 27 µm for wheat and potato starch, respectively. However, samples with different levels of DF and G increased in circularity during baking by more than 30%, while also increasing in hardness. NEB developed during baking, with the maximum increase observed between 13 and 19 min. This was reflected in decreased luminosity (L*) values due to a decrease in moisture levels. After 19 min, luminosity did not vary significantly. The ingredients that are used, as well as their quantities, can affect sample L* values. Therefore, choosing the correct ingredients and quantities can lead to different microstructures in the biscuits, with varying amounts of NEB products.

  19. Research on models of Digital City geo-information sharing platform

    NASA Astrophysics Data System (ADS)

    Xu, Hanwei; Liu, Zhihui; Badawi, Rami; Liu, Haiwang

    2009-10-01

    The data related to Digital City is large in quantity, heterogeneous, and multidimensional. With the original copy-based method of data sharing, application departments cannot solve the problems of data updating and data security in real time. This paper first analyzes various patterns of sharing Digital City information and, on this basis, provides a new shared mechanism for GIS Services, in which the data producers provide Geographic Information Services to the application users through a Web API, so that the data producers and the data users can each focus on their respective roles. The author then takes an application system for supermarket management as an example to demonstrate the correctness and effectiveness of the proposed method.

  20. Detection and characterization of pulses in broadband seismometers

    USGS Publications Warehouse

    Wilson, David; Ringler, Adam; Hutt, Charles R.

    2017-01-01

    Pulsing - caused either by mechanical or electrical glitches, or by microtilt local to a seismometer - can significantly compromise the long‐period noise performance of broadband seismometers. High‐fidelity long‐period recordings are needed for accurate calculation of quantities such as moment tensors, fault‐slip models, and normal‐mode measurements. Such pulses have long been recognized in accelerometers, and methods have been developed to correct these acceleration steps, but considerable work remains to be done in order to detect and correct similar pulses in broadband seismic data. We present a method for detecting and characterizing the pulses using data from a range of broadband sensor types installed in the Global Seismographic Network. The technique relies on accurate instrument response removal and employs a moving‐window approach looking for acceleration baseline shifts. We find that pulses are present at varying levels in all sensor types studied. Pulse‐detection results compared with average daily station noise values are consistent with predicted noise levels of acceleration steps. This indicates that we can calculate maximum pulse amplitude allowed per time window that would be acceptable without compromising long‐period data analysis.
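
    A simplified sketch of the moving-window search described above: after instrument response removal, compare the mean acceleration in adjacent windows and flag shifts in the baseline. The window length and threshold are placeholder choices, not the authors' operational settings.

```python
import numpy as np

def find_baseline_shifts(accel, win=200, threshold=5.0):
    """Return sample indices where the acceleration baseline jumps."""
    hits = []
    for i in range(win, len(accel) - win):
        before = accel[i - win:i]
        after = accel[i:i + win]
        jump = after.mean() - before.mean()
        scale = before.std() + 1e-12          # guard against a zero-variance window
        if abs(jump) > threshold * scale:
            hits.append(i)
    return hits

rng = np.random.default_rng(2)
a = 0.01 * rng.standard_normal(5000)
a[3000:] += 0.5                       # synthetic acceleration step (a "pulse")
print(find_baseline_shifts(a)[:3])    # earliest flags as the window slides onto the step
```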

  1. Inference of topology and the nature of synapses, and the flow of information in neuronal networks

    NASA Astrophysics Data System (ADS)

    Borges, F. S.; Lameu, E. L.; Iarosz, K. C.; Protachevicz, P. R.; Caldas, I. L.; Viana, R. L.; Macau, E. E. N.; Batista, A. M.; Baptista, M. S.

    2018-02-01

    The characterization of neuronal connectivity is one of the most important matters in neuroscience. In this work, we show that a recently proposed informational quantity, the causal mutual information, employed with an appropriate methodology, can be used not only to correctly infer the direction of the underlying physical synapses but also to identify their excitatory or inhibitory nature, using bivariate time series that are easy to handle and measure. The success of our approach relies on a surprising property found in neuronal networks, by which nonadjacent neurons do "understand" each other (positive mutual information); however, this exchange of information is not capable of causing an effect (zero transfer entropy). Remarkably, inhibitory connections, responsible for enhancing synchronization, transfer more information than excitatory connections, which are known to enhance entropy in the network. We also demonstrate that our methodology can be used to correctly infer the directionality of synapses even in the presence of dynamic and observational Gaussian noise, and that it is also successful in providing the effective directionality of intermodular connectivity when only mean fields can be measured.
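
    As an illustration of the kind of directed measure discussed above, here is a generic plug-in (histogram) estimator of transfer entropy for binary, spike-like series with history length 1. It is a standard construction, not the paper's causal mutual information implementation.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Transfer entropy from x to y (bits), binary arrays, history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs = Counter(zip(y[1:], y[:-1]))             # (y_{t+1}, y_t)
    cond = Counter(zip(y[:-1], x[:-1]))             # (y_t, x_t)
    marg = Counter(y[:-1])                          # y_t
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_both = c / cond[(y0, x0)]
        p_y1_given_y = pairs[(y1, y0)] / marg[y0]
        te += p_joint * np.log2(p_y1_given_both / p_y1_given_y)
    return te

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 10000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]                        # y is driven by x with a one-step delay
print(transfer_entropy(x, y))          # ~1 bit: x strongly drives y
print(transfer_entropy(y, x))          # ~0 bits: no back influence
```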

  2. Reply to ‘No correction for the light propagation within the cube: Comment on Relativistic theory of the falling cube gravimeter’

    NASA Astrophysics Data System (ADS)

    Ashby, Neil

    2018-06-01

    The comment (Nagornyi 2018 Metrologia) claims that, notwithstanding the conclusions stated in the paper Relativistic theory of the falling cube gravimeter (Ashby 2008 Metrologia 55 1–10), there is no need to consider the dimensions or refractive index of the cube in fitting data from falling cube absolute gravimeters; additional questions are raised about matching quartic polynomials while determining only three quantities. The comment also suggests errors were made in Ashby (2008 Metrologia 55 1–10) while implementing the fitting routines on which the conclusions were based. The main contention of the comment is shown to be invalid because retarded time was not properly used in constructing a fictitious cube position. Such a fictitious position, fixed relative to the falling cube, is derived and shown to be dependent on cube dimensions and refractive index. An example is given showing how in the present context, polynomials of fourth order can be effectively matched by determining only three quantities, and a new compact characterization of the interference signal arriving at the detector is given. Work of the U.S. government, not subject to copyright.

  3. Air pollution effect of SO2 and/or aliphatic hydrocarbons on marble statues in Archaeological Museums.

    PubMed

    Agelakopoulou, T; Metaxa, E; Karagianni, Ch-S; Roubani-Kalantzopoulou, F

    2009-09-30

    This study allowed the identification of the main physicochemical characteristics of the deterioration of the materials used in the construction of ancient Greek statues, in order to plan a correct restoration methodology. The method of Reversed-Flow Inverse Gas Chromatography is appropriate for investigating the influence of air pollutants on authentic pieces from the Greek Archaeological Museum of Kavala, near Salonica. Six local physicochemical quantities, which refer to the influence of one or two pollutants (synergistic effect), were determined for each system. These quantities answer the question of when, why, and how materials of cultural heritage are attacked.

  4. Numerical analysis of temperature field in the high speed rotary dry-milling process

    NASA Astrophysics Data System (ADS)

    Wu, N. X.; Deng, L. J.; Liao, D. H.

    2018-01-01

    To study the effect of the temperature field in ceramic dry granulation, the ceramic dry-granulation experimental equipment was simplified, a physical model was established, and, based on the Euler-Euler mathematical model, the temperature of the dry granulation process was simulated as a function of granulation time. The relationship between granulation temperature and granulation effect in the dry granulation process was analyzed, and the correctness of the numerical simulation was verified by measuring the fluidity index of the ceramic bodies. Numerical simulation and experimental results showed that when the granulation time was 4 min, 5 min, and 6 min, the maximum temperature inside the granulation chamber was 70°C, 85°C, and 95°C, respectively, the equilibrium of the temperature in the granulation chamber was weakened, and the fluidity index of the billet particles was 56.4, 89.7, and 81.6, respectively. The results showed that a granulation time of 5 min gave the best granulation effect. When the granulation chamber temperature exceeded 85°C, the fluidity index and the quantity of effective billet particles were reduced.

  5. Multi-look fusion identification: a paradigm shift from quality to quantity in data samples

    NASA Astrophysics Data System (ADS)

    Wong, S.

    2009-05-01

    A multi-look identification method known as score-level fusion is found to be capable of achieving very high identification accuracy, even when low-quality target signatures are used. Analysis using measured ground vehicle radar signatures has shown that a 97% correct identification rate can be achieved using this multi-look fusion method; in contrast, only a 37% accuracy rate is obtained when a single target signature is used as input. The results suggest that quantity can be used to replace quality of the target data in improving identification accuracy. With the advent of sensor technology, a large number of target signatures of marginal quality can be captured routinely. This quantity-over-quality approach allows maximum exploitation of the available data to improve target identification performance, and it has the potential to develop into a disruptive technology.
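
    Score-level fusion itself is simple to sketch: per-look class scores are averaged before the class decision, so many marginal-quality looks substitute for one high-quality signature. The Gaussian score model below is an illustrative assumption, not the paper's radar classifier.

```python
import numpy as np

rng = np.random.default_rng(5)
n_classes, n_looks = 10, 30
true_class = 3

# Per-look class scores: weak evidence for the true class, buried in noise.
scores = rng.standard_normal((n_looks, n_classes))
scores[:, true_class] += 0.5

single_look = scores[0].argmax()          # one marginal look: often wrong
fused = scores.mean(axis=0).argmax()      # averaging the looks: usually correct
print(single_look, fused)
```

    Averaging shrinks the score noise by roughly the square root of the number of looks, which is why accuracy climbs steeply with quantity even when each individual look is poor.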

  6. Optical radiation measurements and instrumentation.

    PubMed

    Andersen, F A; Landry, R J

    1981-07-01

    Accurate measurement of optical radiation is required when sources of optical radiation are used in biological research. Such measurement of broad-band noncoherent optical radiation usually must be performed by a highly trained specialist using sophisticated, complex, and expensive instruments. Presenting the results of such measurements requires correct use of quantities and units with which many biological researchers are unfamiliar. The measurement process, quantities, units, measurement systems and instruments, and the uncertainties associated with optical radiation measurements are reviewed in this paper. A conventional technique for evaluating the potential hazards associated with broad-band sources of optical radiation and a spectroradiometer developed to measure spectral quantities are described. A new prototype ultraviolet radiation hazard monitor, which has recently been developed, is also presented. This new instrument utilizes a spectrograph and a spectrally weighting mechanical mask, and it provides a direct reading of the effective irradiance for wavelengths less than 315 nm.

  7. Thermodynamics of a class of regular black holes with a generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Maluf, R. V.; Neves, Juliano C. S.

    2018-05-01

    In this article, we present a study on the thermodynamics of a class of regular black holes that includes the Bardeen and Hayward regular black holes. We obtain thermodynamic quantities such as the Hawking temperature, entropy, and heat capacity for the entire class. As part of an effort to indicate a physical observable that distinguishes regular black holes from singular black holes, we suggest that regular black holes are colder than singular black holes. Moreover, contrary to the Schwarzschild black hole, this class of regular black holes may be thermodynamically stable. From a generalized uncertainty principle, we also obtain the quantum-corrected thermodynamics for the studied class. Such quantum corrections provide a logarithmic term for the quantum-corrected entropy.

  8. 7 CFR 1778.7 - Project priority.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... shortage. Grants made in accordance with § 1778.11(b) of this part to assist an established water system remedy an acute shortage of quality water or correct a significant decline in the quantity or quality of... 7 Agriculture 12 2011-01-01 2011-01-01 false Project priority. 1778.7 Section 1778.7 Agriculture...

  9. 19 CFR Appendix to Part 163 - Interim (a)(1)(A) List

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... for commercial samples, tools, theatrical effects §§ 10.70, 10.71Purebred breeding certificate § 10..., merchandise (commercial product) description, quantities, values, unit price, trade terms, part, model, style... Access Program (9802/GSP/CBI) § 141.89CF 5523 Part 141Corrected Commercial Invoice 141.86 (e)Packing List...

  10. 19 CFR Appendix to Part 163 - Interim (a)(1)(A) List

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... for commercial samples, tools, theatrical effects §§ 10.70, 10.71Purebred breeding certificate § 10..., merchandise (commercial product) description, quantities, values, unit price, trade terms, part, model, style... Access Program (9802/GSP/CBI) § 141.89CF 5523 Part 141Corrected Commercial Invoice 141.86 (e)Packing List...

  11. 19 CFR Appendix to Part 163 - Interim (a)(1)(A) List

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... for commercial samples, tools, theatrical effects §§ 10.70, 10.71Purebred breeding certificate § 10..., merchandise (commercial product) description, quantities, values, unit price, trade terms, part, model, style... Access Program (9802/GSP/CBI) § 141.89CF 5523 Part 141Corrected Commercial Invoice 141.86 (e)Packing List...

  12. 19 CFR Appendix to Part 163 - Interim (a)(1)(A) List

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... for commercial samples, tools, theatrical effects §§ 10.70, 10.71Purebred breeding certificate § 10..., merchandise (commercial product) description, quantities, values, unit price, trade terms, part, model, style... Access Program (9802/GSP/CBI) § 141.89CF 5523 Part 141Corrected Commercial Invoice 141.86 (e)Packing List...

  13. 19 CFR Appendix to Part 163 - Interim (a)(1)(A) List

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... for commercial samples, tools, theatrical effects §§ 10.70, 10.71Purebred breeding certificate § 10..., merchandise (commercial product) description, quantities, values, unit price, trade terms, part, model, style... Access Program (9802/GSP/CBI) § 141.89CF 5523 Part 141Corrected Commercial Invoice 141.86 (e)Packing List...

  14. 40 CFR 98.343 - Calculating GHG emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... potential (metric tons CH4/metric ton waste) = MCF × DOC × DOCF × F × 16/12. MCF = Methane correction factor... = Methane emissions from the landfill in the reporting year (metric tons CH4). GCH 4 = Modeled methane...). Emissions = Methane emissions from the landfill in the reporting year (metric tons CH4). R = Quantity of...

  15. Graviton propagator from background-independent quantum gravity.

    PubMed

    Rovelli, Carlo

    2006-10-13

    We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.

  16. 76 FR 68368 - Airworthiness Directives; DASSAULT AVIATION Model MYSTERE-FALCON 900 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-04

    ... Mystere-Falcon 900 aeroplanes experienced fuel leakage from a defective fuel high-level sensor located in the wing front spar. Investigations revealed that the leakage was due to a defective fuel quantity sensor * * *. This condition, if not detected and corrected, could lead to an internal fuel leakage with...

  17. Statistical physics when the minimum temperature is not absolute zero

    NASA Astrophysics Data System (ADS)

    Chung, Won Sang; Hassanabadi, Hassan

    2018-04-01

    In this paper, a nonzero minimum temperature is considered, based on the third law of thermodynamics and the existence of a minimal momentum. From the assumption of a nonzero positive minimum temperature in nature, we deform the definitions of some thermodynamical quantities and investigate nonzero-minimum-temperature corrections to well-known thermodynamical problems.

  18. 40 CFR 257.27 - Selection of remedy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... withdrawal rate of users; (iii) Ground-water quantity and quality; (iv) The potential damage to wildlife... Ground-Water Monitoring and Corrective Action § 257.27 Selection of remedy. (a) Based on the results of... ground-water protection standard as specified pursuant to §§ 257.25 (h) or (i); (3) Control the source(s...

  19. 40 CFR 257.27 - Selection of remedy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... withdrawal rate of users; (iii) Ground-water quantity and quality; (iv) The potential damage to wildlife... Ground-Water Monitoring and Corrective Action § 257.27 Selection of remedy. (a) Based on the results of... ground-water protection standard as specified pursuant to §§ 257.25 (h) or (i); (3) Control the source(s...

  20. 40 CFR 257.27 - Selection of remedy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... withdrawal rate of users; (iii) Ground-water quantity and quality; (iv) The potential damage to wildlife... Ground-Water Monitoring and Corrective Action § 257.27 Selection of remedy. (a) Based on the results of... ground-water protection standard as specified pursuant to §§ 257.25 (h) or (i); (3) Control the source(s...

  1. Effect of Malmquist bias on correlation studies with IRAS data base

    NASA Technical Reports Server (NTRS)

    Verter, Frances

    1993-01-01

    The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B), as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here, in which each galaxy observation is weighted by its sampling volume. The results of the correlations and regressions are significantly changed in the anticipated sense: the corrected correlation confidences are lower, and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias removes the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
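
    A hedged sketch of inverse-sampling-volume weighting in the spirit of the correlation correction described above (the 1/Vmax idea): each detected galaxy is weighted by the reciprocal of the volume within which it could have been detected, so intrinsically faint objects, visible only nearby, are not underrepresented. The flux limit, survey depth, and toy data are illustrative assumptions, not the IRAS sample.

```python
import numpy as np

def weighted_corr(x, y, w):
    """Weighted Pearson correlation coefficient."""
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx)**2))
    sy = np.sqrt(np.sum(w * (y - my)**2))
    return cov / (sx * sy)

rng = np.random.default_rng(4)
L = 10**rng.uniform(8, 11, 500)            # luminosities (arbitrary units)
d = rng.uniform(1, 100, 500)               # distances (Mpc), survey depth 100 Mpc
flux_limit = 1e5
detected = L / (4 * np.pi * d**2) > flux_limit

# Maximum distance at which each detected galaxy stays above the flux limit,
# capped at the survey depth; its sphere is the sampling volume Vmax.
d_max = np.sqrt(L[detected] / (4 * np.pi * flux_limit))
v_max = (4.0 / 3.0) * np.pi * np.minimum(d_max, 100.0)**3

logL = np.log10(L[detected])
logL2 = logL + 0.1 * rng.standard_normal(detected.sum())  # a second toy property
print(weighted_corr(logL, logL2, 1.0 / v_max))
```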

  2. How Accurately Does the Free Complement Wave Function of a Helium Atom Satisfy the Schroedinger Equation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Nakatsuji, Hiroshi

    2008-12-12

    The local energy, defined by Hψ/ψ, must be equal to the exact energy E at any coordinate of an atom or molecule, as long as the ψ under consideration is exact. The discrepancy of this quantity from E is a stringent test of the accuracy of the calculated wave function. The H-square error for a normalized ψ, defined by σ² ≡ ⟨ψ|(H − E)²|ψ⟩, is also a severe test of the accuracy. Using these quantities, we have examined the accuracy of our wave function of the helium atom calculated using the free complement method that was developed to solve the Schroedinger equation. Together with the variational upper bound, the lower bound of the exact energy calculated using a modified Temple's formula ensured the definitely correct value of the helium fixed-nucleus ground-state energy to be -2.903 724 377 034 119 598 311 159 245 194 4 a.u., which is correct to 32 digits.
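
    In standard notation, the quantities just described are summarized below; the last line is the textbook form of Temple's lower bound (the paper uses a modified version of it), valid when the variational energy lies below the first excited level E1.

```latex
\begin{align*}
  E_L(\mathbf{r}) &= \frac{H\psi(\mathbf{r})}{\psi(\mathbf{r})}
    & &\text{local energy; equals } E \text{ everywhere iff } \psi \text{ is exact,} \\
  \sigma^2 &\equiv \langle\psi|(H-E)^2|\psi\rangle
    & &\text{the } H\text{-square error for normalized } \psi, \\
  E_0 &\ge \langle H\rangle - \frac{\sigma^2}{E_1 - \langle H\rangle}
    & &\text{Temple's bound, for } \langle H\rangle < E_1 .
\end{align*}
```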

  3. Strategic trade-offs between quality and quantity in working memory

    PubMed Central

    Fougnie, Daryl; Cormiea, Sarah M.; Kanabar, Anish; Alvarez, George A.

    2016-01-01

    Is working memory capacity determined by an immutable limit—e.g. four memory storage slots? The fact that performance is typically unaffected by task instructions has been taken as support for such structural models of memory. Here, we modified a standard working memory task to incentivize participants to remember more items. Participants were asked to remember a set of colors over a short retention interval. In one condition, participants reported a random item’s color using a color wheel. In the modified task, participants responded to all items and their response was only considered correct if all responses were on the correct half of the color wheel. We looked for a trade-off between quantity and quality—participants storing more items, but less precisely, when required to report them all. This trade-off was observed when tasks were blocked, when task-type was cued after encoding, but not when task-type was cued during the response, suggesting that task differences changed how items were actively encoded and maintained. This strategic control over the contents of working memory challenges models that assume inflexible limits on memory storage. PMID:26950383

  4. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  5. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  6. Multiparticle states in deformed special relativity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossenfelder, S.

    2007-05-15

    We investigate the properties of multiparticle states in deformed special relativity (DSR). Starting from the Lagrangian formalism with an energy-dependent metric, the conserved Noether current can be derived, and it is additive in the usual way. The integrated Noether current had previously been discarded as a conserved quantity, because it was correctly realized that it no longer obeys the DSR transformations. We identify the reason for this mismatch in the fact that DSR depends only on the extensive quantity of total four-momentum, instead of the energy-momentum densities as would be appropriate for a field theory. We argue that the reason for the failure of DSR to reproduce the standard transformation behavior in the well-established limits is its missing sensitivity to the volume inside which energy is accumulated. We show that the soccer-ball problem is absent if one formulates DSR for the field densities instead. As a consequence, estimates for predicted effects have to be corrected by many orders of magnitude. Further, we derive that the modified quantum field theory implies a locality bound.

  7. Correction factors for the ISO rod phantom, a cylinder phantom, and the ICRU sphere for reference beta radiation fields of the BSS 2

    NASA Astrophysics Data System (ADS)

    Behrens, R.

    2015-03-01

    The International Organization for Standardization (ISO) requires in its standard ISO 6980 that beta reference radiation fields for radiation protection be calibrated in terms of absorbed dose to tissue at a depth of 0.07 mm in a slab phantom (30 cm × 30 cm × 15 cm). However, many beta dosemeters are ring dosemeters and are, therefore, irradiated on a rod phantom (1.9 cm in diameter and 30 cm long), or they are eye dosemeters possibly irradiated on a cylinder phantom (20 cm in diameter and 20 cm high), or area dosemeters irradiated free in air with the conventional quantity value (true value) being defined in a sphere (30 cm in diameter, made of ICRU tissue (International Commission on Radiation Units and Measurements)). Therefore, the correction factors for the conventional quantity value in the rod, the cylinder, and the sphere instead of the slab (all made of ICRU tissue) were calculated for the radiation fields of 147Pm, 85Kr, 90Sr/90Y, and 106Ru/106Rh sources of the beta secondary standard BSS 2 developed at PTB. All correction factors were calculated for 0° up to 75° (in steps of 15°) radiation incidence. The results are ready for implementation in ISO 6980-3 and have recently been (partly) implemented in the software of the BSS 2.

  8. Promoting a Sustainable Academic–Correctional Health Partnership: Lessons for Systemic Action Research

    PubMed Central

    Barta, William D.; Shelton, Deborah; Cepelak, Cheryl; Gallagher, Colleen

    2015-01-01

    In the United States, the phenomenon of mass incarceration has created a public health crisis. One strategy for addressing this crisis involves developing a correctional agency – academic institution partnership tasked with augmenting the quality and quantity of evidence-based healthcare delivered in state prisons and attracting a greater number of health professionals to the field of correctional health research. Using a Connecticut correctional agency – academic institution partnership as a case example, the present paper examines some of the key challenges encountered over the course of a 3-year capacity-building initiative. Particular attention is given to agency and institution characteristics both at the structural level and in terms of divergent stakeholder perspectives. The authors find that future partnership development work in this area will likely benefit from close attention to predictable sources of temporal variation in agency capability to advance project-related aims. PMID:26997863

  9. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the measured reactivity from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode slightly differs from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, which are corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues, and it focuses on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel enrichments in the fast zone: high (90%) and medium (36%), medium (36%), or low (21%) enriched uranium fuel.

  10. Radiometric performance of the Voyager cameras

    NASA Technical Reports Server (NTRS)

    Danielson, G. E.; Kupferman, P. N.; Johnson, T. V.; Soderblom, L. A.

    1981-01-01

    The Voyager Imaging Experiment provided high-quality data on Jupiter and the Galilean satellites from the two flyby trajectories in March and July of 1979. Moderately accurate radiometric measurements have been made using these data. This paper evaluates the radiometric results and describes the inflight and ground geometric and radiometric correction factors. The radiometric quantities of intensity I and geometric albedo I/F are derived, and scaling factors for each of the filters are tabulated for correcting the 'calibrated' data from the Image Processing Laboratory at JPL. In addition, the key characteristics of both Voyager 1 and Voyager 2 cameras are tabulated.

  11. Use of Brevibacillus choshinensis for the production of biologically active brain-derived neurotrophic factor (BDNF).

    PubMed

    Angart, Phillip A; Carlson, Rebecca J; Thorwall, Sarah; Patrick Walton, S

    2017-07-01

    Brain-derived neurotrophic factor (BDNF) is a member of the neurotrophin family critical for neuronal cell survival and differentiation, with therapeutic potential for the treatment of neurological disorders and spinal cord injuries. The production of recombinant, bioactive BDNF is not practical in most traditional microbial expression systems because of the inability of the host to correctly form the characteristic cystine-knot fold of BDNF. Here, we investigated Brevibacillus choshinensis as a suitable expression host for bioactive BDNF expression, evaluating the effects of medium type (2SY and TM), temperature (25 and 30 °C), and culture time (48-120 h). Maximal BDNF bioactivity (per unit mass) was observed in cultures grown in 2SY medium at extended times (96 h at 30 °C or >72 h at 25 °C), with resulting bioactivity comparable to that of a commercially available BDNF. For cultures grown in 2SY medium at 25 °C for 72 h, the condition that led to the greatest quantity of biologically active protein in the shortest culture time, we recovered 264 μg/L of BDNF. As with other microbial expression systems, BDNF aggregates did form in all culture conditions, indicating that while we were able to recover biologically active BDNF, further optimization of the expression system could yield still greater quantities of bioactive protein. This study provides confirmation that B. choshinensis is capable of producing biologically active BDNF and that further optimization of culture conditions could prove valuable in increasing BDNF yields.

  12. Evaluation of trace analyte identification in complex matrices by low-resolution gas chromatography--Mass spectrometry through signal simulation.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-04-01

    The identification of trace levels of compounds in complex matrices by conventional low-resolution gas chromatography hyphenated with mass spectrometry is based on the comparison of retention times and abundance ratios of characteristic mass spectrum fragments of analyte peaks from calibrators with those of sample peaks. Statistically sound criteria for the comparison of these parameters were developed based on the normal distribution of retention times and the simulation of possible non-normal distributions of correlated abundance ratios. The confidence level used to set the statistical maximum and minimum limits of the parameters defines the true positive rate of identifications. The false positive rate of identification was estimated from worst-case signal noise models. The estimated true and false positive identification rates from one retention time and two correlated ratios of three fragment abundances were combined using simple Bayes' statistics to estimate the probability of the compound identification being correct, designated the examination uncertainty. Models of the variation of examination uncertainty with analyte quantity allowed the estimation of the Limit of Examination as the lowest quantity that produces "Extremely strong" evidence of compound presence. User-friendly MS-Excel files are made available to allow the easy application of the developed approach in routine and research laboratories. The developed approach was successfully applied to the identification of chlorpyrifos-methyl and malathion in QuEChERS method extracts of vegetables with high water content, for which the estimated Limits of Examination are 0.14 mg kg⁻¹ and 0.23 mg kg⁻¹, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
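
    As a rough illustration of the Bayes combination step named in this abstract, the sketch below treats the retention-time and abundance-ratio checks as independent, which simplifies the correlated-ratio simulation the author actually uses; the prior and the true/false positive rates are illustrative values, not figures from the paper.

        def identification_probability(prior, tpr_list, fpr_list):
            """Posterior P(analyte present | all checks passed), naive Bayes combination."""
            p_pass_present = 1.0
            p_pass_absent = 1.0
            for tpr, fpr in zip(tpr_list, fpr_list):
                p_pass_present *= tpr  # chance a true positive passes this check
                p_pass_absent *= fpr   # chance a false positive passes this check
            num = prior * p_pass_present
            return num / (num + (1.0 - prior) * p_pass_absent)

        # One retention-time check plus two ion-ratio checks (illustrative rates).
        print(identification_probability(0.5, [0.99, 0.95, 0.95], [0.01, 0.05, 0.05]))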

  13. Measurement of dissolved organic matter fluorescence in aquatic environments: An interlaboratory comparison

    USGS Publications Warehouse

    Murphy, Kathleen R.; Butler, Kenna D.; Spencer, Robert G. M.; Stedmon, Colin A.; Boehme, Jennifer R.; Aiken, George R.

    2010-01-01

    The fluorescent properties of dissolved organic matter (DOM) are often studied in order to infer DOM characteristics in aquatic environments, including source, quantity, composition, and behavior. While a potentially powerful technique, a single widely implemented standard method for correcting and presenting fluorescence measurements is lacking, leading to difficulties when comparing data collected by different research groups. This paper reports on a large-scale interlaboratory comparison in which natural samples and well-characterized fluorophores were analyzed in 20 laboratories in the U.S., Europe, and Australia. Shortcomings were evident in several areas, including data quality assurance, the accuracy of the spectral correction factors used to correct excitation-emission matrices (EEMs), and the treatment of optically dense samples. Data corrected by participants according to individual laboratory procedures were more variable than when corrected under a standard protocol. Wavelength dependency in measurement precision and accuracy was observed within and between instruments, even in corrected data. In an effort to reduce future occurrences of similar problems, algorithms for correcting and calibrating EEMs are described in detail, and MATLAB scripts for implementing the study's protocol are provided. Combined with the recent expansion of spectral fluorescence standards, this approach will serve to increase the intercomparability of DOM fluorescence studies.
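
    A minimal sketch of two of the correction steps discussed above, assuming the instrument's spectral correction factors and the sample's absorbance spectra are already in hand: multiply the raw EEM by the excitation and emission correction vectors, then apply the standard absorbance-based inner-filter correction for optically dense samples. Array shapes and the 1 cm path length are assumptions for illustration; the study's own MATLAB scripts implement the full protocol.

        import numpy as np

        def correct_eem(eem, ex_corr, em_corr, a_ex, a_em, pathlength_cm=1.0):
            # eem: (n_em, n_ex) raw intensities; ex_corr/em_corr: instrument spectral
            # correction factors; a_ex/a_em: absorbance at the ex/em wavelengths.
            spectrally_corrected = eem * em_corr[:, None] * ex_corr[None, :]
            # Absorbance-based inner-filter correction: 10^((A_ex + A_em)/2) per cm.
            ifc = 10.0 ** (pathlength_cm * (a_ex[None, :] + a_em[:, None]) / 2.0)
            return spectrally_corrected * ifc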

  14. Application of Shack-Hartmann wavefront sensing technology to transmissive optic metrology

    NASA Astrophysics Data System (ADS)

    Rammage, Ron R.; Neal, Daniel R.; Copland, Richard J.

    2002-11-01

    Human vision correction optics must be produced in quantity to be economical. At the same time, every human eye is unique and requires a custom corrective solution. For this reason the vision industries need fast, versatile, and accurate methodologies for characterizing optics for production and research. Current methods for measuring these optics generally yield a cubic spline taken from fewer than 10 points across the surface of the lens. As corrective optics have grown in complexity, this has become inadequate. The Shack-Hartmann wavefront sensor is a device that measures the phase and irradiance of light in a single snapshot using geometric properties of light. Advantages of the Shack-Hartmann sensor include small size, ruggedness, accuracy, and vibration insensitivity. This paper discusses a methodology for designing instruments based on Shack-Hartmann sensors. The method is then applied to the development of an instrument for accurate measurement of transmissive optics such as gradient bifocal spectacle lenses, progressive addition bifocal lenses, intraocular devices, contact lenses, and human corneal tissue. In addition, this instrument may be configured to provide hundreds of points across the surface of the lens, giving improved spatial resolution. Methods are explored for extending the dynamic range and accuracy to meet the expanding needs of the ophthalmic and optometric industries. Data are presented demonstrating the accuracy and repeatability of this technique for the target optics.
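
    The core reconstruction idea behind a Shack-Hartmann instrument can be sketched briefly: each lenslet's spot displacement divided by the lenslet focal length gives a local wavefront slope, and a surface is then fitted whose gradient matches those slopes. The quadratic wavefront model below is a deliberately low-order stand-in for the Zernike or zonal reconstructors a real instrument would use.

        import numpy as np

        def wavefront_from_spots(ref_xy, meas_xy, lenslet_xy, focal_length):
            """Least-squares wavefront fit from Shack-Hartmann spot displacements."""
            slopes = (meas_xy - ref_xy) / focal_length     # (n, 2): [dw/dx, dw/dy]
            x, y = lenslet_xy[:, 0], lenslet_xy[:, 1]
            # Model w = a*x^2 + b*y^2 + c*x*y + d*x + e*y; match its gradient to slopes.
            ones, zeros = np.ones_like(x), np.zeros_like(x)
            Ax = np.column_stack([2 * x, zeros, y, ones, zeros])  # rows for dw/dx
            Ay = np.column_stack([zeros, 2 * y, x, zeros, ones])  # rows for dw/dy
            A = np.vstack([Ax, Ay])
            b = np.concatenate([slopes[:, 0], slopes[:, 1]])
            coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
            return coeffs  # (a, b, c, d, e) of the fitted quadratic wavefront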

  15. Perioprosthetic and Implant-Supported Rehabilitation of Complex Cases: Clinical Management and Timing Strategy.

    PubMed

    Landi, Luca; Piccinelli, Stefano; Raia, Roberto; Marinotti, Fabio; Manicone, Paolo Francesco

    2016-01-01

    Treatment of complex perioprosthetic cases is one of the clinical challenges of everyday practice. Only a complete and thorough diagnostic setup may allow the clinician to formulate a realistic prognosis to select the abutments to support prosthetic rehabilitation. Clinical, radiographic, or laboratory parameters used separately are useless to correctly assign a reliable prognosis to single teeth except in the case of a clearly hopeless tooth. Therefore, it is crucial to gather the greatest quantity of data to determine the role that every single element can play in the prosthetic rehabilitation of the case. The following report deals with the management of a multidisciplinary periodontally compromised case in which a treatment strategy and chronology were designed to reach clinical predictability while reducing the duration of the therapy.

  16. Thermodynamics of higher dimensional black holes with higher order thermal fluctuations

    NASA Astrophysics Data System (ADS)

    Pourhassan, B.; Kokabi, K.; Rangyan, S.

    2017-12-01

    In this paper, we consider higher order corrections to the entropy arising from thermal fluctuations and find their effect on the thermodynamics of higher dimensional charged black holes. The leading-order thermal fluctuation is a logarithmic term in the entropy, while the higher order correction is proportional to the inverse of the original entropy. We calculate some thermodynamic quantities and obtain the effect of the logarithmic and higher order entropy corrections on them. The validity of the first law of thermodynamics is investigated, and the Van der Waals equation of state of the dual picture is studied. We find that the five-dimensional black hole behaves as a Van der Waals fluid, but higher dimensional cases do not show such behavior. We find that thermal fluctuations are important for the stability of the black hole and hence affect the unstable/stable black hole phase transition.
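
    In the generic notation such studies use, the corrected entropy referred to above takes the schematic form (α and β are model-dependent coefficients; the specific values and the resulting corrected thermodynamic quantities are in the paper and are not reproduced here):

        S = S_0 + \alpha \ln S_0 + \frac{\beta}{S_0} + \cdots

    with S_0 the uncorrected (leading-order) entropy; the corrected temperature and free energy then follow from the first law, T = \partial M / \partial S and F = M - TS.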

  17. Axiomatic Geometrical Optics, Abraham-Minkowski Controversy, and Photon Properties Derived Classically

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L.Y. Dodin and N.J. Fisch

    2012-06-18

    By restating geometrical optics within the field-theoretical approach, the classical concept of a photon in an arbitrary dispersive medium is introduced, and photon properties are calculated unambiguously. In particular, the canonical and kinetic momenta carried by a photon, as well as the two corresponding energy-momentum tensors of a wave, are derived straightforwardly from first principles of Lagrangian mechanics. The Abraham-Minkowski controversy pertaining to the definitions of these quantities is thereby resolved for linear waves of arbitrary nature, and corrections to the traditional formulas for the photon kinetic quantities are found. An application of axiomatic geometrical optics to electromagnetic waves is also presented as an example.

  18. Seeding big sagebrush successfully on Intermountain rangelands

    Treesearch

    Susan E. Meyer; Thomas W. Warren

    2015-01-01

    Big sagebrush can be seeded successfully on climatically suitable sites in the Great Basin using the proper seeding guidelines. These guidelines include using sufficient quantities of high-quality seed of the correct subspecies and ecotype, seeding in late fall to mid-winter, making sure that the seed is not planted too deeply, and seeding into an environment...

  19. 19 CFR 10.60 - Forms of withdrawals; bond.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... under section 309 or 317, Tariff Act of 1930, as amended, the port director in his discretion may permit... withdrawal as supplies on aircraft under § 309, Tariff Act of 1930, as amended, when the supplies are to be... 309, Tariff Act of 1930, as amended, and that the value and quantity declared for them are correct. (g...

  20. 19 CFR 10.60 - Forms of withdrawals; bond.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... under section 309 or 317, Tariff Act of 1930, as amended, the port director in his discretion may permit... withdrawal as supplies on aircraft under § 309, Tariff Act of 1930, as amended, when the supplies are to be... 309, Tariff Act of 1930, as amended, and that the value and quantity declared for them are correct. (g...

  1. 19 CFR 10.60 - Forms of withdrawals; bond.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... under section 309 or 317, Tariff Act of 1930, as amended, the port director in his discretion may permit... withdrawal as supplies on aircraft under § 309, Tariff Act of 1930, as amended, when the supplies are to be... 309, Tariff Act of 1930, as amended, and that the value and quantity declared for them are correct. (g...

  2. 19 CFR 10.60 - Forms of withdrawals; bond.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... under section 309 or 317, Tariff Act of 1930, as amended, the port director in his discretion may permit... withdrawal as supplies on aircraft under § 309, Tariff Act of 1930, as amended, when the supplies are to be... 309, Tariff Act of 1930, as amended, and that the value and quantity declared for them are correct. (g...

  3. 19 CFR 10.60 - Forms of withdrawals; bond.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... under section 309 or 317, Tariff Act of 1930, as amended, the port director in his discretion may permit... withdrawal as supplies on aircraft under § 309, Tariff Act of 1930, as amended, when the supplies are to be... 309, Tariff Act of 1930, as amended, and that the value and quantity declared for them are correct. (g...

  4. Implications of Additive Manufacturing Deployed at the Tactical Edge

    DTIC Science & Technology

    2015-04-15

    Figure 9 – User Identified AM Candidate Categories; Figure 10 – User Survey AMD...reflect the results of the user survey discussed in chapter 3: demographic data, material shortage data (quantity and category), material sourcing data...results in grammatical inconsistencies. I made minor spelling corrections, and occasionally added clarification within brackets where required to convey

  5. Maui-VIA: A User-Friendly Software for Visual Identification, Alignment, Correction, and Quantification of Gas Chromatography–Mass Spectrometry Data

    PubMed Central

    Kuich, P. Henning J. L.; Hoffmann, Nils; Kempa, Stefan

    2015-01-01

    A current bottleneck in GC–MS metabolomics is the processing of raw machine data into a final datamatrix that contains the quantities of identified metabolites in each sample. While there are many bioinformatics tools available to aid the initial steps of the process, their use requires both significant technical expertise and a subsequent manual validation of identifications and alignments if high data quality is desired. The manual validation is tedious and time consuming, becoming prohibitively so as sample numbers increase. We have, therefore, developed Maui-VIA, a solution based on a visual interface that allows experts and non-experts to simultaneously and quickly process, inspect, and correct large numbers of GC–MS samples. It allows for the visual inspection of identifications and alignments, facilitating a unique and, due to its visualization and keyboard shortcuts, very fast interaction with the data. Therefore, Maui-VIA fills an important niche by (1) providing functionality that optimizes the component of data processing that is currently most labor intensive, to save time, and (2) lowering the threshold of expertise required to process GC–MS data. Maui-VIA projects are initiated with baseline-corrected raw data, peak lists, and a database of metabolite spectra and retention indices used for identification. It provides functionality for retention index calculation, a targeted library search, the visual annotation, alignment, and correction interface, and metabolite quantification, as well as the export of the final datamatrix. The high quality of data produced by Maui-VIA is illustrated by its comparison to data attained manually by an expert using vendor software on a previously published dataset concerning the response of Chlamydomonas reinhardtii to salt stress. In conclusion, Maui-VIA provides the opportunity for fast, confident, and high-quality data processing and validation of large numbers of GC–MS samples by non-experts. PMID:25654076
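
    The retention index calculation that Maui-VIA performs is, in its standard form, a linear interpolation between bracketing n-alkane markers (van den Dool and Kratz for temperature-programmed GC). The sketch below is a generic implementation of that textbook formula, not Maui-VIA's actual code; the marker times are illustrative.

        import bisect

        def retention_index(rt, alkane_rts, alkane_carbons):
            """van den Dool & Kratz retention index for temperature-programmed GC."""
            i = bisect.bisect_right(alkane_rts, rt) - 1
            i = max(0, min(i, len(alkane_rts) - 2))  # clamp to a bracketing marker pair
            t0, t1 = alkane_rts[i], alkane_rts[i + 1]
            c0, c1 = alkane_carbons[i], alkane_carbons[i + 1]
            return 100.0 * (c0 + (c1 - c0) * (rt - t0) / (t1 - t0))

        print(retention_index(7.3, [5.0, 8.0, 11.5], [10, 12, 14]))  # ~1153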

  6. Origin and structures of solar eruptions II: Magnetic modeling

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Cheng, Xin; Ding, MingDe

    2017-07-01

    The topology and dynamics of the three-dimensional magnetic field in the solar atmosphere govern various solar eruptive phenomena and activities, such as flares, coronal mass ejections, and filaments/prominences. We have to observe and model the vector magnetic field to understand the structures and physical mechanisms of these solar activities. Vector magnetic fields on the photosphere are routinely observed via polarized light and inferred with the inversion of Stokes profiles. To analyze these vector magnetic fields, we first need to remove the 180° ambiguity of the transverse components and correct the projection effect. Then, the vector magnetic field can serve as the boundary condition for force-free field modeling after proper preprocessing. The photospheric velocity field can also be derived from a time sequence of vector magnetic fields. The three-dimensional magnetic field can be derived and studied with theoretical force-free field models, numerical nonlinear force-free field models, magnetohydrostatic models, and magnetohydrodynamic models. Magnetic energy can be computed with three-dimensional magnetic field models or a time series of vector magnetic fields. The magnetic topology is analyzed by pinpointing the positions of magnetic null points, bald patches, and quasi-separatrix layers. As a well-conserved physical quantity, magnetic helicity can be computed with various methods, such as the finite volume method, the discrete flux tube method, and the helicity flux integration method. This quantity serves as a promising parameter characterizing the activity level of solar active regions.

  7. Space-Time Earthquake Prediction: The Error Diagrams

    NASA Astrophysics Data System (ADS)

    Molchan, G.

    2010-08-01

    The quality of earthquake prediction is usually characterized by a two-dimensional diagram n versus τ, where n is the rate of failures-to-predict and τ is a characteristic of space-time alarm. Unlike the time prediction case, the quantity τ is not defined uniquely. We start from the case in which τ is a vector with components related to the local alarm times and find a simple structure of the space-time diagram in terms of local time diagrams. This key result is used to analyze the usual 2-d error sets {n, τ_w} in which τ_w is a weighted mean of the τ components and w is the weight vector. We suggest a simple algorithm to find the (n, τ_w) representation of all random guess strategies, the set D, and prove that there exists a unique choice of w for which D degenerates to the diagonal n + τ_w = 1. We also find a confidence zone of D on the (n, τ_w) plane for the case when the local target rates are known only roughly. These facts are important for the correct interpretation of (n, τ_w) diagrams when we discuss the prediction capability of the data or of prediction methods.

  8. Data Quality Control: Challenges, Methods, and Solutions from an Eco-Hydrologic Instrumentation Network

    NASA Astrophysics Data System (ADS)

    Eiriksson, D.; Jones, A. S.; Horsburgh, J. S.; Cox, C.; Dastrup, D.

    2017-12-01

    Over the past few decades, advances in electronic dataloggers and in situ sensor technology have revolutionized our ability to monitor air, soil, and water to address questions in the environmental sciences. The increased spatial and temporal resolution of in situ data is alluring. However, an often overlooked aspect of these advances is the challenge data managers and technicians face in performing quality control on millions of data points collected every year. While there is general agreement that high quantities of data offer little value unless the data are of high quality, it is commonly understood that, despite efforts toward quality assurance, environmental data collection occasionally goes wrong. After identifying erroneous data, data managers and technicians must determine whether to flag, delete, leave unaltered, or retroactively correct suspect data. While individual instrumentation networks often develop their own QA/QC procedures, there is a scarcity of consensus and literature regarding specific solutions and methods for correcting data. This may be because back-correction efforts are time consuming, so suspect data are often simply abandoned. Correction techniques are also rarely reported in the literature, likely because corrections are often performed by technicians rather than by the researchers who write the scientific papers. Details of correction procedures are often glossed over as a minor component of data collection and processing. To help address this disconnect, we present case studies of quality control challenges, solutions, and lessons learned from a large-scale, multi-watershed environmental observatory in Northern Utah that monitors Gradients Along Mountain to Urban Transitions (GAMUT). The GAMUT network consists of over 40 individual climate, water quality, and storm drain monitoring stations that have collected more than 200 million unique data points in four years of operation. In all of our examples, we emphasize that scientists should remain skeptical and seek independent verification of sensor data, even for sensors purchased from trusted manufacturers.
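
    Typical first-pass screening in networks like GAMUT flags suspect points automatically before any manual review or back-correction. A minimal sketch of three common checks (range, spike, flatline) follows; every threshold here is an illustrative assumption, since real limits are sensor- and site-specific.

        import numpy as np

        def qc_flags(values, lo, hi, spike=5.0, flat_n=10):
            """Bitmask flags: 1 = out of range, 2 = spike, 4 = flatlined run."""
            v = np.asarray(values, dtype=float)
            flags = np.zeros(v.size, dtype=int)
            flags[(v < lo) | (v > hi)] |= 1                 # range check
            dv = np.abs(np.diff(v, prepend=v[0]))
            flags[dv > spike] |= 2                          # spike (step-change) check
            for i in range(v.size - flat_n + 1):
                if np.ptp(v[i:i + flat_n]) == 0.0:          # identical values: stuck sensor
                    flags[i:i + flat_n] |= 4
            return flags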

  9. Standardizing Type Ia supernovae optical brightness using near-infrared rebrightening time

    NASA Astrophysics Data System (ADS)

    Shariff, H.; Dhawan, S.; Jiao, X.; Leibundgut, B.; Trotta, R.; van Dyk, D. A.

    2016-12-01

    Accurate standardization of Type Ia supernovae (SNIa) is instrumental to the usage of SNIa as distance indicators. We analyse a homogeneous sample of 22 low-z SNIa, observed by the Carnegie Supernova Project in the optical and near-infrared (NIR). We study the time of the second peak in the J band, t2, as an alternative standardization parameter of SNIa peak optical brightness, as measured by the standard SALT2 parameter mB. We use BAHAMAS, a Bayesian hierarchical model for SNIa cosmology, to estimate the residual scatter in the Hubble diagram. We find that in the absence of a colour correction, t2 is a better standardization parameter compared to stretch: t2 has a 1σ posterior interval for the Hubble residual scatter of σΔμ = {0.250, 0.257} mag, compared to σΔμ = {0.280, 0.287} mag when stretch (x1) alone is used. We demonstrate that when employed together with a colour correction, t2 and stretch lead to similar residual scatter. Using colour, stretch and t2 jointly as standardization parameters does not result in any further reduction in scatter, suggesting that t2 carries redundant information with respect to stretch and colour. With a much larger SNIa NIR sample at higher redshift in the future, t2 could be a useful quantity to perform robustness checks of the standardization procedure.
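
    The standardization being compared above is, at its simplest, a linear (Tripp-style) correction of the peak magnitude by stretch, colour, and here t2. The least-squares sketch below is a simplified stand-in for the Bayesian hierarchical fit (BAHAMAS) actually used, and it ignores measurement errors and selection effects.

        import numpy as np

        def hubble_residual_scatter(mB, x1, c, t2, mu_model):
            """Fit mB - mu_model = M + alpha*x1 + beta*c + gamma*t2; return scatter."""
            X = np.column_stack([np.ones_like(mB), x1, c, t2])
            beta_hat, *_ = np.linalg.lstsq(X, mB - mu_model, rcond=None)
            resid = mB - mu_model - X @ beta_hat
            return beta_hat, resid.std(ddof=X.shape[1])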

  10. Scaling behavior of an airplane-boarding model.

    PubMed

    Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard

    2013-04-01

    An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N = 2^16 = 65536, we have analyzed numerically the scaling behavior of the mean boarding time and other related quantities. In analogy with critical phenomena, we have used appropriate scaling Ansätze, which include the leading term as some power of N (e.g., ∝ N^α for the mean boarding time), as well as power-law corrections to scaling. Our results clearly show that α = 1/2 holds with a very high numerical accuracy (α = 0.5001 ± 0.0001). This value deviates essentially from α ≈ 0.69, obtained earlier by Frette and Hemmer from data within the range 2 ≤ N ≤ 16. Our results confirm the convergence of the effective exponent α_eff(N) to 1/2 at large N, as observed by Bernstein. Our analysis explains this effect. Namely, the effective exponent α_eff(N) varies from values of about 0.7 for small system sizes to the true asymptotic value 1/2 at N → ∞, almost linearly in N^(-1/3) for large N. This means that the variation is caused by corrections to scaling, the leading correction-to-scaling exponent being θ ≈ 1/3. We have also estimated other exponents: ν = 1/2 for the mean number of passengers taking seats simultaneously in one time step, β = 1 for the second moment of t_b, and γ ≈ 1/3 for its variance.
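
    The correction-to-scaling analysis reads off an effective exponent from successive system sizes; on data following the reported asymptotics it drifts toward 1/2 roughly linearly in N^(-1/3). The sketch below reproduces that behavior on synthetic boarding times (the amplitudes 1.7 and 0.8 are made up for illustration).

        import numpy as np

        # Synthetic boarding times with the reported leading term and correction:
        N = 2 ** np.arange(4, 17)
        t = 1.7 * np.sqrt(N) * (1.0 - 0.8 * N ** (-1.0 / 3.0))

        # Effective exponent between successive sizes: alpha_eff = d ln t / d ln N.
        alpha_eff = np.diff(np.log(t)) / np.diff(np.log(N))
        for n, a in zip(N[1:], alpha_eff):
            print(f"N={n:6d}  alpha_eff={a:.4f}")  # >0.6 at small N, toward 1/2 at large N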

  11. Asymptotically free theory with scale invariant thermodynamics

    NASA Astrophysics Data System (ADS)

    Ferrari, Gabriel N.; Kneur, Jean-Loïc; Pinto, Marcus Benghi; Ramos, Rudnei O.

    2017-12-01

    A recently developed variational resummation technique, incorporating renormalization group properties consistently, has been shown to solve the scale dependence problem that plagues the evaluation of thermodynamical quantities, e.g., within the framework of approximations such as the hard-thermal-loop resummed perturbation theory. This method is used in the present work to evaluate thermodynamical quantities within the two-dimensional nonlinear sigma model, which, apart from providing a technically simpler testing ground, shares some common features with Yang-Mills theories, like asymptotic freedom, the trace anomaly, and the nonperturbative generation of a mass gap. The present application confirms that nonperturbative results can be readily generated solely by considering the lowest-order (quasiparticle) contribution to the thermodynamic effective potential, when this quantity is required to be renormalization group invariant. We also show that when the next-to-leading correction from the method is accounted for, the results indicate convergence, apart from optimally preserving, within the approximations considered here, the sought-after scale invariance.

  12. Apparatus and method for quantitative assay of samples of transuranic waste contained in barrels in the presence of matrix material

    DOEpatents

    Caldwell, J.T.; Herrera, G.C.; Hastings, R.D.; Shunk, E.R.; Kunz, W.E.

    1987-08-28

    Apparatus and method for performing corrections for matrix material effects on the neutron measurements generated from analysis of transuranic waste drums using the differential-dieaway technique. By measuring the absorption index and the moderator index for a particular drum, correction factors can be determined for the effects of matrix materials on the "observed" quantity of fissile and fertile material present therein in order to determine the actual assays thereof. A barrel flux monitor is introduced into the measurement chamber to accomplish these measurements as a new contribution to the differential-dieaway technology. 9 figs.

  13. Studies of atmospheric refraction effects on laser data

    NASA Technical Reports Server (NTRS)

    Dunn, P. J.; Pearce, W. A.; Johnson, T. S.

    1982-01-01

    The refraction effect was considered from three perspectives. An analysis of the axioms on which the accepted correction algorithms were based was the first priority. The integrity of the meteorological measurements on which the correction model is based was also considered, and a large quantity of laser observations was processed in an effort to detect any serious anomalies in them. The effect of refraction errors on geodetic parameters estimated from laser data using the most recent analysis procedures was the focus of the third element of study. The results concentrate on refraction errors, which were found to be critical in the eventual use of the data for measurements of crustal dynamics.

  14. Identifying Structure-Property Relationships Through DREAM.3D Representative Volume Elements and DAMASK Crystal Plasticity Simulations: An Integrated Computational Materials Engineering Approach

    NASA Astrophysics Data System (ADS)

    Diehl, Martin; Groeber, Michael; Haase, Christian; Molodov, Dmitri A.; Roters, Franz; Raabe, Dierk

    2017-05-01

    Predicting, understanding, and controlling the mechanical behavior is the most important task when designing structural materials. Modern alloy systems, in which multiple deformation mechanisms, phases, and defects are introduced to overcome the inverse strength-ductility relationship, give rise to multiple possibilities for modifying the deformation behavior, rendering traditional, exclusively experimentally based alloy development workflows inappropriate. For fast and efficient alloy design, it is therefore desirable to predict the mechanical performance of candidate alloys by simulation studies to replace time- and resource-consuming mechanical tests. Simulation tools suitable for this task need to correctly predict the mechanical behavior in dependence of alloy composition, microstructure, texture, phase fractions, and processing history. Here, an integrated computational materials engineering approach based on the open source software packages DREAM.3D and DAMASK (Düsseldorf Advanced Materials Simulation Kit) that enables such virtual material development is presented. More specifically, our approach consists of the following three steps: (1) acquire statistical quantities that describe a microstructure, (2) build a representative volume element based on these quantities employing DREAM.3D, and (3) evaluate the representative volume element using a predictive crystal plasticity material model provided by DAMASK. As an example, these steps are conducted here for a high-manganese steel.

  15. Impact of clinical pharmacist on cost of drug therapy in the ICU

    PubMed Central

    Aljbouri, Tareq M.; Alkhawaldeh, Mohammed S.; Abu-Rumman, Ala’a eddeen K.; Hasan, Thamer A.; Khattar, Hakeem M.; Abu-Oliem, Atallah S.

    2013-01-01

    Objective To determine whether the presence of Clinical Pharmacist affects the cost of drug therapy for patients admitted to the Intensive Care Unit (ICU) at Al-Hussein hospital at Royal Medical Services in Amman, Jordan. Method This study compares the consumed quantities of drugs over two periods of time. Each period was ten months long. In the second period there was a Clinical Pharmacist. The decrease in consumption rate of drugs is considered to be an indicator of the success of Clinical Pharmacist in the ICU, as any decrease in consumption rate reflects the correct application of Clinical Pharmacy practices. The cost of this decrease in consumption rate represents the total reduction of drug therapy cost. Results The total reduction of drug therapy cost after applying Clinical Pharmacy practices in the ICU over a period of ten months was 149946.80 JD (211574.90 USD), which represents an average saving of 35.8% when compared to the first period in this study. Conclusion The results of this study showed a significant reduction in the consumed quantities of drugs and therefore a reduction in cost of drug therapy. Such findings highlight the importance of the presence of Clinical Pharmacist in all Jordanian hospitals wards and units. PMID:24227956

  16. Production Planning and Simulation for Reverse Supply Chain

    NASA Astrophysics Data System (ADS)

    Murayama, Takeshi; Yoda, Mitsunobu; Eguchi, Toru; Oba, Fuminori

    This paper describes a production planning method for a reverse supply chain, in which a disassembly company takes reusable components from returned used products and supplies the reusable components for a product manufacturer. This method addresses the issue that the timings and quantities of returned products and reusable components obtained from them are unknown. This method first predicts the quantities of returned products and reusable components at each time period by using reliability models. Using the prediction result, the method performs production planning based on Material Requirements Planning (MRP). This method enables us to plan at each time period: the quantity of the products to be disassembled; the quantity of the reusable components to be used; and the quantity of the new components to be produced. The flow of the components and products through a forward and reverse supply chain is simulated to show the effectiveness of the method.
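
    The prediction-plus-netting logic described above can be sketched compactly: a reliability model turns past sales into expected returns per period, and an MRP-style netting step converts demand minus predicted reusable components into new-component orders. The Weibull life model and all parameter values below are assumptions for illustration, not the paper's reliability models.

        import math

        def predicted_returns(sales_by_period, t, shape=1.5, scale=8.0):
            """Expected units returned in period t under a Weibull lifetime model."""
            def F(a):  # CDF of product lifetime at age a (in periods)
                return 1.0 - math.exp(-((a / scale) ** shape)) if a > 0 else 0.0
            # Units sold in period p return in t if their life ends in (t-p-1, t-p].
            return sum(q * (F(t - p) - F(t - p - 1))
                       for p, q in enumerate(sales_by_period) if p < t)

        def mrp_new_components(demand, reusable, on_hand=0):
            """MRP netting: new components to produce after reuse and stock."""
            return max(0, demand - reusable - on_hand)

        sales = [100, 120, 90, 110]                # illustrative sales history
        r = predicted_returns(sales, t=6)          # expected returns in period 6
        print(r, mrp_new_components(demand=80, reusable=int(r)))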

  17. Computing general-relativistic effects from Newtonian N-body simulations: Frame dragging in the post-Friedmann approach

    NASA Astrophysics Data System (ADS)

    Bruni, Marco; Thomas, Daniel B.; Wands, David

    2014-02-01

    We present the first calculation of an intrinsically relativistic quantity, the leading-order correction to Newtonian theory, in fully nonlinear cosmological large-scale structure studies. Traditionally, nonlinear structure formation in standard ΛCDM cosmology is studied using N-body simulations, based on Newtonian gravitational dynamics on an expanding background. When one derives the Newtonian regime in a way that is a consistent approximation to the Einstein equations, the first relativistic correction to the usual Newtonian scalar potential is a gravitomagnetic vector potential, giving rise to frame dragging. At leading order, this vector potential does not affect the matter dynamics, so it can be computed from Newtonian N-body simulations. We explain how we compute the vector potential from simulations in ΛCDM and examine its magnitude relative to the scalar potential, finding that the power spectrum of the vector potential is of order 10^-5 times the scalar power spectrum over the range of nonlinear scales we consider. On these scales the vector potential is up to two orders of magnitude larger than the value predicted by second-order perturbation theory extrapolated to the same scales. We also discuss some possible observable effects and future developments.

  18. Identifying the groundwater basin boundaries, using environmental isotopes: a case study

    NASA Astrophysics Data System (ADS)

    Demiroğlu, Muhterem

    2017-06-01

    Groundwater, which is renewable under current climatic conditions, unlike other natural resources is in fact a finite resource in terms of quality and of fossil groundwater. Researchers have long emphasized the necessity of exploiting, operating, conserving, and managing groundwater in an efficient and sustainable manner with an integrated water management approach. The management of groundwater needs reliable information about changes in groundwater quantity and quality. Environmental isotopes are the most important tools to provide this support. No matter which method we use to calculate the groundwater budget and flow equations, we need to determine the boundary conditions or the physical boundaries of the domain. The groundwater divide line or basin boundaries that separate two adjacent basin recharge areas from each other must be drawn correctly to succeed in defining complex groundwater basin boundary conditions. Environmental isotope data, like other methods, provide support for determining the recharge areas of aquifers, especially karst aquifers, residence times, and interconnections between aquifer systems. This study demonstrates the use of environmental isotope data to interpret and correct groundwater basin boundaries, using the Yeniçıkrı basin within the main Sakarya basin as an example.

  19. Linkages between observed, modeled Saharan dust loading and meningitis in Senegal during 2012 and 2013

    NASA Astrophysics Data System (ADS)

    Diokhane, Aminata Mbow; Jenkins, Gregory S.; Manga, Noel; Drame, Mamadou S.; Mbodji, Boubacar

    2016-04-01

    The Sahara desert transports large quantities of dust over the Sahelian region during the Northern Hemisphere winter and spring seasons (December-April). In episodic events, high dust concentrations are found at the surface, negatively impacting respiratory health. Bacterial meningitis in particular is known to affect populations that live in the Sahelian zone otherwise known as the meningitis belt. During the winter and spring of 2012, suspected meningitis cases (SMCs) were three times higher than in 2013. We show higher surface particulate matter concentrations at Dakar, Senegal and elevated atmospheric dust loading over Senegal for the period of 1 January-31 May during 2012 relative to 2013. We analyze simulated particulate matter over Senegal from the Weather Research and Forecasting (WRF) model during 2012 and 2013. The results show higher simulated dust concentrations during the winter season of 2012 for Senegal. The WRF model correctly captures the large dust events from 1 January-31 March but shows less skill during April and May for simulated dust concentrations. The results also show that the boundary conditions are the key feature for correctly simulating large dust events, and initial conditions are less important.

  20. Non-contact image processing for gin trash sensors in stripper harvested cotton with burr and fine trash correction

    USDA-ARS?s Scientific Manuscript database

    This study was initiated to provide the basis for obtaining online information as to the levels of the various types of gin trash. The objective is to provide the ginner with knowledge of the quantity of the various trash components in the raw uncleaned seed cotton. This information is currently not...

  1. Drying and control of moisture content and dimensional changes

    Treesearch

    William T. Simpson

    1999-01-01

    In the living tree, wood contains large quantities of water. As green wood dries, most of the water is removed. The moisture remaining in the wood tends to come to equilibrium with the relative humidity of the surrounding air. Correct drying, handling, and storage of wood will minimize moisture content changes that might occur after drying when the wood is in service...

  2. Advance information on the supply of pulpwood in survey unit #1

    Treesearch

    L.F. Eldredge; Southern Forest Survey Staff

    1935-01-01

    This report presents information concerning the quantity of pulpwood in survey unit #1, Florida. The geographic location of this unit, which includes twenty-one counties in the northeastern part of the state, is shown in Figure 1, along with the boundary of the Ocala National Forest. The data given in this release are preliminary and are subject to correction and modification when...

  3. KC-46A Tanker Modernization (KC-46A)

    DTIC Science & Technology

    2013-12-01

    LRIP - Low Rate Initial Production; $M - Millions of Dollars; MILCON - Military Construction; N/A - Not Applicable; O&S - Operating and Support; Oth - Other...Related; Qty - Quantity; RDT&E - Research, Development, Test, and Evaluation; SAR - Selected Acquisition Report; Sch - Schedule; Spt - Support; TBD - To Be...attributes, data correctness, data availability, and consistent data processing specified in the applicable joint and system integrated

  4. Energy Referencing in LANL HE-EOS Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leiding, Jeffery Allen; Coe, Joshua Damon

    2017-10-19

    Here we briefly describe the choice of energy referencing in LANL's HE-EOS codes, HEOS and MAGPIE. Understanding this is essential for comparing energies produced by different EOS codes, as well as for the correct calculation of shock Hugoniots of HEs and other materials. In all equations after (3) throughout this report, all energies, enthalpies, and volumes are assumed to be molar quantities.

  5. Strategic trade-offs between quantity and quality in working memory.

    PubMed

    Fougnie, Daryl; Cormiea, Sarah M; Kanabar, Anish; Alvarez, George A

    2016-08-01

    Is working memory capacity determined by an immutable limit, for example, 4 memory storage slots? The fact that performance is typically unaffected by task instructions has been taken as support for such structural models of memory. Here, we modified a standard working memory task to incentivize participants to remember more items. Participants were asked to remember a set of colors over a short retention interval. In one condition, participants reported a random item's color using a color wheel. In the modified task, participants responded to all items, and their response was only considered correct if all responses were on the correct half of the color wheel. We looked for a trade-off between quantity and quality: participants storing more items, but less precisely, when required to report them all. This trade-off was observed when tasks were blocked and when task type was cued after encoding, but not when task type was cued during the response, suggesting that task differences changed how items were actively encoded and maintained. This strategic control over the contents of working memory challenges models that assume inflexible limits on memory storage. (PsycINFO Database Record (c) 2016 APA, all rights reserved.)

  6. Land Surface Albedo from MERIS Reflectances Using MODIS Directional Factors

    NASA Technical Reports Server (NTRS)

    Schaaf, Crystal L. B.; Gao, Feng; Strahler, Alan H.

    2004-01-01

    MERIS Level 2 surface reflectance products are now available to the scientific community. This paper demonstrates the production of MERIS-derived surface albedo and Nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectances by coupling the MERIS data with MODIS BRDF products. Initial efforts rely on the specification of surface anisotropy as provided by the global MODIS BRDF product for a first guess of the shape of the BRDF, and then make use of all of the coincidently available, partially atmospherically corrected, cloud-cleared MERIS observations to generate MERIS-derived BRDF and surface albedo quantities for each location. Comparisons between MODIS (aerosol-corrected) and MERIS (not-yet aerosol-corrected) surface values from April and May 2003 are also presented for case studies in Spain and California, as well as preliminary comparisons with field data from the Devil's Rock Surfrad/BSRN site.

  7. Deriving Albedo from Coupled MERIS and MODIS Surface Products

    NASA Technical Reports Server (NTRS)

    Gao, Feng; Schaaf, Crystal; Jin, Yu-Fang; Lucht, Wolfgang; Strahler, Alan

    2004-01-01

    MERIS Level 2 surface reflectance products are now available to the scientific community. This paper demonstrates the production of MERIS-derived surface albedo and Nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectances by coupling the MERIS data with MODIS BRDF products. Initial efforts rely on the specification of surface anisotropy as provided by the global MODIS BRDF product for a first guess of the shape of the BRDF, and then make use of all of the coincidently available, partially atmospherically corrected, cloud-cleared MERIS observations to generate MERIS-derived BRDF and surface albedo quantities for each location. Comparisons between MODIS (aerosol-corrected) and MERIS (not-yet aerosol-corrected) surface values from April and May 2003 are also presented for case studies in Spain and California, as well as preliminary comparisons with field data from the Devil's Rock Surfrad/BSRN site.

  8. Corrections to Newton’s law of gravitation - application to hybrid Bloch brane

    NASA Astrophysics Data System (ADS)

    Almeida, C. A. S.; Veras, D. F. S.; Dantas, D. M.

    2018-02-01

    We present in this work calculations of corrections to Newton's law of gravitation due to Kaluza-Klein gravitons in five-dimensional warped thick braneworld scenarios. We consider here a recently proposed model, namely, the hybrid Bloch brane. This model couples two scalar fields to gravity and is engendered from a domain-wall-like defect. Two other models, the so-called asymmetric hybrid brane and the compact brane, are also considered. Such models are deformations of the ϕ⁴ and sine-Gordon topological defects, respectively. Therefore, we consider the branes engendered by such defects and compute the corrections in their cases as well. In order to attain the mass spectrum and its corresponding eigenfunctions, which are the essential quantities for computing the correction to the Newtonian potential, we develop a suitable numerical technique. The calculation of slight deviations in the gravitational potential may be used as a selection tool for braneworld scenarios matching future experimental measurements in high energy collisions.
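
    Corrections of this type are conventionally summarized as Yukawa-like terms added to the Newtonian potential, with masses and couplings read off from the numerically computed KK spectrum. The sketch below evaluates that generic form only; the spectrum values would come from the paper's numerical technique and are left as inputs here.

        import numpy as np

        def corrected_potential(r, G, M, m, kk_masses, kk_couplings):
            """V(r) = -G*M*m/r * (1 + sum_n c_n * exp(-m_n * r)), a generic
            Yukawa-corrected Newtonian potential (natural units assumed)."""
            yukawa = sum(c * np.exp(-mn * r) for c, mn in zip(kk_couplings, kk_masses))
            return -G * M * m / r * (1.0 + yukawa)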

  9. Importance of interactions between food quality, quantity, and gut transit time on consumer feeding, growth, and trophic dynamics.

    PubMed

    Mitra, Aditee; Flynn, Kevin J

    2007-05-01

    Ingestion kinetics of animals are controlled by both external food availability and feedback from the quantity of material already within the gut. The latter varies with gut transit time (GTT) and digestion of the food. Ingestion, assimilation efficiency, and thus, growth dynamics are not related in a simple fashion. For the first time, the important linkage between these processes and GTT is demonstrated; this is achieved using a biomass-based, mechanistic multinutrient model fitted to experimental data for zooplankton growth dynamics when presented with food items of varying quality (stoichiometric composition) or quantity. The results show that trophic transfer dynamics will vary greatly between the extremes of feeding on low-quantity/high-quality versus high-quantity/low-quality food; these conditions are likely to occur in nature. Descriptions of consumer behavior that assume a constant relationship between the kinetics of grazing and growth irrespective of food quality and/or quantity, with little or no recognition of the combined importance of these factors on consumer behavior, may seriously misrepresent consumer activity in dynamic situations.

  10. Effect of quantity and composition of waste on the prediction of annual methane potential from landfills.

    PubMed

    Cho, Han Sang; Moon, Hee Sun; Kim, Jae Young

    2012-04-01

    A study was conducted to investigate the effect of waste composition change on methane production in landfills. An empirical equation for the methane potential of mixed waste is derived based on the methane potential values of individual waste components and the compositional ratio of the waste components. A correction factor was introduced in the equation and was determined from BMP and lysimeter tests. The equation and LandGEM were applied to a full-size landfill, and the annual methane potential was estimated. Results showed that changes in the quantity of waste affected the annual methane potential from the landfill more than changes in waste composition. Copyright © 2012 Elsevier Ltd. All rights reserved.
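
    The empirical equation described above amounts to a correction-factor-weighted sum of component methane potentials, fed into a first-order-decay generation model. The sketch below uses a simplified annual decay sum in the spirit of LandGEM; the decay constant and all numerical values are illustrative, not the paper's.

        import math

        def mixed_waste_L0(fractions, component_L0, correction=1.0):
            """Methane potential of mixed waste: correction factor times the
            composition-weighted sum of component potentials."""
            return correction * sum(x * l0 for x, l0 in zip(fractions, component_L0))

        def annual_methane(masses_by_year, L0, k=0.05):
            """Simplified first-order-decay generation (in the spirit of LandGEM)."""
            return [sum(k * L0 * m * math.exp(-k * (t - ty))
                        for ty, m in enumerate(masses_by_year) if ty <= t)
                    for t in range(len(masses_by_year))]

        L0 = mixed_waste_L0([0.4, 0.35, 0.25], [170.0, 100.0, 20.0], correction=0.8)
        print(annual_methane([50000, 52000, 48000], L0))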

  11. Efficient sensitivity analysis method for chaotic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  12. Simultaneous Gram and viability staining on activated sludge exposed to erythromycin: 3D CLSM time-lapse imaging of bacterial disintegration.

    PubMed

    Louvet, Jean-Noël; Attik, Ghania; Dumas, Dominique; Potier, Olivier; Pons, Marie-Noëlle

    2011-11-01

    The effect of erythromycin on activated sludge bacteria according to their Gram type was investigated with 3-dimensional Confocal Laser Scanning Microscopy (CLSM) time-lapse imaging. The fluorescent stains SYTOX Green and the Texas Red-X conjugate of wheat germ agglutinin stained dying bacteria and Gram(+) bacteria, respectively. Time-lapse imaging allowed an understanding of the staining mechanism and the measurement of the death rate. In the presence of erythromycin (10 mg/L), Gram(+) bacteria had a higher mortality rate than Gram(-) bacteria. This result suggests that antibiotics in wastewater could change the activated sludge bacterial composition according to Gram type by selecting the bacteria least sensitive to the antibiotics. However, bacterial death was followed by bacterial disintegration, leading to a decrease in fluorescence. The results suggest that viability indicators based on membrane integrity should be used with a correct sampling method, which can give the initial quantity of living bacteria. Copyright © 2011 Elsevier GmbH. All rights reserved.

  13. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
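
    The key idea named in this abstract, recasting the time-averaged quantity as an extra differential equation, can be shown on a deliberately non-chaotic toy problem with plain direct differentiation (the shadowing machinery needed for chaotic systems is omitted). Here x follows a logistic ODE with parameter s, v = dx/ds is its tangent, and J, Js accumulate the integral of x and of its sensitivity.

        import numpy as np
        from scipy.integrate import solve_ivp

        def augmented(t, y, s):
            x, v, J, Js = y                       # state, tangent dx/ds, integrals
            dx = s * x * (1.0 - x)                # logistic dynamics
            dv = s * (1.0 - 2.0 * x) * v + x * (1.0 - x)  # direct differentiation
            return [dx, dv, x, v]                 # dJ/dt = x, dJs/dt = v

        s, T = 1.0, 50.0
        sol = solve_ivp(augmented, (0.0, T), [0.1, 0.0, 0.0, 0.0], args=(s,), rtol=1e-9)
        x, v, J, Js = sol.y[:, -1]
        print("time-averaged x:", J / T, "  d<x>/ds:", Js / T)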

  14. Linking age, survival, and transit time distributions

    NASA Astrophysics Data System (ADS)

    Calabrese, Salvatore; Porporato, Amilcare

    2015-10-01

    Although the concepts of age, survival, and transit time have been widely used in many fields, including population dynamics, chemical engineering, and hydrology, a comprehensive mathematical framework is still missing. Here we discuss several relationships among these quantities by starting from the evolution equation for the joint distribution of age and survival, from which the equations for age and survival time readily follow. It also becomes apparent how the statistical dependence between age and survival is directly related to either the age dependence of the loss function or the survival-time dependence of the input function. The solution of the joint distribution equation also allows us to obtain the relationships between the age at exit (or death) and the survival time at input (or birth), as well as to stress the symmetries of the various distributions under time reversal. The transit time is then obtained as a sum of the age and survival time, and its properties are discussed along with the general relationships between their mean values. The special case of steady state is analyzed in detail. Some examples, inspired by hydrologic applications, are presented to illustrate the theory with specific results. This article was corrected on 11 Nov 2015. See the end of the full text for details.

  15. Interpretation of light scattering and turbidity measurements in aggregated systems: effect of intra-cluster multiple-light scattering.

    PubMed

    Soos, Miroslav; Lattuada, Marco; Sefcik, Jan

    2009-11-12

    In this work we studied the effect of intracluster multiple-light scattering on the scattering properties of a population of fractal aggregates. To do so, experimental data of diffusion-limited aggregation for three polystyrene latexes with similar surface properties but different primary particle diameters (equal to 118, 420, and 810 nm) were obtained by static light scattering and by means of a spectrophotometer. In parallel, a population balance equation (PBE) model, which takes into account the effect of intracluster multiple-light scattering by solving the T-matrix and the mean-field version of the T-matrix, was formulated and validated against the time evolution of the root-mean-square radius of gyration, ⟨Rg⟩, of the zero-angle intensity of scattered light, I(0), and of the turbidity, tau. It was found that the mean-field version of the T-matrix theory is able to correctly predict the time evolution of all measured light scattering quantities for all sizes of primary particles without any adjustable parameter. The structure of the aggregates, characterized by the fractal dimension, d(f), was independent of the primary particle size and equal to 1.7, which is in agreement with values found in the literature. Since the mean-field version of the T-matrix theory used is rather complicated and requires advanced knowledge of the cluster structure (i.e., the particle-particle correlation function), a simplified version of the light scattering model was proposed and tested. It was found that within the range of operating conditions investigated, the simplified version of the light scattering model was able to describe with reasonable accuracy the time evolution of all measured light scattering quantities of the cluster mass distribution (CMD) for all three sizes of primary particles and two values of the laser wavelength.
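
    For aggregates small or tenuous enough that multiple scattering is negligible, the Rayleigh-Debye-Gans (RDG) limit that such models improve upon can be sketched in a few lines. The snippet below is an illustration under RDG assumptions, not the authors' PBE/T-matrix code; the Fisher-Burford form of the structure factor and all parameter values are assumptions.

    ```python
    import numpy as np

    def rdg_intensity(q, N, a, df=1.7, kf=1.0):
        """RDG intensity of a fractal aggregate of N primary particles
        of radius a and fractal dimension df: I(q) ~ N^2 P(q) S(q),
        with a Fisher-Burford structure factor."""
        rg = a * (N / kf) ** (1.0 / df)   # fractal scaling N = kf (Rg/a)^df
        x = q * a
        # form factor of a homogeneous sphere (Rayleigh-Gans limit)
        p = (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2
        s = (1.0 + 2.0 * (q * rg) ** 2 / (3.0 * df)) ** (-df / 2.0)
        return N**2 * p * s

    q = np.linspace(1e-4, 3e-2, 200)            # scattering vector, 1/nm
    print(rdg_intensity(q, N=100, a=59.0)[:3])  # 118 nm primaries => a = 59 nm
    ```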

  16. Steps towards a consistent Climate Forecast System Reanalysis wave hindcast (1979-2016)

    NASA Astrophysics Data System (ADS)

    Stopa, Justin E.; Ardhuin, Fabrice; Huchet, Marion; Accensi, Mickael

    2017-04-01

    Surface gravity waves are increasingly recognized as playing an important role within the climate system. Wave hindcasts and reanalysis products of long time series (>30 years) have been instrumental in understanding and describing the wave climate for the past several decades and have allowed a better understanding of extreme waves and inter-annual variability. Wave hindcasts have the advantage of covering the oceans in higher space-time resolution than is possible with conventional observations from satellites and buoys. Wave reanalysis systems like ECMWF's ERA-Interim directly include a wave model coupled to the ocean and atmosphere; otherwise, reanalysis wind fields are used to drive a wave model to reproduce the wave field in long time series. The ERA-Interim dataset is consistent in time but cannot adequately resolve extreme waves. On the other hand, the NCEP Climate Forecast System Reanalysis (CFSR) wind field better resolves the extreme wind speeds, but suffers from discontinuous features in time due to the quantity and quality of the remote sensing data incorporated into the product. Therefore, a consistent hindcast that resolves the extreme waves still eludes us, limiting our understanding of the wave climate. In this study, we systematically correct the CFSR wind field to reproduce a wave field that is homogeneous in time. To verify the homogeneity of our hindcast, we compute error metrics on a monthly basis using observations from a merged altimeter wave database that has been calibrated and quality controlled from 1985-2016. Before 1985, only a few wave observations exist, limited to a select number of wave buoys mostly in the Northern Hemisphere. We therefore supplement our wave observations with seismic data, which respond to nonlinear wave interactions created by opposing waves with nearly equal wavenumbers. Within the CFSR wave hindcast, we find both spatial and temporal discontinuities in the error metrics. The Southern Hemisphere often has wind speed biases larger than the Northern Hemisphere, and we propose a simple correction that reduces these features by applying a taper shaped by a half-Hanning window. The discontinuous features in time are corrected by scaling the entire wind field by percentages typically ranging from 1% to 3%. Our analysis is performed on monthly time series, and we expect the monthly statistics to be adequate for climate studies.
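
    The latitude taper and the time-scaling correction described above can be sketched compactly. The example below is an assumed, minimal illustration (the latitude band, taper extent, bias, and scale factors are invented placeholders, not the study's calibrated values): a half-Hanning window ramps the Southern Hemisphere bias correction smoothly to zero toward the equator, and each period's wind field is multiplied by a small scalar.

    ```python
    import numpy as np

    def half_hanning_taper(lat, lat_start=-30.0, lat_end=-70.0):
        """Weight in [0, 1] ramping from 0 at lat_start to 1 at lat_end
        using the rising half of a Hanning window."""
        w = np.zeros_like(lat, dtype=float)
        ramp = (lat <= lat_start) & (lat >= lat_end)
        frac = (lat[ramp] - lat_start) / (lat_end - lat_start)  # 0..1
        w[ramp] = 0.5 * (1.0 - np.cos(np.pi * frac))            # half Hanning
        w[lat < lat_end] = 1.0
        return w

    lat = np.linspace(90, -90, 181)
    wind = np.full((181, 360), 10.0)     # placeholder wind speeds, m/s
    bias = -0.4                          # assumed SH bias, m/s
    wind_corrected = wind + half_hanning_taper(lat)[:, None] * (-bias)

    # Temporal homogenization: scale each period by ~1-3% (placeholder)
    wind_corrected *= 1.02
    ```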

  17. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons), and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm, and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.
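
    The conversion the abstract refers to is multiplicative: the collected charge is turned into air kerma via the air mass of the collecting volume, the mean energy per ion pair, and a product of correction factors. The sketch below is a generic free-air-chamber calculation; the numerical values and the exact set of factors shown are illustrative assumptions, not NIST's published corrections.

    ```python
    # Air kerma from a free-air chamber:
    #   K = (Q / m) * (W/e) / (1 - g) * prod(k_i)
    # Q: collected charge, m: air mass of the collecting volume,
    # W/e: mean energy per unit charge to ionize dry air,
    # g: fraction of electron energy lost to bremsstrahlung,
    # k_i: chamber-specific correction factors.
    W_OVER_E = 33.97       # J/C, dry air
    Q = 1.2e-9             # C, collected charge (illustrative)
    m_air = 2.5e-5         # kg, air mass in collecting volume (illustrative)
    g = 0.0                # negligible at medium x-ray energies

    corrections = {        # illustrative magnitudes, not NIST values
        "diaphragm": 1.0005,
        "scatter": 0.9965,
        "electron_loss": 1.0010,
        "fluorescence": 0.9995,
        "bremsstrahlung": 1.0002,
    }

    k_total = 1.0
    for k in corrections.values():
        k_total *= k

    kerma = (Q / m_air) * W_OVER_E / (1.0 - g) * k_total
    print(f"air kerma = {kerma:.4g} Gy")
    ```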

  18. Adler Function, Bjorken Sum Rule, and the Crewther Relation to Order α_s^4 in a General Gauge Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baikov, P. A.; Chetyrkin, K. G.; Kuehn, J. H.

    2010-04-02

    We compute, for the first time, the order α_s^4 contributions to the Bjorken sum rule for polarized electron-nucleon scattering and to the (nonsinglet) Adler function for the case of a generic color gauge group. We confirm at the same order a (generalized) Crewther relation which provides a strong test of the correctness of our previously obtained results: the QCD Adler function and the five-loop β function in quenched QED. In particular, the appearance of an irrational contribution proportional to ζ_3 in the latter quantity is confirmed. We obtain the commensurate scale equation relating the effective strong coupling constants as inferred from the Bjorken sum rule and from the Adler function at order α_s^4.
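
    For orientation, the generalized Crewther relation the authors test can be written as follows (the standard form from the literature, quoted here from memory rather than from this paper, so treat the normalization conventions as assumptions):

    ```latex
    % Generalized Crewther relation: the product of the nonsinglet Adler
    % function D(a_s) and the Bjorken sum-rule coefficient function
    % C^{Bjp}(a_s) deviates from unity only by a term proportional to the
    % beta function, i.e. only through conformal-symmetry breaking:
    \[
      D^{\mathrm{NS}}(a_s)\, C^{\mathrm{Bjp}}(a_s)
        \;=\; 1 \;+\; \frac{\beta(a_s)}{a_s}\, K(a_s),
      \qquad a_s = \frac{\alpha_s}{\pi},
    \]
    % where K(a_s) is a polynomial in a_s with computable coefficients.
    ```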

  19. Influence of pre-injection control parameters on main-injection fuel quantity for an electronically controlled double-valve fuel injection system of diesel engine

    NASA Astrophysics Data System (ADS)

    Song, Enzhe; Fan, Liyun; Chen, Chao; Dong, Quan; Ma, Xiuzhen; Bai, Yun

    2013-09-01

    A simulation model of an electronically controlled two-solenoid-valve fuel injection system for a diesel engine is established in the AMESim environment. The accuracy of the model is validated through comparison with experimental data. The influence of pre-injection control parameters on main-injection quantity under different control modes is analyzed. In the spill control valve mode, main-injection fuel quantity decreases gradually and then reaches a stable level as the multi-injection dwell time increases. In the needle control valve mode, main-injection fuel quantity increases with rising multi-injection dwell time; this effect becomes more obvious at high engine speeds and large main-injection pulse widths. Pre-injection pulse width has no obvious influence on main-injection quantity under the two control modes; the variation in main-injection quantity is in the range of 1 mm³.

  20. Two Dimensions of Time could produce a New Supersymmetric Theory

    NASA Astrophysics Data System (ADS)

    Kriske, Richard

    2014-03-01

    In the collapse of a system into the eigenstate of an operator, a new type of time, call it ``information time,'' could be inferred. One could look at this time to evolve the quantum state as a type of ``mass.'' This would be a correction to the explanation of the existing Higgs mechanism. Likewise one could see the dual of this in the dilation in ``clock time'' seen in Special Relativity. In other words, we see a time dilation in ``information time'' as being a delay in acceleration, which we call ``mass.'' The two types of time are duals of each other and are symmetric. The second dimension of time has been overlooked for this reason. Time dilation is the dual of the persistence of the collapse of a system. This duality produces some interesting and measurable effects. One conclusion that one can draw from this ``symmetry'' is that there is a non-commuting set of operators, and a particle that connects the two ``perpendicular'' time axes. We know from classical Quantum Theory that momentum and position do not commute, and this is something like the noncommuting time dimensions, in that momentum has a time-like construction and position has a space-like construction; it is something like x and t not commuting. What is the conserved quantity between the two types of time? Is it energy?

  1. Method for correcting for isotope burn-in effects in fission neutron dosimeters

    DOEpatents

    Gold, Raymond; McElroy, William N.

    1988-01-01

    A method is described for correcting for the effect of isotope burn-in in fission neutron dosimeters. Two quantities are measured in order to quantify the "burn-in" contribution, namely P_(Z',A'), the amount of (Z', A') isotope that is burned in, and F_(Z',A'), the fissions per unit volume produced in the (Z', A') isotope. To measure P_(Z',A'), two solid state track recorder fission deposits are prepared from the very same material that comprises the fission neutron dosimeter, and the mass and mass density are measured. One of these deposits is exposed along with the fission neutron dosimeter, whereas the second deposit is subsequently used for observation of background. P_(Z',A') is then determined by conducting a second irradiation, wherein both the irradiated and unirradiated fission deposits are used in solid state track recorder dosimeters for observation of the absolute number of fissions per unit volume. The difference between the latter determines P_(Z',A'), since the thermal neutron cross section is known. F_(Z',A') is obtained by using a fission neutron dosimeter for this specific isotope, which is exposed along with the original threshold fission neutron dosimeter to experience the same neutron flux-time history at the same location. In order to determine the fissions per unit volume produced in the isotope (Z', A') as it grows in during the irradiation, B_(Z',A'), from these observations, the neutron field must generally be either time independent or a separable function of time t and neutron energy E.
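
    A back-of-the-envelope version of the burn-in determination reads directly off the abstract: the excess fission density of the previously irradiated deposit over the background deposit, produced in a known thermal field, is divided by the thermal fission cross section and fluence to give the burned-in atom density. All numbers below are invented placeholders, not values from the patent.

    ```python
    # Burned-in atom density from the two-deposit comparison:
    #   P = (f_irr - f_bkg) / (sigma_th * phi_t)
    # sigma_th * phi_t is the fissions produced per burned-in atom.
    f_irr = 4.2e8       # fissions/cm^3, previously irradiated deposit
    f_bkg = 1.0e8       # fissions/cm^3, unirradiated (background) deposit
    sigma_th = 580e-24  # cm^2, thermal fission cross section (illustrative)
    phi_t = 1.0e18      # n/cm^2, thermal fluence of the second irradiation

    P = (f_irr - f_bkg) / (sigma_th * phi_t)
    print(f"burned-in atom density P = {P:.3e} atoms/cm^3")
    ```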

  2. Storm/Quiet Ratio Comparisons Between TIMED/SABER NO (sup +)(v) Volume Emission Rates and Incoherent Scatter Radar Electron Densities at E-Region Altitudes

    NASA Technical Reports Server (NTRS)

    Fernandez, J. R.; Mertens, C. J.; Bilitza, D.; Xu, X.; Russell, J. M., III; Mlynczak, M. G.

    2009-01-01

    Broadband infrared limb emission at 4.3 microns is measured by the TIMED/SABER instrument. At night, these emission observations at E-region altitudes are used to derive the so-called NO+(v) Volume Emission Rate (VER). NO+(v) VER can be derived by removing the background CO2(v3) 4.3 micron radiance contribution using SABER-based non-LTE radiation transfer models, and by performing a standard Abel inversion on the residual radiance. SABER observations show that NO+(v) VER is significantly enhanced during magnetic storms, in accordance with increased ionization of the neutral atmosphere by auroral electron precipitation, followed by vibrational excitation of NO+ (i.e., NO+(v)) from fast exothermic ion-neutral reactions and prompt infrared emission at 4.3 microns. Due to charge neutrality, the NO+(v) VER enhancements are highly correlated with electron density enhancements, as observed for example by Incoherent Scatter Radar (ISR). In order to characterize the response of the storm-time E-region from both SABER and ISR measurements, a Storm/Quiet Ratio (SQR) quantity is defined as a function of altitude. For SABER, the SQR is the ratio of storm-to-quiet NO+(v) VER; for ISR, SQR is the storm-to-quiet ratio of electron densities. In this work, we compare SABER and ISR SQR values between 100 and 120 km. Results indicate good agreement between these measurements. SQR values are intended to be used as a correction factor in an empirical storm-time correction to the International Reference Ionosphere model at E-region altitudes.

  3. JPRS Report, China, Red Flag, Number 12, 16 June 1988

    DTIC Science & Technology

    1988-08-12

    national economy; correctly handling the relations between stability and growth speed and results, quantity and quality, construction and... growth of overall demand and striving to increase supplies. Meanwhile, an effort should be directed toward developing the market and improving... Planned and Macroeconomic Management. The Most Important Thing Is to Emancipate the Mind Further, Change the Traditional Concept of Planning, and

  4. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition

    NASA Astrophysics Data System (ADS)

    Daluge, D. R.; Ruedger, W. H.

    1981-06-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  5. Enhanced verification test suite for physics simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.

  6. The quantum measurement of time

    NASA Technical Reports Server (NTRS)

    Shepard, Scott R.

    1994-01-01

    Traditionally, in non-relativistic Quantum Mechanics, time is considered to be a parameter, rather than an observable quantity like space. In relativistic Quantum Field Theory, space and time are treated equally by reducing space to also be a parameter. Herein, after a brief review of other measurements, we describe a third possibility, which is to treat time as a directly observable quantity.

  7. Research on calibration method of downhole optical fiber temperature measurement and its application in SAGD well

    NASA Astrophysics Data System (ADS)

    Lu, Zhiwei; Han, Li; Hu, Chengjun; Pan, Yong; Duan, Shengnan; Wang, Ningbo; Li, Shijian; Nuer, Maimaiti

    2017-10-01

    With the development of oil and gas fields, the accuracy and quantity requirements of the real-time dynamic monitoring data needed for well dynamic analysis and regulation are increasing. Permanent, distributed downhole optical fiber temperature and pressure monitoring, along with other online, real-time, continuous data monitoring, has become an important data acquisition and transmission technology in digital and intelligent oil field construction. Given the requirement of dynamic analysis of the steam chamber development state in SAGD horizontal wells in the F oil reservoir of the Xinjiang oilfield, it is necessary to carry out real-time and continuous temperature monitoring in the horizontal section. Based on a study of the principle of optical fiber temperature measurement, the factors that cause the deviation of optical fiber temperature sensing are analyzed, and a method of fiber temperature calibration is proposed to solve the problem of temperature deviation. Field application in three wells showed that accurate measurement of downhole temperature could be attained through temperature correction. Real-time, continuous downhole distributed fiber temperature sensing technology has high application value in the reservoir management of SAGD horizontal wells and provides a reference for similar dynamic monitoring in reservoir production.
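
    The abstract does not spell out the calibration method, but a common approach for distributed temperature sensing is a linear (offset-and-gain) correction fitted against reference sensors; the sketch below illustrates that generic idea with invented data and should not be read as the paper's specific procedure.

    ```python
    import numpy as np

    # Reference temperatures (e.g., from calibrated downhole gauges) and
    # the raw DTS readings at the same depths; values are placeholders.
    t_ref = np.array([20.0, 80.0, 140.0, 200.0])   # degC
    t_raw = np.array([18.4, 77.1, 135.9, 194.6])   # degC

    # Least-squares fit of a linear correction  T_cal = a * T_raw + b
    a, b = np.polyfit(t_raw, t_ref, deg=1)

    def calibrate(t):
        """Apply the fitted offset-and-gain correction to raw readings."""
        return a * t + b

    profile_raw = np.array([150.2, 151.0, 149.7])  # raw horizontal scan
    print(calibrate(profile_raw))                  # corrected temperatures
    ```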

  8. WTO — a deterministic approach to 4-fermion physics

    NASA Astrophysics Data System (ADS)

    Passarino, Giampiero

    1996-09-01

    The program WTO, which is designed for computing cross sections and other relevant observables in the e+e- annihilation into four fermions, is described. The various quantities are computed over both a completely inclusive experimental set-up and a realistic one, i.e. with cuts on the final state energies, final state angles, scattering angles and final state invariant masses. Initial state QED corrections are included by means of the structure function approach while final state QCD corrections are applicable in their naive formulation. A gauge restoring mechanism is included according to the Fermion-Loop scheme. The program structure is highly modular and particular care has been devoted to computing efficiency and speed.

  9. On the accuracy of Whitham's method. [for steady ideal gas flow past cones

    NASA Technical Reports Server (NTRS)

    Zahalak, G. I.; Myers, M. K.

    1974-01-01

    The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.

  10. Natural teeth-retained splint based on a patient-specific 3D-printed mandible used for implant surgery and vestibuloplasty

    PubMed Central

    Xing, Helin; Wu, Jinshuang; Zhou, Lei; Yang, Sefei

    2017-01-01

    Abstract Rationale: With respect to improving the quality of oral rehabilitation, the management of keratinized mucosa is as important as bone condition for implant success. To enhance this management, a natural teeth-retained splint based on a patient-specific 3-dimensional (3D) printed mandible was used in vestibuloplasty to provide sufficient keratinized mucosa around dental implants to support long-term implant maintenance. Patient concerns: A 28-year-old male patient had suffered a fracture of the anterior mandible 1 year earlier, and the fracture had been treated with titanium fixation. Diagnoses: The patient had lost the mandibular incisors on both sides and had a shallow vestibule and little keratinized mucosa. Interventions: In the first-stage implant surgery, 2 implants were inserted, and the titanium fracture fixation plates and screws were removed at the same time. During second-stage implant surgery, vestibuloplasty was performed and the natural teeth-retained splint was applied. The splint was made based upon a patient-specific 3D-printed mandible. At the 30-day follow-up, the splint was modified and reset. The modified splint was removed after an additional 60 days, and the patient received prosthetic treatment. Outcomes: After prosthetic treatment, successful oral rehabilitation was achieved. At 1 year and 3 years after the implant prosthesis was finished, the patient exhibited a good quantity of keratinized gingiva. Lessons: The proposed splint is a simple and time-effective technique for correcting soft tissue defects in implant dentistry that ensures a good quantity of keratinized mucosa. PMID:29310359

  11. Open questions and a proposal: A critical review of the evidence on infant numerical abilities

    PubMed Central

    Cantrell, Lisa; Smith, Linda B.

    2013-01-01

    Considerable research has investigated infants' numerical capacities. Studies in this domain have used procedures of habituation, head turn, violation of expectation, reaching, and crawling to ask what quantities infants discriminate and represent visually, auditorily, as well as intermodally. The consensus view from these studies is that infants possess a numerical system that is amodal and applicable to the quantification of any kind of entity, and that this system is fundamentally separate from other systems that represent continuous magnitude. Although there is much evidence consistent with this view, there are also inconsistencies in the data. This paper provides a broad review of what we know, including the evidence suggesting systematic early knowledge as well as the peculiarities and gaps in the empirical findings with respect to the consensus view. We argue, from these inconsistencies, that the consensus view cannot be entirely correct. In light of the evidence, we propose a new hypothesis, the Signal Clarity hypothesis, which posits a developmental role for dimensions of continuous quantity within the discrete quantity system and calls for a broader research agenda that considers the covariation of discrete and continuous quantities not simply as a problem for experimental control but as information that developing infants may use to build more precise and robust representations of number. PMID:23748213

  12. Neutral-atom electron binding energies from relaxed-orbital relativistic Hartree-Fock-Slater calculations for Z between 2 and 106

    NASA Technical Reports Server (NTRS)

    Huang, K.-N.; Aoyagi, M.; Mark, H.; Chen, M. H.; Crasemann, B.

    1976-01-01

    Electron binding energies in neutral atoms have been calculated relativistically, with the requirement of complete relaxation. Hartree-Fock-Slater wave functions served as zeroth-order eigenfunctions to compute the expectation of the total Hamiltonian. A first-order correction to the local approximation was thus included. Quantum-electrodynamic corrections were made. For all elements with atomic numbers ranging from 2 to 106, the following quantities are listed: total energies, electron kinetic energies, electron-nucleus potential energies, electron-electron potential energies consisting of electrostatic and Breit interaction (magnetic and retardation) terms, and vacuum polarization energies. Binding energies including relaxation are listed for all electrons in all atoms over the indicated range of atomic numbers. A self-energy correction is included for the 1s, 2s, and 2p(1/2) levels. Results for selected atoms are compared with energies calculated by other methods and with experimental values.

  13. Computation of the properties of liquid neon, methane, and gas helium at low temperature by the Feynman-Hibbs approach.

    PubMed

    Tchouar, N; Ould-Kaddour, F; Levesque, D

    2004-10-15

    The properties of liquid methane, liquid neon, and gaseous helium are calculated at low temperatures over a large range of pressure from classical molecular-dynamics simulations. The molecular interactions are represented by Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach. The equations of state and the diffusion and shear viscosity coefficients are determined for neon at 45 K, helium at 80 K, and methane at 110 K. A comparison is made with the existing experimental data and, for thermodynamic quantities, with results computed from quantum numerical simulations when they are available. The theoretical variation of the viscosity coefficient with pressure is in good agreement with the experimental data when the quantum corrections are taken into account, considerably reducing the 60% discrepancy between simulations and experiments observed in the absence of these corrections.
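
    The Feynman-Hibbs approach replaces the classical pair potential with an effective one that folds in the leading quantum spread of the particles. Below is a minimal sketch for a Lennard-Jones pair using the standard quadratic Feynman-Hibbs correction; the neon parameters are typical literature values used here only for illustration.

    ```python
    import numpy as np

    # Neon parameters (typical literature values, for illustration)
    EPS_K = 36.8               # well depth epsilon/k_B, K
    SIGMA = 3.035e-10          # m
    MASS = 20.18 * 1.6605e-27  # kg
    HBAR = 1.0546e-34          # J s
    KB = 1.3807e-23            # J/K

    def lj(r, eps, sig):
        return 4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

    def lj_laplacian(r, eps, sig):
        """Laplacian of the LJ potential, V'' + 2 V'/r, evaluated
        analytically."""
        return 4.0 * eps * (132.0 * sig**12 / r**14 - 30.0 * sig**6 / r**8)

    def feynman_hibbs(r, T, eps, sig, mass):
        """Quadratic Feynman-Hibbs effective pair potential:
        V_FH = V + (hbar^2 / (24 mu k_B T)) * Laplacian(V), mu = m/2."""
        mu = mass / 2.0
        pref = HBAR**2 / (24.0 * mu * KB * T)
        return lj(r, eps, sig) + pref * lj_laplacian(r, eps, sig)

    r = np.linspace(0.9 * SIGMA, 3.0 * SIGMA, 5)
    eps = EPS_K * KB
    print(feynman_hibbs(r, T=45.0, eps=eps, sig=SIGMA, mass=MASS) / KB)
    ```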

  14. Alterations to the relativistic Love-Franey model and their application to inelastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeile, J.R.

    The fictitious axial-vector and tensor mesons for the real part of the relativistic Love-Franey interaction are removed. In an attempt to make up for this loss, derivative couplings are used for the π and ρ mesons. Such derivative couplings require the introduction of axial-vector and tensor contact term corrections. Meson parameters are then fit to free nucleon-nucleon scattering data. The resulting fits are comparable to those of the relativistic Love-Franey model provided that the contact term corrections are included and the fits are weighted over the physically significant quantity of twice the tensor minus the axial-vector Lorentz invariants. Failure to include contact term corrections leads to poor fits at higher energies. The off-shell behavior of this model is then examined by looking at several applications from inelastic proton-nucleus scattering.

  15. Accuracy of Answers to Cell Lineage Questions Depends on Single-Cell Genomics Data Quality and Quantity.

    PubMed

    Spiro, Adam; Shapiro, Ehud

    2016-06-01

    Advances in single-cell (SC) genomics enable commensurate improvements in methods for uncovering lineage relations among individual cells, as determined by phylogenetic analysis of the somatic mutations harbored by each cell. Theoretically, complete and accurate knowledge of the genome of each cell of an individual can produce an extremely accurate cell lineage tree of that individual. However, the reality of SC genomics is that such complete and accurate knowledge will be wanting, in quality and in quantity, for the foreseeable future. In this paper we offer a framework for systematically exploring the feasibility of answering cell lineage questions based on SC somatic mutational analysis, as a function of SC genomics data quality and quantity. We take into consideration the current limitations of SC genomics in terms of mutation data quality, most notably amplification bias and allele dropouts (ADO), as well as cost, which puts practical limits on the mutation data quantity obtained from each cell and on cell sample density. We do so by generating in silico cell lineage trees using a dedicated formal language, eSTG, and show how the ability to answer a cell lineage question correctly depends on the quality and quantity of the SC mutation data. The presented framework can serve as a baseline for assessing the potential of current SC genomics to unravel cell lineage dynamics, as well as the potential contributions of future advancements, both biochemical and computational, for the task.

  16. Free-ranging dogs assess the quantity of opponents in intergroup conflicts.

    PubMed

    Bonanni, Roberto; Natoli, Eugenia; Cafazzo, Simona; Valsecchi, Paola

    2011-01-01

    In conflicts between social groups, the decision of competitors whether to attack or retreat should be based on an assessment of the quantity of individuals in their own and the opposing group. Experimental studies on numerical cognition in animals suggest that they may represent both large and small numbers as noisy mental magnitudes subject to scalar variability, and small numbers (≤4) also as discrete object files. Consequently, discriminating between large quantities, but not between smaller ones, should become easier as the asymmetry between quantities increases. Here, we tested these hypotheses by recording naturally occurring conflicts in a population of free-ranging dogs, Canis lupus familiaris, living in a suburban environment. The overall probability of at least one pack member approaching opponents aggressively increased with a decreasing ratio of the number of rivals to that of companions. Moreover, the probability that more than half of the pack members withdrew from a conflict increased when this ratio increased. The skill of dogs in correctly assessing relative group size appeared to improve with increasing asymmetry in size when at least one pack comprised more than four individuals, and appeared to be affected to a lesser extent by group size asymmetries when dogs had to compare only small numbers. These results provide the first indication that a representation of quantity based on noisy mental magnitudes may be involved in the assessment of opponents in intergroup conflicts, and they leave open the possibility that an additional, more precise mechanism may operate with small numbers.

  17. Quantity Time: Moving Beyond the Quality Time Myth--A Practical Guide to Spending More Time with Your Child.

    ERIC Educational Resources Information Center

    Kraehmer, Steffen T.

    Recognizing that the development of an emotional bond between children and their parents stems from the ability to express love and the willingness to share time together, this book is designed to help parents spend quantity time with their children and establish opportunities for appreciating each other's company. The book is based on START…

  18. Metacognitive effects of initial question difficulty on subsequent memory performance.

    PubMed

    Pansky, Ainat; Goldsmith, Morris

    2014-10-01

    In two experiments, we examined whether relative retrieval fluency (the relative ease or difficulty of answering questions from memory) would be translated, via metacognitive monitoring and control processes, into an overt effect on the controlled behavior, that is, the decision whether to answer a question or abstain. Before answering a target set of multiple-choice general-knowledge questions (intermediate-difficulty questions in Exp. 1, deceptive questions in Exp. 2), the participants first answered either a set of difficult questions or a set of easy questions. For each question, they provided a forced-report answer, followed by a subjective assessment of the likelihood that their answer was correct (confidence) and by a free-report control decision: whether or not to report the answer for a potential monetary bonus (or penalty). The participants' ability to answer the target questions (forced-report proportion correct) was unaffected by the initial question difficulty. However, a predicted metacognitive contrast effect was observed: When the target questions were preceded by a set of difficult rather than easy questions, the participants were more confident in their answers to the target questions, and hence were more likely to report them, thus increasing the quantity of freely reported correct information. The option of free report was more beneficial after initial question difficulty than after initial question ease, in terms of both the gain in accuracy (Exp. 2) and a smaller cost in quantity (Exps. 1 and 2). These results demonstrate that changes in subjective experience can influence metacognitive monitoring and control, thereby affecting free-report memory performance independently of forced-report performance.

  19. Theoretical oscillation frequencies for solar-type dwarfs from stellar models with 〈3D〉-atmospheres

    NASA Astrophysics Data System (ADS)

    Jørgensen, Andreas Christ Sølvsten; Weiss, Achim; Mosumgaard, Jakob Rørsted; Silva Aguirre, Victor; Sahlholdt, Christian Lundsgaard

    2017-12-01

    We present a new method for replacing the outermost layers of stellar models with interpolated atmospheres based on results from 3D simulations, in order to correct for structural inadequacies of these layers. This replacement is known as patching. Tests based on 3D atmospheres from three different codes and interior models with different input physics are performed. Using solar models, we investigate how different patching criteria affect the eigenfrequencies. These criteria include the depth at which the replacement is performed, the quantity on which the replacement is based, and the mismatch in Teff and log g between the unpatched model and the patched 3D atmosphere. We find the eigenfrequencies to be unaltered by the patching depth deep within the adiabatic region, while changing the patching quantity or the employed atmosphere grid leads to frequency shifts that may exceed 1 μHz. Likewise, the eigenfrequencies are sensitive to mismatches in Teff or log g. A thorough investigation of the accuracy of a new scheme for interpolating mean 3D stratifications within the atmosphere grids is furthermore performed. Throughout large parts of the atmosphere grids, our interpolation scheme yields sufficiently accurate results for the purposes of asteroseismology. We apply our procedure in asteroseismic analyses of four Kepler stars and draw the same conclusions as in the solar case: correcting for structural deficiencies lowers the eigenfrequencies, this correction is slightly sensitive to the patching criteria, and the remaining frequency discrepancy between models and observations is less frequency dependent. Our work shows the applicability and relevance of patching in asteroseismology.

  20. Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.

    PubMed

    Rideout, Brendan P; Dosso, Stan E; Hannay, David E

    2013-09-01

    This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
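
    The core of such arrival-time localization is a nonlinear least-squares problem: the predicted arrival time at receiver i is the unknown emission time plus range over sound speed, and the source position is found by iterating a linearized update. The sketch below is a bare-bones Gauss-Newton solver for the direct-path case only, without the paper's reflected paths, priors, or uncertainty scaling; the geometry and noise level are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    c = 1450.0                                # sound speed, m/s (assumed)
    rx = np.array([[0, 0, 50], [800, 0, 45],  # receiver positions, m
                   [0, 900, 55], [700, 850, 60]], dtype=float)
    src_true = np.array([300.0, 400.0, 30.0]) # true source position
    t0_true = 0.2                             # true emission time, s

    t_obs = t0_true + np.linalg.norm(rx - src_true, axis=1) / c
    t_obs += rng.normal(0.0, 1e-4, len(rx))   # 0.1 ms timing noise

    # Gauss-Newton on m = (x, y, z, t0), direct paths only
    m = np.array([100.0, 100.0, 40.0, 0.0])
    for _ in range(10):
        d = rx - m[:3]
        r = np.linalg.norm(d, axis=1)
        pred = m[3] + r / c                   # predicted arrival times
        J = np.hstack([-d / (c * r[:, None]), np.ones((len(rx), 1))])
        m += np.linalg.lstsq(J, t_obs - pred, rcond=None)[0]

    print("position estimate:", m[:3], " t0 estimate:", m[3])
    ```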

  1. Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Border, J. S.

    1988-01-01

    The physical models employed in GPSOMC and the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exceptions of an improved ocean loading model and some new options for handling clock parametrization. Such misprints as were discovered were corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.

  2. A Study of the Quantity of Time for Teaching Reading.

    ERIC Educational Resources Information Center

    Florida Reading Association.

    A study was conducted to provide descriptive information about the quantity of classroom time used for teaching reading and the interruptive events that occur during the scheduled reading time. Data were gathered from 148 public and private school teachers representing all grade levels and a wide range of teaching experience. The subjects each…

  3. The uniform electron gas at warm dense matter conditions

    NASA Astrophysics Data System (ADS)

    Dornheim, Tobias; Groth, Simon; Bonitz, Michael

    2018-05-01

    Motivated by the current high interest in the field of warm dense matter research, in this article we review the uniform electron gas (UEG) at finite temperature and over a broad density range relevant for warm dense matter applications. We provide an exhaustive overview of different simulation techniques, focusing on recent developments in the dielectric formalism (linear response theory) and quantum Monte Carlo (QMC) methods. Our primary focus is on two novel QMC methods that have recently allowed us to achieve breakthroughs in the thermodynamics of the warm dense electron gas: permutation blocking path integral MC (PB-PIMC) and configuration path integral MC (CPIMC). In fact, a combination of PB-PIMC and CPIMC has allowed for a highly accurate description of the warm dense UEG over a broad density-temperature range. We are able to effectively avoid the notorious fermion sign problem, without invoking uncontrolled approximations such as the fixed node approximation. Furthermore, a new finite-size correction scheme is presented that makes it possible to treat the UEG in the thermodynamic limit without loss of accuracy. In addition, we discuss in detail the construction of a parametrization of the exchange-correlation free energy on the basis of these data - the central thermodynamic quantity that provides a complete description of the UEG and is of crucial importance as input for the simulation of real warm dense matter applications, e.g., via thermal density functional theory. A second major aspect of this review is the use of our ab initio simulation results to test previous theories, including restricted PIMC, finite-temperature Green functions, the classical mapping by Perrot and Dharma-wardana, and various dielectric methods such as the random phase approximation, the Singwi-Tosi-Land-Sjölander scheme (in both its static and quantum versions), the Vashishta-Singwi scheme, and the recent Tanaka scheme for the local field correction. Thus, for the first time, thorough benchmarks of the accuracy of important approximation schemes are possible for various quantities, such as different energies, in particular the exchange-correlation free energy, and the static structure factor. In the final part of this paper, we outline a way to rigorously extend our QMC studies to the inhomogeneous electron gas. We present first ab initio data for the static density response and for the static local field correction.

  4. Relative quantity judgments in the beluga whale (Delphinapterus leucas) and the bottlenose dolphin (Tursiops truncatus).

    PubMed

    Abramson, José Z; Hernández-Lloreda, Victoria; Call, Josep; Colmenares, Fernando

    2013-06-01

    Numerous studies have documented the ability of many species to make relative quantity judgments using an analogue magnitude system. We investigated whether one beluga whale, Delphinapterus leucas, and three bottlenose dolphins, Tursiops truncatus, were capable of selecting the larger of two sets of quantities, and analyzed whether their performance matched predictions from the object-file model versus the analog accumulator model. In Experiment 1, the two sets were presented simultaneously, under water, and they were visually (condition 1) or echoically (condition 2) available at the time of choice. In Experiment 2, the two sets were presented above the water, successively (condition 1) or sequentially, item by item (condition 2), so that they were not visually available at the time of choice (condition 1) or at any time throughout the experiment (condition 2). We analyzed the effect of the ratio between quantities, the difference between quantities, and the total number of items presented on the subjects' choices. All subjects selected the larger of the two sets of quantities above chance levels in all conditions. However, unlike in most previous studies, the subjects' choices did not match the predictions from the accumulator model. Whether these findings reflect interspecies differences in the mechanisms which underpin relative quantity judgments remains to be determined. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Master standard data quantity food production code. Macro elements for synthesizing production labor time.

    PubMed

    Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C

    1978-06-01

    Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.

  6. Evidence for broken Galilean invariance at the quantum spin Hall edge

    NASA Astrophysics Data System (ADS)

    Geissler, Florian; Crépin, François; Trauzettel, Björn

    2015-12-01

    We study transport properties of the helical edge channels of a quantum spin Hall insulator, in the presence of electron-electron interactions and weak, local Rashba spin-orbit coupling. The combination of the two allows for inelastic backscattering that does not break time-reversal symmetry, resulting in interaction-dependent power-law corrections to the conductance. Here, we use a nonequilibrium Keldysh formalism to describe the situation of a long, one-dimensional edge channel coupled to external reservoirs, where the applied bias is the leading energy scale. By calculating explicitly the corrections to the conductance up to fourth order of the impurity strength, we analyze correlated single- and two-particle backscattering processes on a microscopic level. Interestingly, we show that the modeling of the leads together with the breaking of Galilean invariance has important effects on the transport properties. Such breaking occurs because the Galilean invariance of the bulk spectrum transforms into an emergent Lorentz invariance of the edge spectrum. With this broken Galilean invariance at the quantum spin Hall edge, we find a contribution to single-particle backscattering with a very low power scaling, while in the presence of Galilean invariance the leading contribution will be due to correlated two-particle backscattering only. This difference is further reflected in the different values of the Fano factor of the shot noise, an experimentally observable quantity. The described behavior is specific to the Rashba scatterer and does not occur in the case of backscattering off a time-reversal-breaking, magnetic impurity.

  7. Sequential monitoring of beach litter using webcams.

    PubMed

    Kako, Shin'ichiro; Isobe, Atsuhiko; Magome, Shinya

    2010-05-01

    This study attempts to establish a system for the sequential monitoring of beach litter using webcams placed at Ookushi beach, Goto Islands, Japan, recording the temporal variability in the quantities of beach litter every 90 min over a one-and-a-half-year period. The time series of the quantities of beach litter, computed by counting pixels with a lightness greater than a threshold value in the photographs, shows that litter does not increase monotonically on the beach but fluctuates, mainly on a monthly time scale or less. To investigate what factors influence this variability, the time derivative of the quantity of beach litter is compared with satellite-derived wind speeds. It is found that the beach litter quantities vary largely with winds, but there may be other influencing factors. (c) 2010 Elsevier Ltd. All rights reserved.
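
    The pixel-counting step lends itself to a very short sketch. The snippet below is a generic illustration with an assumed threshold and placeholder file names, not the study's actual processing chain: each webcam frame is converted to its lightness channel, and pixels above the threshold are counted to build the litter time series.

    ```python
    import numpy as np
    from PIL import Image

    THRESHOLD = 200          # assumed lightness cutoff, 0-255

    def litter_pixels(path):
        """Count pixels brighter than THRESHOLD in one webcam frame."""
        gray = np.asarray(Image.open(path).convert("L"))  # lightness
        return int((gray > THRESHOLD).sum())

    # One frame every 90 minutes; paths are placeholders.
    frames = ["beach_0000.jpg", "beach_0090.jpg", "beach_0180.jpg"]
    series = [litter_pixels(f) for f in frames]
    print(series)            # proxy time series of beach litter quantity
    ```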

  8. Finite size effects in the thermodynamics of a free neutral scalar field

    NASA Astrophysics Data System (ADS)

    Parvan, A. S.

    2018-04-01

    Exact analytical lattice results for the partition function of the free neutral scalar field in one spatial dimension, in both configuration and momentum space, were obtained in the framework of the path integral method. The symmetric square matrices of the bilinear forms on the vector space of fields were found explicitly in both configuration and momentum space. The exact lattice results for the partition function were generalized to three-dimensional spatial momentum space, and the main thermodynamic quantities were derived both on the lattice and in the continuum limit. The thermodynamic properties and the finite-volume corrections to the thermodynamic quantities of the free real scalar field were studied. We found that on the finite lattice the exact lattice results for the free massive neutral scalar field agree with the continuum limit only in the region of small values of temperature and volume. However, at these temperatures and volumes the continuum physical quantities for both the massive and massless scalar field deviate essentially from their thermodynamic-limit values and recover them only at high temperatures and/or large volumes in the thermodynamic limit.

  9. HydroUnits: A Python-based Physical Units Management Tool in Hydrologic Computing Systems

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Piasecki, M.

    2015-12-01

    While one objective of data management systems is to provide the units when annotating collected data, another is that the units must be correctly manipulated during conversion steps. This is not a trivial task, however, and the units conversion time and errors for large datasets can be quite expensive. To date, more than a dozen Python modules have been developed to deal with units attached to quantities. However, they fall short in many ways and also suffer from not integrating with a units controlled vocabulary. Moreover, none of them permits the encoding of some complex units defined in the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc.'s Observations Data Model (CUAHSI ODM) as a vectorial representation for storage demand reduction, and none incorporates provisions to accommodate unforeseen standards-based units. We developed HydroUnits, a Python-based units management tool, for three specific purposes: encoding of physical units in the Transducer Electronic Data Sheet (TEDS) as defined in the IEEE 1451.0 standard; performing dimensional analysis; and on-the-fly conversion of time series, allowing users to retrieve data from a data source in a desired equivalent unit while accommodating unforeseen and user-defined units. HydroUnits differentiates itself from existing tools by a number of factors, including the implementation approach adopted, the adoption of standards-based units naming conventions and, more importantly, the emphasis on units controlled vocabularies, which are a critical aspect of units treatment. Additionally, HydroUnits supports unit conversion for quantities with an additive scaling factor, natively supports time series conversion, and takes leap years into consideration for units involving the time dimension (e.g., month, minute). Due to its overall implementation approach, HydroUnits exhibits a high level of versatility that no other tool we are aware of has achieved.
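
    The abstract's two trickier requirements, additive scaling factors and calendar-aware time units, are easy to motivate with a toy converter. The sketch below is not the HydroUnits API (which the abstract does not expose); it is a minimal illustration of linear-with-offset conversion (e.g., temperature) and of why "month" cannot be treated as a fixed number of seconds.

    ```python
    from datetime import datetime

    # Linear units: value_si = scale * value + offset
    # (the offset handles additive scales such as degrees Celsius)
    UNITS = {
        "degC": (1.0, 273.15),   # -> kelvin
        "K":    (1.0, 0.0),
        "ft":   (0.3048, 0.0),   # -> meter
        "m":    (1.0, 0.0),
    }

    def convert(value, src, dst):
        """Convert through the SI base unit, honoring additive offsets."""
        s1, o1 = UNITS[src]
        s2, o2 = UNITS[dst]
        return ((value * s1 + o1) - o2) / s2

    print(convert(25.0, "degC", "K"))   # 298.15

    # Calendar-aware time units: a "month" has no fixed length in
    # seconds, so conversion must consult actual dates (leap years too).
    feb_2016 = (datetime(2016, 3, 1) - datetime(2016, 2, 1)).days
    feb_2015 = (datetime(2015, 3, 1) - datetime(2015, 2, 1)).days
    print(feb_2016, feb_2015)           # 29 vs 28 days
    ```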

  10. Primordial black holes in linear and non-linear regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allahyari, Alireza; Abolhasani, Ali Akbar; Firouzjaee, Javad T., E-mail: allahyari@physics.sharif.edu, E-mail: j.taghizadeh.f@ipm.ir

    We revisit the formation of primordial black holes (PBHs) in the radiation-dominated era for both linear and non-linear regimes, elaborating on the concept of an apparent horizon. Contrary to the expectation from vacuum models, we argue that in a cosmological setting a density fluctuation with a high density does not always collapse to a black hole. To this end, we first elaborate on the perturbation theory for spherically symmetric spacetimes in the linear regime. Thereby, we introduce two gauges. This allows us to introduce a well-defined gauge-invariant quantity for the expansion of null geodesics. Using this quantity, we argue that PBHs do not form in the linear regime irrespective of the density of the background. Finally, we consider the formation of PBHs in non-linear regimes, adopting the spherical collapse picture. In this picture, over-densities are modeled by closed FRW models in the radiation-dominated era. The difference of our approach is that we start by finding an exact solution for a closed radiation-dominated universe. This yields exact results for the turn-around time and radius. It is important that we take the initial conditions from the linear perturbation theory. Additionally, instead of using the uniform Hubble gauge condition, both density and velocity perturbations are admitted in this approach. Thereby, the matching condition imposes an important constraint on the initial velocity perturbations, δ^h_0 = −δ_0/2. This can be extended to higher orders. Using this constraint, we find that the apparent horizon of a PBH forms when δ > 3 at the turn-around time. Corrections also appear from the third order. Moreover, a PBH forms when its apparent horizon is outside the sound horizon at the re-entry time. Applying this condition, we infer that the threshold value of the density perturbations at horizon re-entry should satisfy δ_th > 0.7.

  11. Evaluation of alternative formulae for calculation of surface temperature in snowmelt models using frequency analysis of temperature observations

    Treesearch

    C. H. Luce; D. G. Tarboton

    2010-01-01

    The snow surface temperature is an important quantity in the snow energy balance, since it modulates the exchange of energy between the surface and the atmosphere as well as the conduction of energy into the snowpack. It is therefore important to correctly model snow surface temperatures in energy balance snowmelt models. This paper focuses on the relationship between...

  12. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition. [NEEDS Information Adaptive System

    NASA Technical Reports Server (NTRS)

    Daluge, D. R.; Ruedger, W. H.

    1981-01-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  13. Comment on 'Can infrared gravitons screen Λ?'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsamis, N. C.; Woodard, R. P.; Department of Physics, University of Florida, Gainesville, Florida 32611

    2008-07-15

    We reply to the recent criticism by Garriga and Tanaka of our proposal that quantum gravitational loop corrections may lead to a secular screening of the effective cosmological constant. Their argument rests upon a renormalization scheme in which the composite operator (R√(−g) − 4Λ√(−g))_ren is defined to be the trace of the renormalized field equations. Although this is a peculiar prescription, we show that it does not preclude secular screening. Moreover, we show that a constant Ricci scalar does not even classically imply a constant expansion rate. Other important points are: (1) the quantity R_ren of Garriga and Tanaka is neither a properly defined composite operator, nor is it constant; (2) gauge dependence does not render a Green's function devoid of physical content; (3) scalar models on a nondynamical de Sitter background (for which there is no gauge issue) can induce arbitrarily large secular contributions to the stress tensor; (4) the same secular corrections appear in observable quantities in quantum gravity; and (5) the prospects seem good for deriving a simple stochastic formulation of quantum gravity in which the leading secular effects can be summed and for which the expectation values of even complicated, gauge invariant operators can be computed at leading order.

  14. Explosion Hazards Associated with Spills of Large Quantities of Hazardous Materials. Phase I

    DTIC Science & Technology

    1974-10-01

    quantities of hazardous material such as liquefied natural gas (LNG), liquefied petroleum gas (LPG), or ethylene. The principal results are (1) a...associated with spills of large quantities of hazardous material such as liquefied natural gas (LNG), liquefied petroleum gas (LPG), or ethylene. The...liquefied natural gas (LNG). Unfortunately, as the quantity of material shipped at one time increases, so does the potential hazard associated with

  15. An Improved Representation of Regional Boundaries on Parcellated Morphological Surfaces

    PubMed Central

    Hao, Xuejun; Xu, Dongrong; Bansal, Ravi; Liu, Jun; Peterson, Bradley S.

    2010-01-01

    Establishing the correspondences of brain anatomy with function is important for understanding neuroimaging data. Regional delineations on morphological surfaces define anatomical landmarks and help to visualize and interpret both functional data and morphological measures mapped onto the cortical surface. We present an efficient algorithm that accurately delineates the morphological surface of the cerebral cortex in real time during generation of the surface using information from parcellated 3D data. With this accurate region delineation, we then develop methods for boundary-preserved simplification and smoothing, as well as procedures for the automated correction of small, misclassified regions to improve the quality of the delineated surface. We demonstrate that our delineation algorithm, together with a new method for double-snapshot visualization of cortical regions, can be used to establish a clear correspondence between brain anatomy and mapped quantities, such as morphological measures, across groups of subjects. PMID:21144708

  16. Probabilistic vs. non-probabilistic approaches to the neurobiology of perceptual decision-making

    PubMed Central

    Drugowitsch, Jan; Pouget, Alexandre

    2012-01-01

    Optimal binary perceptual decision making requires accumulation of evidence in the form of a probability distribution that specifies the probability of the choices being correct given the evidence so far. Reward rates can then be maximized by stopping the accumulation when the confidence about either option reaches a threshold. Behavioral and neuronal evidence suggests that humans and animals follow such a probabilistic decision strategy, although its neural implementation has yet to be fully characterized. Here we show that diffusion decision models and attractor network models provide an approximation to the optimal strategy only under certain circumstances. In particular, neither model type is sufficiently flexible to encode the reliability of both the momentary and the accumulated evidence, which is a prerequisite to accumulating evidence of time-varying reliability. Probabilistic population codes, in contrast, can encode these quantities and, as a consequence, have the potential to implement the optimal strategy accurately. PMID:22884815

  17. Predicting rates of inbreeding in populations undergoing selection.

    PubMed Central

    Woolliams, J A; Bijma, P

    2000-01-01

    Tractable forms of predicting rates of inbreeding (ΔF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. ΔF was shown to be approximately ¼(1 − ω) times the expected sum of squared lifetime contributions, where ω is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express ΔF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables, then the expected long-term contribution could be substituted for the observed, provided that ¼ (since ω = 0) was increased to ½. Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting ΔF with sib indices in discrete generations, since previously published solutions had proved complex. PMID:10747074
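
    As a toy illustration of the central relation in this abstract, the sketch below draws hypothetical Poisson family sizes, converts them into normalised lifetime contributions, and applies ΔF ≈ ¼(1 − ω) times the sum of squared contributions. All numbers and the contribution model are assumptions for illustration, not the authors' calculations.

    ```python
    import numpy as np

    # Toy illustration (all values hypothetical) of the abstract's relation:
    # the rate of inbreeding is roughly 1/4 * (1 - omega) times the expected
    # sum of squared long-term (lifetime) genetic contributions.
    rng = np.random.default_rng(0)
    n_parents, omega = 100, 0.0              # omega = 0 under random mating

    # Hypothetical lifetime contributions r_i of one cohort of parents,
    # proportional to family size and normalised to sum to one.
    family_sizes = rng.poisson(2.0, n_parents).astype(float)
    r = family_sizes / family_sizes.sum()

    delta_F = 0.25 * (1.0 - omega) * np.sum(r ** 2)
    # When expected rather than observed contributions are substituted,
    # the abstract notes that the prefactor 1/4 must be raised to 1/2.
    print(f"predicted rate of inbreeding: {delta_F:.5f}")
    ```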

  18. Multi-Objective Memetic Search for Robust Motion and Distortion Correction in Diffusion MRI.

    PubMed

    Hering, Jan; Wolf, Ivo; Maier-Hein, Klaus H

    2016-10-01

    Effective image-based artifact correction is an essential step in the analysis of diffusion MR images. Many current approaches are based on retrospective registration, which becomes challenging in the realm of high b -values and low signal-to-noise ratio, rendering the corresponding correction schemes more and more ineffective. We propose a novel registration scheme based on memetic search optimization that allows for simultaneous exploitation of different signal intensity relationships between the images, leading to more robust registration results. We demonstrate the increased robustness and efficacy of our method on simulated as well as in vivo datasets. In contrast to the state-of-art methods, the median target registration error (TRE) stayed below the voxel size even for high b -values (3000 s ·mm -2 and higher) and low SNR conditions. We also demonstrate the increased precision in diffusion-derived quantities by evaluating Neurite Orientation Dispersion and Density Imaging (NODDI) derived measures on a in vivo dataset with severe motion artifacts. These promising results will potentially inspire further studies on metaheuristic optimization in diffusion MRI artifact correction and image registration in general.

  19. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

    The absorbed dose rate is the main characterization quantity for beta radiation. The extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for the backscatter due to the collecting electrode and to the guard ring, and the correction factor for Bremsstrahlung in beta secondary standard radiation beams, are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for the 90Sr/90Y, 85Kr and 147Pm sources, respectively. The differences between the Bremsstrahlung factors determined by the Monte Carlo method and those of the ISO standard were 0.25%, 0.6% and 1% for the 90Sr/90Y, 85Kr and 147Pm sources, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Determination of the microbolometric FPA's responsivity with imaging system's radiometric considerations

    NASA Astrophysics Data System (ADS)

    Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal

    2013-10-01

    Thermal imagers and the infrared array sensors used in them are subject to a calibration procedure during manufacturing, in which their voltage sensitivity to incident radiation is evaluated. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to meet such elevated standards, it is still important that the image faithfully represents temperature variations across the scene. Detectors used in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Often an optical system, when exposed to a uniform Lambertian source, forms a non-uniform irradiation distribution in its image plane. In order to carry out an accurate non-uniformity correction, it is essential to correctly predict the irradiation distribution produced by a uniform source. In this article, a non-uniformity correction method is presented that takes the optical system's radiometry into account. Predictions of the irradiation distribution have been confronted with measured irradiance values. The presented radiometric model allows a fast and accurate non-uniformity correction to be carried out.
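
    A minimal sketch of a two-point non-uniformity correction in the spirit described here, where the reference irradiance maps E_low and E_high come from a radiometric model of the optics (including its roll-off and vignetting) rather than being assumed constant. Function and variable names are hypothetical.

    ```python
    import numpy as np

    def nuc_coefficients(frames_low, frames_high, E_low, E_high):
        """Per-pixel gain/offset from two uniform-source exposures.

        frames_*: raw frame stacks (n, H, W) viewing a uniform blackbody.
        E_*:      predicted irradiance maps (H, W) in the image plane from
                  the radiometric model of the optical system.
        """
        m_low = frames_low.mean(axis=0)
        m_high = frames_high.mean(axis=0)
        gain = (E_high - E_low) / (m_high - m_low)   # counts -> irradiance
        offset = E_low - gain * m_low
        return gain, offset

    def apply_nuc(raw, gain, offset):
        # Output is proportional to scene irradiance, with the optics'
        # predicted non-uniformity removed from the raw counts.
        return gain * raw + offset
    ```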

  1. 40 CFR 423.15 - New source performance standards (NSPS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sources shall not exceed the quantity determined by multiplying the flow of low volume waste sources times... metal cleaning wastes shall not exceed the quantity determined by multiplying the flow of chemical metal... transport water shall not exceed the quantity determined by multiplying the flow of the bottom ash transport...

  2. Performance evaluation of the Abbott RealTime HCV Genotype II for hepatitis C virus genotyping.

    PubMed

    Sohn, Yong-Hak; Ko, Sun-Young; Kim, Myeong Hee; Oh, Heung-Bum

    2010-04-01

    The Abbott RealTime hepatitis C virus (HCV) Genotype II (Abbott Molecular Inc.) for HCV genotyping, which uses real-time PCR technology, has recently been developed. Accuracy and sensitivity of detection were assessed using the HCV RNA PHW202 performance panel (SeraCare Life Sciences). Consistency with restriction fragment mass polymorphism (RFMP) data, cross-reactivity with other viruses, and the ability to detect minor strains in mixtures of genotypes 1 and 2 were evaluated using clinical samples. All performance panel viruses were correctly genotyped at levels of >500 IU/mL. Results were 100% concordant with RFMP genotypic data (66/66). However, 5% (3/66) of the samples examined displayed probable genotypic cross-reactivity. No cross-reactivity with other viruses was evident. Minor strains in the mixtures were not effectively distinguished, even at quantities higher than the detection limit. The Abbott RealTime HCV Genotype II assay was very accurate and yielded results consistent with RFMP data. Although the assay has the advantages of automation and short turnaround time, we suggest that further improvements are necessary before it is used routinely in clinical practice. Efforts are needed to decrease cross-reactivity among genotypes and to improve the ability to detect minor genotypes in mixed infections.

  3. Pressure-Volume-Temperature (PVT) Gauging of an Isothermal Cryogenic Propellant Tank Pressurized with Gaseous Helium

    NASA Technical Reports Server (NTRS)

    VanDresar, Neil T.; Zimmerli, Gregory A.

    2014-01-01

    Results are presented for pressure-volume-temperature (PVT) gauging of a liquid oxygen/liquid nitrogen tank pressurized with gaseous helium that was supplied by a high-pressure cryogenic tank simulating a cold helium supply bottle on a spacecraft. The fluid inside the test tank was kept isothermal by frequent operation of a liquid circulation pump and spray system, and the propellant tank was suspended from load cells to obtain a high-accuracy reference standard for the gauging measurements. Liquid quantity gauging errors of less than 2 percent of the tank volume were obtained when quasi-steady-state conditions existed in the propellant and helium supply tanks. Accurate gauging required careful attention to, and corrections for, second-order effects of helium solubility in the liquid propellant plus differences in the propellant/helium composition and temperature in the various plumbing lines attached to the tanks. On the basis of results from a helium solubility test, a model was developed to predict the amount of helium dissolved in the liquid as a function of cumulative pump operation time. Use of this model allowed correction of the basic PVT gauging calculations and attainment of the reported gauging accuracy. This helium solubility model is system specific, but it may be adaptable to other hardware systems.
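
    The gauging principle can be written down compactly. The sketch below is a simplified stand-in for the report's procedure, assuming an isothermal ideal-gas helium ullage (propellant vapor neglected) and a linear model for helium dissolved versus cumulative pump run time; the constant k_sol and all example numbers are hypothetical.

    ```python
    R = 8.314462618  # J/(mol K)

    def liquid_volume(P_ullage, T_tank, n_he_supplied, V_tank,
                      pump_time_s, k_sol=1.0e-7):
        """PVT gauging sketch: liquid volume from the helium ullage state.

        P_ullage      : helium partial pressure in the ullage (Pa); the
                        propellant vapor contribution is neglected here
        n_he_supplied : moles of helium delivered from the supply bottle
        k_sol         : hypothetical solubility rate, mol of helium
                        dissolved per second of cumulative pump operation
        """
        n_dissolved = k_sol * pump_time_s            # solubility correction
        n_ullage = n_he_supplied - n_dissolved       # helium left in the gas space
        V_ullage = n_ullage * R * T_tank / P_ullage  # ideal gas law
        return V_tank - V_ullage

    # e.g. a 1 m^3 tank at 90 K with 200 kPa helium partial pressure:
    print(liquid_volume(2.0e5, 90.0, 120.0, 1.0, 3600.0))
    ```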

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endres, Michael G.; Shindler, Andrea; Tiburzi, Brian C.

    The commonly adopted approach for including electromagnetic interactions in lattice QCD simulations relies on using finite volume as the infrared regularization for QED. The long-range nature of the electromagnetic interaction, however, implies that physical quantities are susceptible to power-law finite volume corrections, which must be removed by performing costly simulations at multiple lattice volumes, followed by an extrapolation to the infinite volume limit. In this work, we introduce a photon mass as an alternative means for gaining control over infrared effects associated with electromagnetic interactions. We present findings for hadron mass shifts due to electromagnetic interactions (i.e., for the proton, neutron, charged and neutral kaon) and corresponding mass splittings, and compare the results with those obtained from conventional QCD+QED calculations. Results are reported for numerical studies of three flavor electroquenched QCD using ensembles corresponding to 800 MeV pions, ensuring that the only appreciable volume corrections arise from QED effects. The calculations are performed with three lattice volumes with spatial extents ranging from 3.4 - 6.7 fm. As a result, we find that for equal computing time (not including the generation of the lattice configurations), the electromagnetic mass shifts can be extracted from computations on a single (our smallest) lattice volume with comparable or better precision than the conventional approach.

  5. Massive photons: An infrared regularization scheme for lattice QCD + QED

    DOE PAGES

    Endres, Michael G.; Shindler, Andrea; Tiburzi, Brian C.; ...

    2016-08-10

    The commonly adopted approach for including electromagnetic interactions in lattice QCD simulations relies on using finite volume as the infrared regularization for QED. The long-range nature of the electromagnetic interaction, however, implies that physical quantities are susceptible to power-law finite volume corrections, which must be removed by performing costly simulations at multiple lattice volumes, followed by an extrapolation to the infinite volume limit. In this work, we introduce a photon mass as an alternative means for gaining control over infrared effects associated with electromagnetic interactions. We present findings for hadron mass shifts due to electromagnetic interactions (i.e., for the proton, neutron, charged and neutral kaon) and corresponding mass splittings, and compare the results with those obtained from conventional QCD+QED calculations. Results are reported for numerical studies of three flavor electroquenched QCD using ensembles corresponding to 800 MeV pions, ensuring that the only appreciable volume corrections arise from QED effects. The calculations are performed with three lattice volumes with spatial extents ranging from 3.4 - 6.7 fm. As a result, we find that for equal computing time (not including the generation of the lattice configurations), the electromagnetic mass shifts can be extracted from computations on a single (our smallest) lattice volume with comparable or better precision than the conventional approach.
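
    For contrast, the conventional route the paper seeks to shortcut looks roughly like the sketch below: measure the mass shift on several volumes and extrapolate in inverse lattice extent. The shift values are invented for illustration, and the power-law ansatz is only schematic.

    ```python
    import numpy as np

    L  = np.array([3.4, 4.8, 6.7])       # spatial extents (fm), as in the study
    dm = np.array([2.62, 2.33, 2.18])    # hypothetical mass shifts (MeV)

    # Schematic power-law ansatz: dm(L) = dm_inf + a/L + b/L**2
    A = np.column_stack([np.ones_like(L), 1.0 / L, 1.0 / L**2])
    coef, *_ = np.linalg.lstsq(A, dm, rcond=None)
    print(f"infinite-volume estimate: {coef[0]:.2f} MeV")
    ```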

  6. How often should we expect to be wrong? Statistical power, P values, and the expected prevalence of false discoveries.

    PubMed

    Marino, Michael J

    2018-05-01

    There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
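
    The arithmetic behind this claim is easy to reproduce. The sketch below simulates the kind of idealized experiment described: a fraction of tested hypotheses are truly non-null, each is tested at α = 0.05, and the false discovery rate falls as sample size (hence power) grows. The effect size, prior, and simulation counts are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def false_discovery_rate(n, effect=0.5, prior_true=0.5,
                             alpha=0.05, n_sim=4000):
        """Fraction of 'significant' two-sample t-tests that are false positives."""
        false_pos = true_pos = 0
        for _ in range(n_sim):
            real = rng.random() < prior_true          # is there a true effect?
            a = rng.normal(0.0, 1.0, n)
            b = rng.normal(effect if real else 0.0, 1.0, n)
            if stats.ttest_ind(a, b).pvalue < alpha:
                true_pos += bool(real)
                false_pos += not real
        return false_pos / max(false_pos + true_pos, 1)

    for n in (10, 30, 100):       # larger n -> higher power -> lower FDR
        print(n, round(false_discovery_rate(n), 3))
    ```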

  7. Current status of validating operational model forecasts at the DWD site Lindenberg

    NASA Astrophysics Data System (ADS)

    Beyrich, F.; Heret, C.; Vogel, G.

    2009-09-01

    Based on long experience in the measurement of atmospheric boundary layer parameters, the Meteorological Observatory Lindenberg / Richard-Aßmann-Observatory is well qualified to validate operational NWP results for this location. The validation activities cover a large range of time periods, from single days or months up to several years, and include many more quantities than are generally used in areal verification techniques. They mainly focus on land surface and boundary layer processes, which play an important role in the atmospheric forcing from the surface. Versatility and continuity of the database enable a comprehensive evaluation of the model behaviour under different meteorological conditions in order to estimate the accuracy of the physical parameterisations and to detect possible deficiencies in the predicted processes. The measurements from the boundary layer field site Falkenberg serve as reference data for various types of validation studies: 1. The operational boundary-layer measurements are used to identify and to document weather situations with large forecast errors, which can then be analysed in more detail. Results from a case study will be presented where model deficiencies in the correct simulation of the diurnal evolution of near-surface temperature under winter conditions over a closed snow cover were diagnosed. 2. Through the synopsis of boundary layer quantities based on monthly averaged diurnal cycles, systematic model deficiencies can be detected more clearly. Some distinctive features found in the annual cycle (e.g. near-surface temperatures, turbulent heat fluxes and soil moisture) will be outlined. Further aspects are their different appearance in the COSMO-EU and COSMO-DE models as well as the effects of starting time (00 or 12 UTC) on the prediction accuracy. 3. The evaluation of the model behaviour over several years provides additional insight into the impact of changes in the physical parameterisations, data assimilation or numerics on the meteorological quantities. The temporal development of the error characteristics of some near-surface weather parameters (temperature, dewpoint temperature, wind velocity) and of the energy fluxes at the surface will be discussed.

  8. Correcting For Seed-Particle Lag In LV Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.

    1994-01-01

    Two experiments conducted to evaluate effects of sizes of seed particles on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods used to evaluate errors. First experiment focused on measurement of decelerating stagnation streamline of low-speed flow around circular cylinder with two-dimensional afterbody. Second performed in transonic flow and involved measurement of decelerating stagnation streamline of hemisphere with cylindrical afterbody. Concluded, mean-quantity LV measurements subject to large errors directly attributable to sizes of particles. Predictions of particle-response theory showed good agreement with experimental results, indicating velocity-error-correction technique used in study viable for increasing accuracy of laser velocimetry measurements. Technique simple and useful in any research facility in which flow velocities measured.

  9. Quintessence background for 5D Einstein-Gauss-Bonnet black holes

    NASA Astrophysics Data System (ADS)

    Ghosh, Sushant G.; Amir, Muhammed; Maharaj, Sunil D.

    2017-08-01

    Lovelock theory is an extension of general relativity to higher dimensions; its first- and second-order terms correspond to general relativity and Einstein-Gauss-Bonnet gravity, respectively. We obtain a 5D black hole solution in Einstein-Gauss-Bonnet gravity surrounded by quintessence matter, and we also analyze its thermodynamical properties. Owing to the quintessence correction, the thermodynamic quantities of the black hole are also modified, except for the black hole entropy, and a phase transition is achievable. The phase transition for the thermodynamic stability is characterized by a discontinuity in the specific heat at r = r_C, with the stable (unstable) branch for r < r_C (r > r_C).

  10. Conceptual Model of Quantities, Units, Dimensions, and Values

    NASA Technical Reports Server (NTRS)

    Rouquette, Nicolas F.; DeKoenig, Hans-Peter; Burkhart, Roger; Espinoza, Huascar

    2011-01-01

    JPL collaborated with experts from industry and other organizations to develop a conceptual model of quantities, units, dimensions, and values based on the current work of the ISO 80000 committee revising the International System of Units & Quantities based on the International Vocabulary of Metrology (VIM). By providing support for ISO 80000 in SysML via the International Vocabulary of Metrology (VIM), this conceptual model provides, for the first time, a standard-based approach for addressing issues of unit coherence and dimensional analysis into the practice of systems engineering with SysML-based tools. This conceptual model provides support for two kinds of analyses specified in the International Vocabulary of Metrology (VIM): coherence of units as well as of systems of units, and dimension analysis of systems of quantities. To provide a solid and stable foundation, the model for defining quantities, units, dimensions, and values in SysML is explicitly based on the concepts defined in VIM. At the same time, the model library is designed in such a way that extensions to the ISQ (International System of Quantities) and SI Units (Système International d'Unités) can be represented, as well as any alternative systems of quantities and units. The model library can be used to support SysML user models in various ways. A simple approach is to define and document libraries of reusable systems of units and quantities for reuse across multiple projects, and to link units and quantity kinds from these libraries to Unit and QuantityKind stereotypes defined in SysML user models.
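
    A minimal sketch of what such a model enables in practice: representing quantity values with an explicit dimension vector over the ISQ base quantities, so that unit conversion is automatic and incoherent additions fail. The class and unit names below are illustrative, not identifiers from the SysML library itself.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Unit:
        symbol: str
        dim: tuple            # exponents over (L, M, T, I, Theta, N, J)
        factor: float = 1.0   # scale to the coherent SI unit

    METRE     = Unit("m",  (1, 0, 0, 0, 0, 0, 0))
    SECOND    = Unit("s",  (0, 0, 1, 0, 0, 0, 0))
    KILOMETRE = Unit("km", (1, 0, 0, 0, 0, 0, 0), factor=1000.0)

    @dataclass(frozen=True)
    class QuantityValue:
        value: float
        unit: Unit

        def __add__(self, other):
            # Dimensional analysis: refuse to add quantities of unlike kind.
            if self.unit.dim != other.unit.dim:
                raise TypeError(f"incoherent units: "
                                f"{self.unit.symbol} + {other.unit.symbol}")
            converted = other.value * other.unit.factor / self.unit.factor
            return QuantityValue(self.value + converted, self.unit)

    print(QuantityValue(500.0, METRE) + QuantityValue(2.0, KILOMETRE))  # 2500 m
    # QuantityValue(1.0, METRE) + QuantityValue(1.0, SECOND)  -> TypeError
    ```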

  11. 77 FR 39774 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ... orders; Chapter VI, Section 1(e)(3) to provide that Minimum Quantity Orders are treated as having a time... Intermarket Sweep Orders (``ISOs'') may have any time-in-force designation except WAIT; Chapter VI, Section 2... Chapter VI, Section 1(e)(3), to provide that Minimum Quantity Orders are treated as having a time-in...

  12. Noisy coupled logistic maps in the vicinity of chaos threshold.

    PubMed

    Tirnakli, Ugur; Tsallis, Constantino

    2016-04-01

    We focus on a linear chain of N first-neighbor-coupled logistic maps in the vicinity of their edge of chaos in the presence of a common noise. This model, characterised by the coupling strength ϵ and the noise width σmax, was recently introduced by Pluchino et al. [Phys. Rev. E 87, 022910 (2013)]. They detected, for the time-averaged returns with characteristic return time τ, possible connections with q-Gaussians, the distributions which optimise, under appropriate constraints, the nonadditive entropy Sq, basis of nonextensive statistical mechanics. Here, we take a closer look at this model, and numerically obtain probability distributions which exhibit a slight asymmetry for some parameter values, at variance with simple q-Gaussians. Nevertheless, along many decades, the fitting with q-Gaussians turns out to be numerically very satisfactory for wide regions of the parameter values, and we illustrate how the index q evolves with (N, τ, ϵ, σmax). It is nevertheless instructive to see how careful one must be in such numerical analyses. The overall work shows that physical and/or biological systems that are correctly mimicked by this model are thermostatistically related to nonextensive statistical mechanics when time-averaged relevant quantities are studied.
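
    A rough numerical sketch of the system class studied here: N diffusively coupled logistic maps near the chaos threshold, driven by a noise term common to all sites, with returns taken over a window τ. The coupling scheme, parameter values, and return definition below are plausible stand-ins, not necessarily those of Pluchino et al.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, a = 128, 1.4011552          # a near the logistic edge of chaos, f(x) = 1 - a*x**2
    eps, sigma_max = 0.8, 0.002    # coupling strength and noise width
    T, tau = 100_000, 32           # iterations and characteristic return time

    f = lambda x: 1.0 - a * x * x
    x = rng.uniform(-1.0, 1.0, N)
    series = np.empty(T)
    for t in range(T):
        fx = f(x)
        # Nearest-neighbour (ring) coupling of the iterated maps
        x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        x = x + sigma_max * (2.0 * rng.random() - 1.0)  # common noise, same for all sites
        series[t] = x.mean()

    returns = series[tau:] - series[:-tau]                # returns over window tau
    returns = (returns - returns.mean()) / returns.std()  # then histogram / q-Gaussian fit
    ```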

  13. Noisy coupled logistic maps in the vicinity of chaos threshold

    NASA Astrophysics Data System (ADS)

    Tirnakli, Ugur; Tsallis, Constantino

    2016-04-01

    We focus on a linear chain of N first-neighbor-coupled logistic maps in the vicinity of their edge of chaos in the presence of a common noise. This model, characterised by the coupling strength ɛ and the noise width σmax, was recently introduced by Pluchino et al. [Phys. Rev. E 87, 022910 (2013)]. They detected, for the time-averaged returns with characteristic return time τ, possible connections with q-Gaussians, the distributions which optimise, under appropriate constraints, the nonadditive entropy Sq, basis of nonextensive statistical mechanics. Here, we take a closer look at this model, and numerically obtain probability distributions which exhibit a slight asymmetry for some parameter values, at variance with simple q-Gaussians. Nevertheless, along many decades, the fitting with q-Gaussians turns out to be numerically very satisfactory for wide regions of the parameter values, and we illustrate how the index q evolves with (N, τ, ɛ, σmax). It is nevertheless instructive to see how careful one must be in such numerical analyses. The overall work shows that physical and/or biological systems that are correctly mimicked by this model are thermostatistically related to nonextensive statistical mechanics when time-averaged relevant quantities are studied.

  14. Vapor compression distillation module

    NASA Technical Reports Server (NTRS)

    Nuccio, P. P.

    1975-01-01

    A Vapor Compression Distillation (VCD) module was developed and evaluated as part of a Space Station Prototype (SSP) environmental control and life support system. The VCD module includes the waste tankage, pumps, post-treatment cells, automatic controls and fault detection instrumentation. Development problems were encountered with two components: the liquid pumps, and the waste tank and quantity gauge. Peristaltic pumps were selected instead of gear pumps, and a sub-program of materials and design optimization was undertaken, leading to a projected life greater than 10,000 hours of continuous operation. A bladder tank was designed and built to contain the waste liquids and deliver them to the processor. A detrimental pressure pattern imposed upon the bladder by a force-operated quantity gauge was corrected by rearranging the force application, and design goals were achieved. System testing has demonstrated that all performance goals have been fulfilled.

  15. Fluctuations in Student Understanding of Newton's 3rd Law

    NASA Astrophysics Data System (ADS)

    Clark, Jessica W.; Sayre, Eleanor C.; Franklin, Scott V.

    2010-10-01

    We present data from a between-student study on student responses to questions on Newton's Third Law given throughout the academic year. The study, conducted at Rochester Institute of Technology, involved students from the first and third quarters of a three-quarter sequence. Construction of a response curve reveals subtle dynamics in student learning not captured by simple pre/post testing. We find a significant positive effect from direct instruction, peaking at the end of instruction on forces, that diminishes by the end of the quarter. Two quarters later, in physics III, a significant dip in correct response occurs when instruction changes from the vector quantities of electric forces and fields to the scalar quantity of electric potential. Student response rebounds to its initial values, however, once instruction returns to the vector-based topics involving magnetic fields.

  16. Alternative method of removing otoliths from sturgeon

    USGS Publications Warehouse

    Chalupnicki, Marc A.; Dittman, Dawn E.

    2016-01-01

    Extracting the otoliths (ear bones) from fish that have very thick skulls can be difficult and very time consuming. The common practice of making a transverse vertical incision on the top of the skull with a hand or electrical saw may damage the otolith if not performed correctly. Sturgeons (Acipenseridae) are one family in particular that have a very large and thick skull. A new laboratory method entering the brain cavity from the ventral side of the fish to expose the otoliths was easier than other otolith extraction methods found in the literature. Methods reviewed in the literature are designed for the field and are more efficient at processing large quantities of fish quickly. However, this new technique was designed to be more suited for a laboratory setting when time is not pressing and successful extraction from each specimen is critical. The success of finding and removing otoliths using this technique is very high and does not compromise the structure in any manner. This alternative technique is applicable to other similar fish species for extracting the otoliths.

  17. Comparison of infrared and Raman wave numbers of neat molecular liquids: Which is the correct infrared wave number to use?

    NASA Astrophysics Data System (ADS)

    Bertie, John E.; Michaelian, Kirk H.

    1998-10-01

    This paper is concerned with the peak wave number of very strong absorption bands in infrared spectra of molecular liquids. It is well known that the peak wave number can differ depending on how the spectrum is measured. It can be different, for example, in a transmission spectrum and in an attenuated total reflection spectrum. This difference can be removed by transforming both spectra to the real, n, and imaginary, k, refractive index spectra, because both spectra yield the same k spectrum. However, the n and k spectra can be transformed to spectra of any other intensity quantity, and the peak wave numbers of strong bands may differ by up to 6 cm-1 in the spectra of the different quantities. The question which then arises is "which infrared peak wave number is the correct one to use in the comparison of infrared wave numbers of molecular liquids with wave numbers in other spectra?" For example, infrared wave numbers in the gas and liquid phase are compared to observe differences between the two phases. Of equal importance, the wave numbers of peaks in infrared and Raman spectra of liquids are compared to determine whether the infrared-active and Raman-active vibrations coincide, and thus are likely to be the same, or are distinct. This question is explored in this paper by presenting the experimental facts for different intensity quantities. The intensity quantities described are macroscopic properties of the liquid, specifically the absorbance, attenuated total reflectance, imaginary refractive index, k, imaginary dielectric constant, ɛ″, and molar absorption coefficient, Em, and one microscopic property of a molecule in the liquid, specifically the imaginary molar polarizability, αm″, which is calculated under the approximation of the Lorentz local field. The main experimental observations are presented for the strongest band in the infrared spectrum of each of the liquids methanol, chlorobenzene, dichloromethane, and acetone. Particular care was paid to wave number calibration of both infrared and Raman spectra. Theoretical arguments indicate that the peak wave number in the αm″ spectrum is the correct one to use, because it is the only one that reflects the properties of molecules in their local environment in the liquid free from predictable long-range resonant dielectric effects. However, it is found that the comparison with Raman wave numbers is confused when the anisotropic local intermolecular forces and configuration in the liquid are significant. In these cases, the well known noncoincidence of the isotropic and anisotropic Raman scattering is observed, and the same factors lead to noncoincidence of the infrared and Raman bands.
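
    For reference, the standard Lorentz local-field (Lorentz-Lorenz) relation linking the macroscopic optical constants to the molecular polarizability can be written as below (CGS convention); this is a sketch of the textbook relation, not necessarily the paper's exact working equations.

    ```latex
    \frac{\hat{\varepsilon} - 1}{\hat{\varepsilon} + 2}
      = \frac{4\pi}{3}\,\mathcal{N}\,\hat{\alpha},
    \qquad
    \hat{\varepsilon} = (n + \mathrm{i}k)^2,
    \qquad
    \alpha_m'' \propto \operatorname{Im}\hat{\alpha},
    ```

    where 𝒩 is the molecular number density. Because the mapping from ε̂ to α̂ is nonlinear, the peak of the αm″ spectrum can shift by several wave numbers relative to the peak of the k spectrum, which is the effect at issue here.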

  18. Thermodynamic-ensemble independence of solvation free energy.

    PubMed

    Chong, Song-Ho; Ham, Sihyun

    2015-02-10

    Solvation free energy is the fundamental thermodynamic quantity in solution chemistry. Recently, it has been suggested that the partial molar volume correction is necessary to convert the solvation free energy determined in different thermodynamic ensembles. Here, we demonstrate ensemble-independence of the solvation free energy on general thermodynamic grounds. Theoretical estimates of the solvation free energy based on the canonical or grand-canonical ensemble are pertinent to experiments carried out under constant pressure without any conversion.

  19. Experiments in Developing Wakes

    DTIC Science & Technology

    2014-04-27

    uncorrected and the right is corrected. The top row plots Mrms and the bottom row Wrms. All axis scales are the same. The measurements in a...water (dashed). Timesteps are M = 0, 0.52, 1.03, 1.89. The top row plots Mrms and the bottom row Wrms. The uncertainty in velocity profile...reconstructions, which ought to be uniform vertically, is about 10% of the peak disturbance quantity. The profile from pure water does not differ

  20. Probing Supersymmetry with Neutral Current Scattering Experiments

    NASA Astrophysics Data System (ADS)

    Kurylov, A.; Ramsey-Musolf, M. J.; Su, S.

    2004-02-01

    We compute the supersymmetric contributions to the weak charges of the electron (Q_W^e) and the proton (Q_W^p) in the framework of the Minimal Supersymmetric Standard Model. We also consider the ratios of neutral current to charged current cross sections, R_ν and R_ν̄, in ν(ν̄)-nucleus deep inelastic scattering, and compare the supersymmetric corrections with the deviations of these quantities from the Standard Model predictions implied by the recent NuTeV measurement.

  1. Dynamical generation of a repulsive vector contribution to the quark pressure

    NASA Astrophysics Data System (ADS)

    Restrepo, Tulio E.; Macias, Juan Camilo; Pinto, Marcus Benghi; Ferrari, Gabriel N.

    2015-03-01

    Lattice QCD results for the coefficient c2 appearing in the Taylor expansion of the pressure show that this quantity increases with the temperature towards the Stefan-Boltzmann limit. On the other hand, model approximations predict that when a vector repulsion, parametrized by GV, is present, this coefficient reaches a maximum just after Tc and then deviates from the lattice predictions. Recently, this discrepancy has been used as a guide to constrain the (presently unknown) value of GV within the framework of effective models at large Nc (LN). In the present investigation we show that, due to finite-Nc effects, c2 may also develop a maximum even when GV = 0, since a repulsive vector term can be dynamically generated by exchange-type radiative corrections. Here we apply the optimized perturbation theory (OPT) method to the two-flavor Polyakov-Nambu-Jona-Lasinio model (at GV = 0) and compare the results with those furnished by lattice simulations and by the LN approximation at GV = 0 and also at GV ≠ 0. The OPT numerical results for c2 are impressively accurate for T ≲ 1.2 Tc but, as expected, they predict that this quantity develops a maximum at high T. After identifying the mathematical origin of this extremum, we argue that such discrepant behavior may naturally arise within this type of effective quark theory (at GV = 0) whenever the first 1/Nc corrections are taken into account. We then interpret this hypothesis as an indication that, beyond the large-Nc limit, the correct high-temperature (perturbative) behavior of c2 will be faithfully described by effective models only if they also mimic the asymptotic freedom phenomenon.

  2. Time Processing in Dyscalculia

    PubMed Central

    Cappelletti, Marinella; Freeman, Elliot D.; Butterworth, Brian L.

    2011-01-01

    To test whether atypical number development may affect other types of quantity processing, we investigated temporal discrimination in adults with developmental dyscalculia (DD). This also allowed us to test whether number and time may be sub-served by a common quantity system or decision mechanisms: if they do, both should be impaired in dyscalculia, but if number and time are distinct they should dissociate. Participants judged which of two successively presented horizontal lines was longer in duration, the first line being preceded by either a small or a large number prime (“1” or “9”) or by a neutral symbol (“#”), or in a third task participants decided which of two Arabic numbers (either “1,” “5,” “9”) lasted longer. Results showed that (i) DD’s temporal discriminability was normal as long as numbers were not part of the experimental design, even as task-irrelevant stimuli; however (ii) task-irrelevant numbers dramatically disrupted DD’s temporal discriminability the more their salience increased, though the actual magnitude of the numbers had no effect; in contrast (iii) controls’ time perception was robust to the presence of numbers but modulated by numerical quantity: therefore small number primes or numerical stimuli seemed to make durations appear shorter than veridical, but longer for larger numerical prime or numerical stimuli. This study is the first to show spared temporal discrimination – a dimension of continuous quantity – in a population with a congenital number impairment. Our data reinforce the idea of a partially shared quantity system across numerical and temporal dimensions, which supports both dissociations and interactions among dimensions; however, they suggest that impaired number in DD is unlikely to originate from systems initially dedicated to continuous quantity processing like time. PMID:22194731

  3. Time processing in dyscalculia.

    PubMed

    Cappelletti, Marinella; Freeman, Elliot D; Butterworth, Brian L

    2011-01-01

    To test whether atypical number development may affect other types of quantity processing, we investigated temporal discrimination in adults with developmental dyscalculia (DD). This also allowed us to test whether number and time may be sub-served by a common quantity system or decision mechanisms: if they do, both should be impaired in dyscalculia, but if number and time are distinct they should dissociate. Participants judged which of two successively presented horizontal lines was longer in duration, the first line being preceded by either a small or a large number prime ("1" or "9") or by a neutral symbol ("#"), or in a third task participants decided which of two Arabic numbers (either "1," "5," "9") lasted longer. Results showed that (i) DD's temporal discriminability was normal as long as numbers were not part of the experimental design, even as task-irrelevant stimuli; however (ii) task-irrelevant numbers dramatically disrupted DD's temporal discriminability the more their salience increased, though the actual magnitude of the numbers had no effect; in contrast (iii) controls' time perception was robust to the presence of numbers but modulated by numerical quantity: therefore small number primes or numerical stimuli seemed to make durations appear shorter than veridical, but longer for larger numerical prime or numerical stimuli. This study is the first to show spared temporal discrimination - a dimension of continuous quantity - in a population with a congenital number impairment. Our data reinforce the idea of a partially shared quantity system across numerical and temporal dimensions, which supports both dissociations and interactions among dimensions; however, they suggest that impaired number in DD is unlikely to originate from systems initially dedicated to continuous quantity processing like time.

  4. FY 2016 Status Report: CIRFT Testing Data Analyses and Updated Curvature Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Wang, Hong

    This report provides a detailed description of FY15 test result corrections/analysis based on the FY16 Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) test program methodology update used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal transportation conditions. The CIRFT consists of a U-frame testing setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages to a universal testing machine. The curvature of rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are used and clamped to the side connecting plates of the U-frame to capture the deformation of the rod. The contact-based measurement, or three-LVDT-based curvature measurement system, on SNF rods has been proven to be quite reliable in CIRFT testing. However, how the LVDT head contacts the SNF rod may have a significant effect on the curvature measurement, depending on the magnitude and direction of rod curvature. It has been demonstrated that the contact/curvature issues can be corrected by using a correction on the sensor spacing. The sensor spacing defines the separation of the three LVDT probes and is a critical quantity in calculating the rod curvature once the deflections are obtained. The sensor spacing correction can be determined by using chisel-type probes. The method has been critically examined this year and has been shown to be difficult to implement in a hot cell environment, and thus cannot be implemented effectively. A correction based on the proposed equivalent gauge length has the required flexibility and accuracy and can be appropriately used as a correction factor. The correction method based on the equivalent gauge length has been successfully demonstrated in CIRFT data analysis for the dynamic tests conducted on Limerick (LMK) (17 tests), North Anna (NA) (6 tests), and Catawba mixed oxide (MOX) (10 tests) SNF samples. These CIRFT tests were completed in FY14 and FY15. Specifically, the data sets obtained from measurement and monitoring were processed and analyzed. The fatigue life of rods has been characterized in terms of moment, curvature, and equivalent stress and strain.

  5. Refinement of the timing-based estimator of pulsar magnetic fields

    NASA Astrophysics Data System (ADS)

    Biryukov, Anton; Astashenok, Artyom; Beskin, Gregory

    2017-04-01

    Numerical simulations of realistic non-vacuum magnetospheres of isolated neutron stars have shown that pulsar spin-down luminosities depend weakly on the magnetic obliquity α. In particular, L ∝ B²(1 + sin²α), where B is the magnetic field strength at the star surface. Being the most accurate expression to date, this result provides the opportunity to estimate B for a given radio pulsar with quite a high accuracy. In the current work, we present a refinement of the classical 'magneto-dipolar' formula for pulsar magnetic fields, B_md = (3.2×10¹⁹ G) √(PṖ), where P is the neutron star spin period. The new, robust timing-based estimator is introduced as log B = log B_md + ΔB(M, α), where the correction ΔB depends on the equation of state (EOS) of dense matter, the individual pulsar obliquity α and the mass M. Adopting state-of-the-art statistics for M and α, we calculate the distributions of ΔB for a representative subset of 22 EOSs that do not contradict observations. It has been found that ΔB is distributed nearly normally, with the average in the range -0.5 to -0.25 dex and standard deviation σ[ΔB] ≈ 0.06 to 0.09 dex, depending on the adopted EOS. The latter quantity represents a formal uncertainty of the corrected estimation of log B because ΔB is weakly correlated with log B_md. At the same time, if it is assumed that every considered EOS has the same chance of occurring in nature, then another, more generalized, estimator B* ≈ 3B_md/7 can be introduced, providing an unbiased value of the pulsar surface magnetic field with ~30 per cent uncertainty at 68 per cent confidence. Finally, we discuss the possible impact of pulsar timing irregularities on the timing-based estimation of B and review the astrophysical applications of the obtained results.
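
    In code, the refined estimator amounts to a one-line rescaling of the classical formula. The sketch below uses the generalized correction B* ≈ 3B_md/7 quoted in the abstract; the Crab-like example values are illustrative.

    ```python
    import numpy as np

    def b_md(P, Pdot):
        """Classical 'magneto-dipolar' field estimate (gauss)."""
        return 3.2e19 * np.sqrt(P * Pdot)

    def b_star(P, Pdot):
        """Generalized corrected estimator B* ~ (3/7) B_md, quoted in the
        abstract with ~30% uncertainty at 68% confidence."""
        return (3.0 / 7.0) * b_md(P, Pdot)

    # Illustrative Crab-like values: P = 0.033 s, Pdot = 4.2e-13 s/s
    print(f"B_md = {b_md(0.033, 4.2e-13):.2e} G")
    print(f"B*   = {b_star(0.033, 4.2e-13):.2e} G")
    ```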

  6. An Experimental Evaluation of Blockage Corrections for Current Turbines

    NASA Astrophysics Data System (ADS)

    Ross, Hannah; Polagye, Brian

    2017-11-01

    Flow confinement has been shown to significantly alter the performance of turbines that extract power from water currents. These performance effects are related to the degree of constraint, defined by the ratio of turbine projected area to channel cross-sectional area. This quantity is referred to as the blockage ratio. Because it is often desirable to adjust experimental observations in water channels to unconfined conditions, analytical corrections for both wind and current turbines have been derived. These are generally based on linear momentum actuator disk theory but have been applied to turbines without experimental validation. This work tests multiple blockage corrections on performance and thrust data from a cross-flow turbine and porous plates (experimental analogues to actuator disks) collected in laboratory flumes at blockage ratios ranging between 10 and 35%. To isolate the effects of blockage, the Reynolds number, Froude number, and submergence depth were held constant while the channel width was varied. Corrected performance data are compared to performance in a towing tank at a blockage ratio of less than 5%. In addition to examining the accuracy of each correction, underlying assumptions are assessed to determine why some corrections perform better than others. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082 and the Naval Facilities Engineering Command (NAVFAC).
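
    As one concrete example of the family of analytical corrections being tested, the sketch below applies a simple Pope & Harper-style wind-tunnel blockage correction with ε = B/4. This is shown as an assumption-laden illustration of the general shape of such corrections, not as the study's preferred actuator-disk form.

    ```python
    def blockage_correct(Cp, Ct, U, blockage):
        """Pope & Harper-style correction to unconfined conditions.

        blockage : turbine projected area / channel cross-sectional area
        """
        eps = 0.25 * blockage            # total blockage factor (epsilon = B/4)
        U_free = U * (1.0 + eps)         # equivalent unconfined inflow speed
        Cp_free = Cp / (1.0 + eps) ** 3  # power coefficient rescales with U^3
        Ct_free = Ct / (1.0 + eps) ** 2  # thrust coefficient rescales with U^2
        return Cp_free, Ct_free, U_free

    # e.g. performance measured at 35% blockage, the study's upper bound:
    print(blockage_correct(Cp=0.45, Ct=0.85, U=1.0, blockage=0.35))
    ```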

  7. The Effect of Amount and Timing of Human Resources Data on Subsystem Design.

    ERIC Educational Resources Information Center

    Meister, David; And Others

    Human resources data (HRD) inputs often fail to influence system development. This study investigated the possibility that these inputs are sometimes deficient in quantity or timing. In addition, the effect upon design of different personnel quality and quantity requirements was analyzed. Equipment and HRD inputs which were produced during actual…

  8. Modality-independent representations of small quantities based on brain activation patterns.

    PubMed

    Damarla, Saudamini Roy; Cherkassky, Vladimir L; Just, Marcel Adam

    2016-04-01

    Machine learning or MVPA (Multi Voxel Pattern Analysis) studies have shown that the neural representation of quantities of objects can be decoded from fMRI patterns, in cases where the quantities were visually displayed. Here we apply these techniques to investigate whether neural representations of quantities depicted in one modality (say, visual) can be decoded from brain activation patterns evoked by quantities depicted in the other modality (say, auditory). The main finding demonstrated, for the first time, that quantities of dots were decodable by a classifier that was trained on the neural patterns evoked by quantities of auditory tones, and vice-versa. The representations that were common across modalities were mainly right-lateralized in frontal and parietal regions. A second finding was that the neural patterns in parietal cortex that represent quantities were common across participants. These findings demonstrate a common neuronal foundation for the representation of quantities across sensory modalities and participants and provide insight into the role of parietal cortex in the representation of quantity information. © 2016 Wiley Periodicals, Inc.
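
    Cross-modal decoding of this kind follows a simple train-on-one-modality, test-on-the-other recipe. The sketch below fakes voxel patterns with a shared quantity signal plus modality-specific noise; all shapes, noise levels, and the synthetic data are stand-ins for real beta maps restricted to fronto-parietal voxels.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 60, 500
    y = rng.integers(1, 4, n_trials)                 # quantities 1..3 per trial

    # Shared (modality-independent) voxel pattern for each quantity, plus
    # independent noise for the auditory and visual presentations.
    quantity_patterns = rng.normal(size=(3, n_voxels))
    signal = np.eye(4)[y][:, 1:] @ quantity_patterns
    X_aud = signal + rng.normal(scale=5.0, size=(n_trials, n_voxels))
    X_vis = signal + rng.normal(scale=5.0, size=(n_trials, n_voxels))

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_aud, y)                                # train on auditory patterns
    print("visual-from-auditory accuracy:", clf.score(X_vis, y))
    ```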

  9. Causal instrument corrections for short-period and broadband seismometers

    USGS Publications Warehouse

    Haney, Matthew M.; Power, John; West, Michael; Michaels, Paul

    2012-01-01

    Of all the filters applied to recordings of seismic waves, which include source, path, and site effects, the one we know most precisely is the instrument filter. Therefore, it behooves seismologists to accurately remove the effect of the instrument from raw seismograms. Applying instrument corrections allows analysis of the seismogram in terms of physical units (e.g., displacement or particle velocity of the Earth’s surface) instead of the output of the instrument (e.g., digital counts). The instrument correction can be considered the most fundamental processing step in seismology since it relates the raw data to an observable quantity of interest to seismologists. Complicating matters is the fact that, in practice, the term “instrument correction” refers to more than simply the seismometer. The instrument correction compensates for the complete recording system including the seismometer, telemetry, digitizer, and any anti‐alias filters. Knowledge of all these components is necessary to perform an accurate instrument correction. The subject of instrument corrections has been covered extensively in the literature (Seidl, 1980; Scherbaum, 1996). However, the prospect of applying instrument corrections still evokes angst among many seismologists—the authors of this paper included. There may be several reasons for this. For instance, the seminal paper by Seidl (1980) exists in a journal that is not currently available in electronic format and cannot be accessed online. Also, a standard method for applying instrument corrections involves the programs TRANSFER and EVALRESP in the Seismic Analysis Code (SAC) package (Goldstein et al., 2003). The exact mathematical methods implemented in these codes are not thoroughly described in the documentation accompanying SAC.
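
    In day-to-day practice, the complete recording chain (sensor, digitizer, anti-alias filters) is usually deconvolved from station metadata in a single call. A minimal sketch with ObsPy, assuming the response is available as StationXML; the file names are hypothetical.

    ```python
    from obspy import read, read_inventory

    st = read("raw_record.mseed")                 # hypothetical waveform file
    inv = read_inventory("station_response.xml")  # hypothetical StationXML metadata

    # Deconvolve the complete instrument response to ground velocity (m/s),
    # with a band-pass pre-filter to stabilize the deconvolution.
    st.remove_response(inventory=inv, output="VEL",
                       pre_filt=(0.01, 0.02, 8.0, 10.0))
    st.plot()  # traces are now in physical units rather than digital counts
    ```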

  10. Error Correcting Optical Mapping Data.

    PubMed

    Mukherjee, Kingshuk; Washimkar, Darshan; Muggli, Martin D; Salmela, Leena; Boucher, Christina

    2018-05-26

    Optical mapping is a unique system that is capable of producing high-resolution, high-throughput genomic map data that gives information about the structure of a genome [21]. Recently it has been used for scaffolding contigs and assembly validation for large-scale sequencing projects, including the maize [32], goat [6], and amborella [4] genomes. However, a major impediment to the use of these data is the variety and quantity of errors in the raw optical mapping data, which are called Rmaps. The challenges associated with using Rmap data are analogous to dealing with insertions and deletions in the alignment of long reads. Moreover, they are arguably harder to tackle since the data is numerical and susceptible to inaccuracy. We develop cOMET to error-correct Rmap data, which to the best of our knowledge is the only optical mapping error correction method. Our experimental results demonstrate that cOMET has high precision and corrects 82.49% of insertion errors and 77.38% of deletion errors in Rmap data generated from the E. coli K-12 reference genome. Of the deletion errors corrected, 98.26% are true errors. Similarly, of the insertion errors corrected, 82.19% are true errors. It also successfully scales to large genomes, improving the quality of 78% and 99% of the Rmaps in the plum and goat genomes, respectively. Lastly, we show the utility of error correction by demonstrating how it improves the assembly of Rmap data. Error-corrected Rmap data results in an assembly that is more contiguous and covers a larger fraction of the genome.

  11. Genuine cosmic hair

    NASA Astrophysics Data System (ADS)

    Kastor, David; Ray, Sourya; Traschen, Jennie

    2017-02-01

    We show that asymptotically future de Sitter (AFdS) spacetimes carry ‘genuine’ cosmic hair; information that is analogous to the mass and angular momentum of asymptotically flat spacetimes and that characterizes how an AFdS spacetime approaches its asymptotic form. We define new ‘cosmological tension’ charges associated with future asymptotic spatial translation symmetries, which are analytic continuations of the ADM mass and tensions of asymptotically planar AdS spacetimes, and which measure the leading anisotropic corrections to the isotropic, exponential de Sitter expansion rate. A cosmological Smarr relation, holding for AFdS spacetimes having exact spatial translation symmetry, is derived. This formula relates cosmological tension, which is evaluated at future infinity, to properties of the cosmology at early times, together with a ‘cosmological volume’ contribution that is analogous to the thermodynamic volume of AdS black holes. Smarr relations for different spatial directions imply that the difference in expansion rates between two directions at late times is related in a simple way to their difference at early times. Hence information about the very early universe can be inferred from cosmic hair, which is potentially observable in a late time de Sitter phase. Cosmological tension charges and related quantities are evaluated for Kasner-de Sitter spacetimes, which serve as our primary examples.

  12. Vibrations of a Mindlin plate subjected to a pair of inertial loads moving in opposite directions

    NASA Astrophysics Data System (ADS)

    Dyniewicz, Bartłomiej; Pisarski, Dominik; Bajer, Czesław I.

    2017-01-01

    A Mindlin plate subjected to a pair of inertial loads traveling at a constant high speed in opposite directions along an arbitrary trajectory, straight or curved, is presented. The masses represent vehicles passing over a bridge or track plates. A numerical solution is obtained using the space-time finite element method, since it allows a clear and simple derivation of the characteristic matrices of the time-stepping procedure. The transition from one spatial finite element to another must be energetically consistent. In the case of a moving inertial load, classical time-integration schemes are methodologically difficult, since we must consider a Dirac delta term with a moving argument. The proposed numerical approach provides the correct definition of force equilibrium in the time interval, and it completes the numerical analysis of vibrations of structures subjected to inertial loads moving arbitrarily, with acceleration. The results obtained for a massless and an inertial load traveling over a Mindlin plate at various speeds are compared with benchmark results obtained for a Kirchhoff plate. The pair of inertial forces traveling in opposite directions causes displacements and stresses more than twice as large as the corresponding quantities observed for the passage of a single mass.

  13. Towards Holography via Quantum Source-Channel Codes.

    PubMed

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-14

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  14. Towards Holography via Quantum Source-Channel Codes

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  15. Relative quantity judgments in South American sea lions (Otaria flavescens).

    PubMed

    Abramson, José Z; Hernández-Lloreda, Victoria; Call, Josep; Colmenares, Fernando

    2011-09-01

    There is accumulating evidence that a variety of species possess quantitative abilities although their cognitive substrate is still unclear. This study is the first to investigate whether sea lions (Otaria flavescens), in the absence of training, are able to assess and select the larger of two sets of quantities. In Experiment 1, the two sets of quantities were presented simultaneously as whole sets, that is, the subjects could compare them directly. In Experiment 2, the two sets of quantities were presented item-by-item, and the totality of items was never visually available at the time of choice. For each type of presentation, we analysed the effect of the ratio between quantities, the difference between quantities and the total number of items presented. The results showed that (1) sea lions can make relative quantity judgments successfully and (2) there is a predominant influence of the ratio between quantities on the subjects' performance. The latter supports the idea that an analogue representational mechanism is responsible for sea lions' relative quantities judgments. These findings are consistent with previous reports of relative quantities judgments in other species such as monkeys and apes and suggest that sea lions might share a similar mechanism to compare and represent quantities.

  16. Estimation of Bid Curves in Power Exchanges using Time-varying Simultaneous-Equations Models

    NASA Astrophysics Data System (ADS)

    Ofuji, Kenta; Yamaguchi, Nobuyuki

    The simultaneous-equations model (SEM) is generally used in economics to estimate interdependent endogenous variables, such as price and quantity, in a competitive equilibrium market. In this paper, we have attempted to apply SEM to the JEPX (Japan Electric Power eXchange) spot market, a single-price auction market, using the publicly available data on selling and buying bid volumes, system price and traded quantity. The aim of this analysis is to understand the magnitude of the influences of the selling and buying bids on the auctioned price and quantity, rather than to forecast prices and quantity for risk management purposes. In contrast to Ordinary Least Squares (OLS) estimation, where the estimation results represent average values that are independent of time, we employ a time-varying simultaneous-equations model (TV-SEM) to capture structural changes inherent in those influences, using state space models with stepwise Kalman filter estimation. The results showed that the buying bid volume has the highest magnitude of influence among the factors considered, exhibiting time-dependent changes ranging as widely as about 240% of its average. The slope of the supply curve also varies across time, implying an elastic supply, while the demand curve remains comparatively inelastic and stable over time.
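
    As a hedged illustration of the state-space machinery this abstract invokes, the sketch below estimates a random-walk time-varying regression coefficient with a Kalman filter. The model, variances and data are invented for the example; the JEPX study's actual TV-SEM specification is not reproduced.

    import numpy as np

    # Minimal sketch: y_t = x_t' beta_t + eps_t with a random-walk state
    # beta_t = beta_{t-1} + eta_t, filtered by a standard Kalman recursion.
    def kalman_tv_regression(y, X, q=1e-3, r=1.0):
        """Filtered time-varying coefficients; y: (T,), X: (T, k)."""
        T, k = X.shape
        beta, P = np.zeros(k), np.eye(k)
        Q = q * np.eye(k)
        path = np.zeros((T, k))
        for t in range(T):
            P = P + Q                      # predict step (random-walk state)
            x = X[t]
            S = x @ P @ x + r              # innovation variance
            K = P @ x / S                  # Kalman gain
            beta = beta + K * (y[t] - x @ beta)
            P = P - np.outer(K, x @ P)     # covariance update
            path[t] = beta
        return path

    # Synthetic demo: a slope that drifts over time is tracked by the filter.
    rng = np.random.default_rng(0)
    T = 300
    X = np.column_stack([np.ones(T), rng.normal(size=T)])
    slope = 1.0 + 0.5 * np.sin(np.linspace(0.0, 4.0 * np.pi, T))
    y = 2.0 * X[:, 0] + slope * X[:, 1] + rng.normal(scale=0.3, size=T)
    print(kalman_tv_regression(y, X, q=1e-3, r=0.09)[-1])  # [intercept, slope]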

  17. An Investigation of the Correlation of Water-Ice and Dust Retrievals Via the MGS TES Data Set

    NASA Technical Reports Server (NTRS)

    Qu, Z.; Tamppari, L. K.; Smith, M. D.; Bass, Deborah; Hale, A. S.

    2004-01-01

    Water-ice in the Martian atmosphere was first identified in the Mariner 9 Infrared Interferometer Spectrometer (IRIS) spectra. The Viking Imaging Subsystem (VIS) instruments aboard the Viking orbiter also observed water-ice clouds and hazes in the Martian atmosphere. The MGS TES instrument is an infrared interferometer/spectrometer which covers the spectral range 6-50 micron with a selectable sampling resolution of either 5 or 10 per cm. Using the relatively independent and distinct spectral signatures for dust and water-ice, these two quantities have been retrieved simultaneously. Although the interrelations between the two quantities have been analyzed by Smith et al. and the retrievals are thought to be robust, understanding the impact of each quantity on the other during retrieval, as well as the impact of the surface on the retrievals, is important for correctly interpreting the science and therefore requires close examination. An understanding of the correlation or anti-correlation between dust and water-ice would aid in understanding the physical processes responsible for the transport of aerosols in the Martian atmosphere. In this presentation, we present an investigation of the correlation between water-ice and dust in the MGS TES data set.

  18. Quark–hadron phase structure, thermodynamics, and magnetization of QCD matter

    NASA Astrophysics Data System (ADS)

    Nasser Tawfik, Abdel; Magied Diab, Abdel; Hussein, M. T.

    2018-05-01

    The SU(3) Polyakov linear-sigma model (PLSM) is systematically implemented to characterize the quark-hadron phase structure and to determine various thermodynamic quantities and the magnetization of quantum chromodynamic (QCD) matter. Using the mean-field approximation, the dependence of the chiral order parameter on a finite magnetic field is also calculated. Over a wide range of temperatures and magnetic field strengths, various thermodynamic quantities including the trace anomaly, speed of sound squared, entropy density, and specific heat are presented, and some magnetic properties are described as well. Where available, these results are compared to recent lattice QCD calculations. The temperature dependence of these quantities confirms our previous finding that the transition temperature is reduced as the magnetic field strength increases, i.e. QCD matter is characterized by an inverse magnetic catalysis. Furthermore, the temperature dependence of the magnetization, showing that QCD matter has paramagnetic properties slightly below and far above the pseudo-critical temperature, is confirmed as well. The excellent agreement with recent lattice calculations suggests that our QCD-like approach (PLSM) possesses the correct degrees of freedom in both the hadronic and partonic phases and describes well the dynamics driving confined hadrons to the deconfined quark-gluon plasma.

  19. A discussion on velocity-speed and their instruction

    NASA Astrophysics Data System (ADS)

    Yıldız, Ali

    2016-04-01

    This study investigated how to teach velocity and speed effectively, and with which activities and examples. Although they are different quantities, the two are often used interchangeably. The study data and the quantities discussed were obtained from an examination of documents such as scientific articles and books about instruction, and were analyzed using a descriptive analysis approach. Expository instruction was supported with writing-to-learn activities, and an approach realized in seven stages was suggested so that velocity and speed could be understood at the anticipated level. At each stage, possible practices were explained; in particular, at the fifth stage, a detailed example on distance, displacement, velocity and speed promoted an easier and more correct understanding of the presented quantities and their critical properties, enabling students to associate them with their prior knowledge. Moreover, it was anticipated that this example could serve as a tool for permanent learning. At the last stage, it was considered that having students write a letter and a summary to young respondents could support the practices of the previous stages and help promote long-term retention of the learned concepts.
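
    The fifth-stage example described above contrasts distance with displacement; a minimal numeric version of that contrast (numbers invented here) is:

    # Invented numbers: a walker goes 100 m east in 50 s, then 100 m
    # west in 50 s. Speed averages the path length; velocity averages
    # the (signed) displacement.
    distance = 100 + 100              # path length, m
    displacement = 100 - 100          # net position change, m (east positive)
    elapsed = 50 + 50                 # s

    average_speed = distance / elapsed         # 2.0 m/s, a scalar
    average_velocity = displacement / elapsed  # 0.0 m/s, a 1-D vector
    print(average_speed, average_velocity)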

  20. Weather and Atmospheric Effects on the Measurement and Use of Electro-Optical Signature Data

    DTIC Science & Technology

    2017-02-01

    and the problem of correcting and applying measured data. It provides glossaries of electro-optical (EO) and weather terms related to EO/infrared (IR) test environments (parameters, quantity names, symbols).

  1. MSFC Skylab Apollo Telescope Mount experiment systems mission evaluation

    NASA Technical Reports Server (NTRS)

    White, A. F., Jr.

    1974-01-01

    A detailed evaluation is presented of the performance of the Skylab Apollo Telescope Mount experiments throughout the eight-and-one-half-month Skylab mission. Descriptions and the objectives of each instrument are included. The anomalies experienced, their causes, and the corrective actions taken are discussed. Conclusions, based on evaluation of the performance of each instrument, are presented. Examples of the scientific data obtained, as well as a discussion of the quality and quantity of the data, are presented.

  2. Archaeological Investigations in Upper McNary Reservoir: 1981-1982.

    DTIC Science & Technology

    1983-01-01

    Sokulk have been equated with the ethnographic Wanapum (Smith 1982). In 1811 David Thompson of the British North West Company and Alexander Ross traveled... subdivided into three sub-clusters. It is not correct to statistically equate this solution to that of 11 clusters (8 original and the 3 subdivisions of... accept the assumption that increases in the quantity of materials roughly equate with increased use of an area. The average number of items per 50 m

  3. Homespun remedy, homespun toxicity: baking soda ingestion for dyspepsia.

    PubMed

    Ajbani, Keyur; Chansky, Michael E; Baumann, Brigitte M

    2011-04-01

    A 68-year-old man presented to the Emergency Department with a severe metabolic alkalosis after ingesting large quantities of baking soda to treat his dyspepsia. His underlying pulmonary disease and a progressively worsening mental status necessitated intubation for respiratory failure. Laboratory studies revealed a hyponatremic, hypochloremic, hypokalemic metabolic alkalosis. The patient was successfully treated after cessation of the oral bicarbonate, initiation of intravenous hydration, and correction of electrolyte abnormalities.

  4. Dynamic ductile fracture of a central crack

    NASA Technical Reports Server (NTRS)

    Tsai, Y. M.

    1976-01-01

    A central crack, symmetrically growing at a constant speed in a two-dimensional ductile material subject to uniform tension at infinity, is investigated using integral transform methods. The crack is assumed to be a Dugdale crack, and the finite stress condition at the crack tip is satisfied during the propagation of the crack. Exact solution expressions are obtained for the finite stress condition at the crack tip, the crack shape, the crack opening displacement, and the energy release rate. All these expressions are written as the product of explicit dimensional quantities and a nondimensional dynamic correction function. The expressions reduce to the associated static results when the crack speed tends to zero, and the nondimensional dynamic correction functions are calculated for various values of the parameters involved.

  5. Accurate condensed history Monte Carlo simulation of electron transport. II. Application to ion chamber response simulations.

    PubMed

    Kawrakow, I

    2000-03-01

    In this report the condensed history Monte Carlo simulation of electron transport and its application to the calculation of ion chamber response is discussed. It is shown that the strong step-size dependencies and lack of convergence to the correct answer previously observed are the combined effect of the following artifacts caused by the EGS4/PRESTA implementation of the condensed history technique: dose underprediction due to PRESTA's pathlength correction and lateral correlation algorithm; dose overprediction due to the boundary crossing algorithm; dose overprediction due to the breakdown of the fictitious cross section method for sampling distances between discrete interactions; and the inaccurate evaluation of energy-dependent quantities. These artifacts are now understood quantitatively and analytical expressions for their effect are given.

  6. Correcting Velocity Dispersion Measurements for Inclination and Implications for the M-Sigma Relation

    NASA Astrophysics Data System (ADS)

    Bellovary, Jillian M.; Holley-Bockelmann, Kelly; Gultekin, Kayhan; Christensen, Charlotte; Governato, Fabio

    2015-01-01

    The relation between central black hole mass and stellar spheroid velocity dispersion (the M-Sigma relation) is one of the best-known correlations linking black holes and their host galaxies. However, there is a large amount of scatter at the low-mass end, indicating that the processes that relate black holes to lower-mass hosts are not straightforward. Some of this scatter can be explained by inclination effects; contamination from disk stars along the line of sight can artificially boost velocity dispersion measurements by 30%. Using state-of-the-art simulations, we have developed a correction factor for inclination effects based on purely observational quantities. We present the results of applying these factors to observed samples of galaxies and discuss the effects on the M-Sigma relation.

  7. Critical Analysis of the Mathematical Formalism of Theoretical Physics. II. Foundations of Vector Calculus

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2014-03-01

    A critical analysis of the foundations of standard vector calculus is proposed. The methodological basis of the analysis is the unity of formal logic and of rational dialectics. It is proved that vector calculus is an incorrect theory because: (a) it is not based on a correct methodological basis - the unity of formal logic and of rational dialectics; (b) it does not contain correct definitions of "movement," "direction" and "vector"; (c) it does not take into consideration the dimensions of physical quantities (i.e., number names, denominate numbers, concrete numbers) characterizing the concept of "physical vector," and, therefore, it has no natural-scientific meaning; (d) operations on "physical vectors" and the vector calculus propositions relating to "physical vectors" are contrary to formal logic.

  8. Quantum Physics and Mathematical Debates Concerning the Problem of the Ontological Priority between Continuous Quantity and Discrete Quantity

    NASA Astrophysics Data System (ADS)

    Pin, Victor Gómez

    In his book on the Categories (that is, on the ultimate elements of classification and order), in the chapter concerning quantity (IV, 20), Aristotle says that this concept covers two kinds of modalities: discrete quantity and continuous quantity, and he gives as examples number for the first; line, surface, solid, time and space for the second. The main philosophical problem raised by this text is to determine which of the two modalities of quantity has ontological priority over the other (given two concepts A and B, we assume that A has ontological priority over B if every entity that possesses the quality B necessarily possesses the quality A). The problem is magnified by the fact that space, which in some parts of Aristotle's Physics is mentioned not only as a category properly speaking but even as the main category, whose power can be amazing, is in the cited text of the book of Categories reduced to an expression of the continuum, sharing this condition with time. On this matter the controversy runs constantly through the common history of science and philosophy.

  9. Ten Years on: Does Graduate Student Promise Predict Later Scientific Achievement?

    ERIC Educational Resources Information Center

    Haslam, Nick; Laham, Simon M.

    2009-01-01

    We examined publication records of 60 social psychologists to determine whether publication record at the time of the PhD (t0) predicted scientific achievement (publication quantity, quality, and impact) ten years later (t10). Publication quantity and quality each correlated moderately across this time-span. Productivity and impact at t10 were…

  10. Topology Counts: Force Distributions in Circular Spring Networks.

    PubMed

    Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max

    2018-02-09

    Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.

  11. Holographic definition of points and distances

    NASA Astrophysics Data System (ADS)

    Czech, Bartłomiej; Lamprou, Lampros

    2014-11-01

    We discuss the way in which field theory quantities assemble the spatial geometry of three-dimensional anti-de Sitter space (AdS3). The field theory ingredients are the entanglement entropies of boundary intervals. A point in AdS3 corresponds to a collection of boundary intervals which is selected by a variational principle we discuss. Coordinates in AdS3 are integration constants of the resulting equation of motion. We propose a distance function for this collection of points, which obeys the triangle inequality as a consequence of the strong subadditivity of entropy. Our construction correctly reproduces the static slice of AdS3 and the Ryu-Takayanagi relation between geodesics and entanglement entropies. We discuss how these results extend to quotients of AdS3 —the conical defect and the BTZ geometries. In these cases, the set of entanglement entropies must be supplemented by other field theory quantities, which can carry the information about lengths of nonminimal geodesics.

  12. Using a two-lens afocal compensator for thermal defocus correction of catadioptric system

    NASA Astrophysics Data System (ADS)

    Ivanov, S. E.; Romanova, G. E.; Bakholdin, A. V.

    2017-08-01

    This work concerns catadioptric systems with a two-component afocal achromatic compensator. Most catadioptric systems with an afocal compensator have a powered mirror part and a corrective lens part. The corrective lens part can be located in the parallel beam, in the convergent beam, or in both. One of the problems in designing such systems is thermal defocus, caused by thermal aberration and thermal expansion of the housing. We introduce a technique for compensating thermal defocus by choosing the optical materials of the afocal compensator components. The components should be made from optical materials with thermo-optical characteristics such that, after a temperature change, the compensator becomes non-afocal, with optical power sufficient to compensate the thermal shift of the image plane. The Abbe numbers of the components should also have certain values for the correction of chromatic aberrations, which substantially reduces the number of applicable optical materials. The catalogues of most vendors of optical materials in the visible spectral range were studied with the purpose of finding suitable material pairs for the technique. As a result, the advantages and possibilities of applying plastic materials in combination with optical glasses are shown. Examples of the optical design are given.

  13. First all-in-one diagnostic tool for DNA intelligence: genome-wide inference of biogeographic ancestry, appearance, relatedness, and sex with the Identitas v1 Forensic Chip.

    PubMed

    Keating, Brendan; Bansal, Aruna T; Walsh, Susan; Millman, Jonathan; Newman, Jonathan; Kidd, Kenneth; Budowle, Bruce; Eisenberg, Arthur; Donfack, Joseph; Gasparini, Paolo; Budimlija, Zoran; Henders, Anjali K; Chandrupatla, Hareesh; Duffy, David L; Gordon, Scott D; Hysi, Pirro; Liu, Fan; Medland, Sarah E; Rubin, Laurence; Martin, Nicholas G; Spector, Timothy D; Kayser, Manfred

    2013-05-01

    When a forensic DNA sample cannot be associated directly with a previously genotyped reference sample by standard short tandem repeat profiling, the investigation required for identifying perpetrators, victims, or missing persons can be both costly and time consuming. Here, we describe the outcome of a collaborative study using the Identitas Version 1 (v1) Forensic Chip, the first commercially available all-in-one tool dedicated to the concept of developing intelligence leads based on DNA. The chip allows parallel interrogation of 201,173 genome-wide autosomal, X-chromosomal, Y-chromosomal, and mitochondrial single nucleotide polymorphisms for inference of biogeographic ancestry, appearance, relatedness, and sex. The first assessment of the chip's performance was carried out on 3,196 blinded DNA samples of varying quantities and qualities, covering a wide range of biogeographic origin and eye/hair coloration as well as variation in relatedness and sex. Overall, 95 % of the samples (N = 3,034) passed quality checks with an overall genotype call rate >90 %, with variable amounts of recorded trait information available. Predictions of sex, direct match, and first- to third-degree relatedness were highly accurate. Chip-based predictions of biparental continental ancestry were on average ~94 % correct (further support provided by separately inferred patrilineal and matrilineal ancestry). Predictions of eye color were 85 % correct for brown and 70 % correct for blue eyes, and predictions of hair color were 72 % for brown, 63 % for blond, 58 % for black, and 48 % for red hair. Of the 5 % of samples (N = 162) with <90 % call rate, 56 % yielded correct continental ancestry predictions, while 7 % yielded sufficient genotypes to allow hair and eye color prediction. Our results demonstrate that the Identitas v1 Forensic Chip holds great promise for a wide range of applications including criminal investigations, missing person investigations, and national security purposes.

  14. Telepharmacy and bar-code technology in an i.v. chemotherapy admixture area.

    PubMed

    O'Neal, Brian C; Worden, John C; Couldry, Rick J

    2009-07-01

    A program using telepharmacy and bar-code technology to increase the presence of the pharmacist at a critical risk point during chemotherapy preparation is described. Telepharmacy hardware and software were acquired, and an inspection camera was placed in a biological safety cabinet to allow the pharmacy technician to take digital photographs at various stages of the chemotherapy preparation process. Once the pharmacist checks the medication vials' agreement with the work label, the technician takes the product into the biological safety cabinet, where the appropriate patient is selected from the pending work list, a queue of patient orders sent from the pharmacy information system. The technician then scans the bar code on the vial. Assuming the bar code matches, the technician photographs the work label, vials, diluents and fluids to be used, and the syringe (before injecting the contents into the bag) along with the vial. The pharmacist views all images as a part of the final product-checking process. This process allows the pharmacist to verify that the correct quantity of medication was transferred from the primary source to a secondary container without being physically present at the time of transfer. Telepharmacy and bar coding provide a means to improve the accuracy of chemotherapy preparation by decreasing the likelihood of using the incorrect product or quantity of drug. The system facilitates the reading of small product labels and removes the need for a pharmacist to handle contaminated syringes and vials when checking the final product.
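
    A minimal sketch of the bar-code agreement check at the vial-scan step described above. The order record, NDC values and field names are hypothetical; an actual deployment would query the pharmacy information system.

    # Order record, NDC values and field names are hypothetical.
    pending_order = {"patient_id": "P001", "ndc": "12345-6789-01",
                     "drug": "example chemotherapy 100 mg vial", "vials": 2}

    def verify_scan(scanned_ndc, order):
        """Allow preparation to continue only if the scanned vial matches."""
        if scanned_ndc != order["ndc"]:
            print("MISMATCH: wrong product scanned - stop preparation")
            return False
        print(f"Match: {order['drug']} for patient {order['patient_id']}")
        return True

    verify_scan("12345-6789-01", pending_order)   # proceeds to the photo step
    verify_scan("98765-4321-00", pending_order)   # blocked before admixture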

  15. Satellite-based high-resolution mapping of rainfall over southern Africa

    NASA Astrophysics Data System (ADS)

    Meyer, Hanna; Drönner, Johannes; Nauss, Thomas

    2017-06-01

    A spatially explicit mapping of rainfall over southern Africa is necessary for eco-climatological studies and nowcasting, but accurate estimates remain a challenging task. This study presents a method to estimate hourly rainfall based on data from the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI). Rainfall measurements from about 350 weather stations from 2010-2014 served as ground truth for calibration and validation. SEVIRI and weather station data were used to train neural networks that allowed the estimation of rainfall area and rainfall quantities over all times of the day. The results revealed that 60 % of recorded rainfall events were correctly classified by the model (probability of detection, POD). However, the false alarm ratio (FAR) was high (0.80), leading to a Heidke skill score (HSS) of 0.18. Hourly rainfall quantities were estimated with an average hourly correlation of ρ = 0.33 and a root mean square error (RMSE) of 0.72. The correlation increased with temporal aggregation to 0.52 (daily), 0.67 (weekly) and 0.71 (monthly). The main weakness was the overestimation of rainfall events. The model results were compared to the Integrated Multi-satellitE Retrievals for GPM (IMERG) of the Global Precipitation Measurement (GPM) mission. Despite being a comparably simple approach, the presented MSG-based rainfall retrieval outperformed GPM IMERG in terms of rainfall area detection: GPM IMERG had a considerably lower POD. The HSS was not significantly different compared to the MSG-based retrieval due to a lower FAR of GPM IMERG. There were no further significant differences between the MSG-based retrieval and GPM IMERG in terms of correlation with the observed rainfall quantities. The MSG-based retrieval, however, provides rainfall in a higher spatial resolution. Though estimating rainfall from satellite data remains challenging, especially at high temporal resolutions, this study shows promising results towards improved spatio-temporal estimates of rainfall over southern Africa.
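
    POD, FAR and HSS as quoted above are standard scores computed from a 2x2 contingency table of forecast versus observed rain. A minimal sketch follows; the counts are invented, chosen only so the resulting scores land near the values quoted in the abstract.

    # Invented counts chosen so the scores land near the abstract's values.
    def verification_scores(hits, misses, false_alarms, correct_negs):
        pod = hits / (hits + misses)                # probability of detection
        far = false_alarms / (hits + false_alarms)  # false alarm ratio
        n = hits + misses + false_alarms + correct_negs
        # Heidke skill score: correct classifications relative to chance.
        chance = ((hits + misses) * (hits + false_alarms)
                  + (misses + correct_negs) * (false_alarms + correct_negs)) / n
        return pod, far, (hits + correct_negs - chance) / (n - chance)

    pod, far, hss = verification_scores(hits=600, misses=400,
                                        false_alarms=2400, correct_negs=6850)
    print(round(pod, 2), round(far, 2), round(hss, 2))   # 0.6 0.8 0.18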

  16. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    NASA Astrophysics Data System (ADS)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. The current paper discusses and presents the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.
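
    The DCF multiplicity-moment algebra itself is in Croft and Favalli (2017) and is not reproduced here; as a hedged pointer to the underlying problem, the sketch below applies the classic single-rate dead-time models (non-paralyzable and paralyzable), with invented numbers.

    import math

    # Classic single-rate dead-time models (illustrative numbers only):
    #   non-paralyzable: m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)
    #   paralyzable:     m = n * exp(-n*tau)  (forward model only)
    def true_rate_nonparalyzable(m, tau):
        return m / (1.0 - m * tau)

    def observed_rate_paralyzable(n, tau):
        return n * math.exp(-n * tau)

    m, tau = 95_000.0, 0.5e-6                        # counts/s and s, invented
    print(true_rate_nonparalyzable(m, tau))          # ~99,700 counts/s
    print(observed_rate_paralyzable(99_737.0, tau))  # ~94,900: the models differ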

  17. Coupling Monte Carlo simulations with thermal analysis for correcting microdosimetric spectra from a novel micro-calorimeter

    NASA Astrophysics Data System (ADS)

    Fathi, K.; Galer, S.; Kirkby, K. J.; Palmans, H.; Nisbet, A.

    2017-11-01

    The high uncertainty in the Relative Biological Effectiveness (RBE) values of particle therapy beams, which are used in combination with the quantity absorbed dose in radiotherapy, together with the increase in the number of particle therapy centres worldwide, necessitates a better understanding of the biological effect of such modalities. The present novel study is part of the performance testing and development of a micro-calorimeter based on Superconducting QUantum Interference Devices (SQUIDs). Unlike other microdosimetric detectors that are used for investigating the energy distribution, this detector provides a direct measurement of energy deposition at the micrometre scale, which can be used to improve our understanding of biological effects in particle therapy applications, radiation protection and environmental dosimetry. Temperature rises of less than 1 μK are detectable and, when combined with the low specific heat capacity of the absorber at cryogenic temperature, an extremely high energy deposition sensitivity of approximately 0.4 eV can be achieved. The detector consists of 3 layers: a tissue equivalent (TE) absorber, a superconducting (SC) absorber and a silicon substrate. Ideally all energy would be absorbed in the TE absorber and the heat rise in the superconducting layer would arise from heat conduction from the TE layer. However, in practice direct particle absorption occurs in all 3 layers and must be corrected for. To investigate the thermal behaviour within the detector, and quantify any possible correction, particle tracks were simulated employing Geant4 (v9.6) Monte Carlo simulations. The track information was then passed to the COMSOL Multiphysics (Finite Element Method) software. The 3D heat transfer within each layer was then evaluated in a time-dependent model. For a statistically reliable outcome, the simulations had to be repeated for a large number of particles. An automated system has been developed that couples the Geant4 Monte Carlo output to COMSOL for determining the expected distribution of proton tracks and their thermal contribution within the detector. The correction factor for a 3.8 MeV proton pencil beam was determined and applied to the expected spectra. The corrected microdosimetric spectrum was shown to be in good agreement with the ideal spectrum.

  18. The relation between visualization size, grouping, and user performance.

    PubMed

    Gramazio, Connor C; Schloss, Karen B; Laidlaw, David H

    2014-12-01

    In this paper we make the following contributions: (1) we describe how the grouping, quantity, and size of visual marks affects search time based on the results from two experiments; (2) we report how search performance relates to self-reported difficulty in finding the target for different display types; and (3) we present design guidelines based on our findings to facilitate the design of effective visualizations. Both Experiment 1 and 2 asked participants to search for a unique target in colored visualizations to test how the grouping, quantity, and size of marks affects user performance. In Experiment 1, the target square was embedded in a grid of squares and in Experiment 2 the target was a point in a scatterplot. Search performance was faster when colors were spatially grouped than when they were randomly arranged. The quantity of marks had little effect on search time for grouped displays ("pop-out"), but increasing the quantity of marks slowed reaction time for random displays. Regardless of color layout (grouped vs. random), response times were slowest for the smallest mark size and decreased as mark size increased to a point, after which response times plateaued. In addition to these two experiments we also include potential application areas, as well as results from a small case study where we report preliminary findings that size may affect how users infer how visualizations should be used. We conclude with a list of design guidelines that focus on how to best create visualizations based on grouping, quantity, and size of visual marks.

  19. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Community corrections center good time...

  20. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Community corrections center good time...

  1. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Community corrections center good time...

  2. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Community corrections center good time...

  3. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Community corrections center good time...

  4. Comparison of Tissue Density in Hounsfield Units in Computed Tomography and Cone Beam Computed Tomography.

    PubMed

    Varshowsaz, Masoud; Goorang, Sepideh; Ehsani, Sara; Azizi, Zeynab; Rahimian, Sepideh

    2016-03-01

    Bone quality and quantity assessment is one of the most important steps in implant treatment planning. Different methods such as computed tomography (CT) and, more recently, cone beam computed tomography (CBCT), with lower radiation dose, time and cost, are used for bone density assessment. This in vitro study aimed to compare the tissue density values in Hounsfield units (HUs) in CBCT and CT scans of different tissue phantoms with two different thicknesses, two different image acquisition settings and in three locations in the phantoms. Four different tissue phantoms, namely hard tissue, soft tissue, air and water, were scanned by three different CBCT systems and a CT system in two thicknesses (full and half) and two image acquisition settings (high and low kVp and mA). The images were analyzed at three sites (middle, periphery and intermediate) using eFilm software. The difference in density values was analyzed by ANOVA and the correlation coefficient test (P<0.05). There was a significant difference between density values in CBCT and CT scans in most situations, and CBCT values were not similar to CT values in any of the phantoms across the different thicknesses, acquisition parameters or the three different sites. The correlation coefficients confirmed the results. CBCT is therefore not reliable for tissue density assessment. The results were not affected by changes in thickness, acquisition parameters or location.

  5. Measurement of LYSO Intrinsic Light Yield Using Electron Excitation

    NASA Astrophysics Data System (ADS)

    Turtos, Rosana Martinez; Gundacker, Stefan; Pizzichemi, Marco; Ghezzi, Alessio; Pauwels, Kristof; Auffray, Etiennette; Lecoq, Paul; Paganoni, Marco

    2016-04-01

    The determination of the intrinsic light yield (LYint) of scintillating crystals, i.e. the number of optical photons created per amount of energy deposited, constitutes a key factor in characterizing and optimizing their energy and time resolution. Until now, however, measurements of this quantity have been affected by large uncertainties and often rely on corrections for bulk absorption and surface/edge state. The novel idea presented in this contribution is based on the confinement of the scintillation emission in the central upper part of a 10 mm cubic crystal using a 1.5 MeV electron beam with a diameter of 1 mm. A black non-reflective pinhole aligned with the excitation point is used to fix the light extraction solid angle (narrower than the total reflection angle), which then sets a light-cone travel path through the crystal. The final number of photoelectrons detected using a Hamamatsu R2059 photomultiplier tube (PMT) was corrected for the extraction solid angle, the Fresnel reflection coefficient and the quantum efficiency (QE) of the PMT. The total number of optical photons produced per energy deposited was found to be 40000 ph/MeV ± 9% (syst) ± 3% (stat) for LYSO. Simulations using Geant4 were successfully compared to light output measurements of crystals with a 2 × 2 mm2 cross-section and lengths of 5-30 mm, in order to validate the light transport model and set a limit on Light Transfer Efficiency estimations.
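
    The correction chain in this abstract divides detected photoelectrons by the extraction solid-angle fraction, the Fresnel transmission and the PMT quantum efficiency, then normalizes by deposited energy. A sketch with placeholder numbers (not the paper's measured values):

    # All numbers are placeholders, not the paper's measured values.
    def intrinsic_light_yield(n_phe, energy_mev, solid_angle_frac,
                              fresnel_transmission, quantum_efficiency):
        detected_fraction = (solid_angle_frac * fresnel_transmission
                             * quantum_efficiency)
        return n_phe / detected_fraction / energy_mev   # photons/MeV

    print(intrinsic_light_yield(n_phe=150.0, energy_mev=1.5,
                                solid_angle_frac=0.01,
                                fresnel_transmission=0.91,
                                quantum_efficiency=0.25))  # ~4.4e4 ph/MeV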

  6. Stability of Gradient Field Corrections for Quantitative Diffusion MRI.

    PubMed

    Rogers, Baxter P; Blaber, Justin; Welch, E Brian; Ding, Zhaohua; Anderson, Adam W; Landman, Bennett A

    2017-02-11

    In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient fields, we predicted the obtained b-values and applied gradient directions throughout a typical field of view for brain imaging for a typical 32-direction diffusion imaging sequence. We measured the stability of these predictions over time. At 80 mm from scanner isocenter, predicted b-value was 1-6% different than intended due to gradient nonlinearity, and predicted gradient directions were in error by up to 1 degree. Over the course of one month the change in these quantities due to calibration-related factors such as scanner drift and variation in phantom placement was <0.5% for b-values, and <0.5 degrees for angular deviation. The proposed calibration procedure allows the estimation of gradient nonlinearity to correct b-values and gradient directions ahead of advanced diffusion image processing for high angular resolution data, and requires only a five-minute phantom scan that can be included in a weekly or monthly quality assurance protocol.
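
    A standard way to apply such a calibration, assumed here for illustration: if L(r) is the 3x3 gradient-nonlinearity tensor from the solid-harmonic expansion (identity at isocenter), the achieved gradient is L(r)g, so the b-value scales by |L(r)g|^2 for a unit direction g, which then rotates accordingly. The tensor below is an invented few-percent perturbation, not a measured coil calibration.

    import numpy as np

    # Assumed linear correction: g_eff = L(r) @ g, b_eff = b |g_eff|^2/|g|^2.
    def correct_bval_bvec(L, g, b_nominal):
        g_eff = L @ g
        scale = (g_eff @ g_eff) / (g @ g)
        return b_nominal * scale, g_eff / np.linalg.norm(g_eff)

    L = np.eye(3) + np.array([[0.02, 0.00, 0.01],
                              [0.00, -0.03, 0.00],
                              [0.01, 0.00, 0.04]])  # invented, few-percent terms
    g = np.array([1.0, 0.0, 0.0])                   # unit diffusion direction
    b_eff, g_corr = correct_bval_bvec(L, g, b_nominal=1000.0)
    print(b_eff, g_corr)  # ~4% b-value shift, ~0.6 degree direction tilt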

  7. Superficial Enhanced Fluid Fat Injection (SEFFI) to Correct Volume Defects and Skin Aging of the Face and Periocular Region.

    PubMed

    Bernardini, Francesco P; Gennai, Alessandro; Izzo, Luigi; Zambelli, Alessandra; Repaci, Erica; Baldelli, Ilaria; Fraternali-Orcioni, G; Hartstein, Morris E; Santi, Pier Luigi; Quarto, Rodolfo

    2015-07-01

    Although recent research on micro fat has shown the potential advantages of superficial implantation and high stem cell content, clinical applications thus far have been limited. The authors report their experience with superficial enhanced fluid fat injection (SEFFI) for the correction of volume loss and skin aging of the face in general and in the periocular region. The finer SEFFI preparation (0.5 mL) was injected into the orbicularis in the periorbital and perioral areas, and the 0.8-mL preparation was injected subdermally elsewhere in the face. The records of 98 consecutive patients were reviewed. Average follow-up time was 6 months, and average volume of implanted fat was 20 mL and 51.4 mL for the 0.5-mL and 0.8-mL preparations, respectively. Good or excellent results were achieved for volume restoration and skin improvement in all patients. Complications were minor and included an oil cyst in 3 patients. The smaller SEFFI quantity (0.5 mL) was well suited to correct volume loss in the eyelids, especially the deep upper sulcus and tear trough, whereas the larger SEFFI content was effective for larger volume deficits in other areas of the face, including the brow, temporal fossa, zygomatic-malar region, nasolabial folds, marionette lines, chin, and lips. The fat administered by SEFFI is easily harvested via small side-port cannulae, yielding micro fat that is rich in viable adipocytes and stem cells. Both volumes of fat (0.5 mL and 0.8 mL) were effective for treating age-related lipoatrophy, reducing facial rhytids, and improving skin quality.

  8. The Coast Artillery Journal. Volume 72, Number 3, March 1930

    DTIC Science & Technology

    1930-03-01

    computation of the score. A careful study of these instructions is necessary in order that the score be correctly computed. The quantities substituted... components: A, B, C, D; the score for a practice is A + B + C - D. D, the penalty component, is always subtracted. B and C also may, in extreme cases, be... studying the subject, and our own Navy will have a receiving set in every airplane and a two-way set in each command plane. In addition to other

  9. Relative quantity judgments between discrete spatial arrays by chimpanzees (Pan troglodytes) and New Zealand robins (Petroica longipes).

    PubMed

    Garland, Alexis; Beran, Michael J; McIntyre, Joseph; Low, Jason

    2014-08-01

    Quantity discrimination for items spread across spatial arrays was investigated in chimpanzees (Pan troglodytes) and North Island New Zealand robins (Petroica longipes), with the aim of examining the role of spatial separation on the ability of these 2 species to sum and compare nonvisible quantities which are both temporally and spatially separated, and to assess the likely mechanism supporting such summation performance. Birds and chimpanzees compared 2 sets of discrete quantities of items that differed in number. Six quantity comparisons were presented to both species: 1v2, 1v3, 1v5, 2v3, 2v4, and 2v5. Each was distributed one item at a time across two 7-location arrays. Every individual item was viewed one at a time and hidden, with no more than a single item in each location of an array, in contrast to a format where all items were placed together into 2 single locations. Subjects responded by selecting 1 of the 2 arrays and received the entire quantity of food items hidden within that array. Both species performed better than chance levels. The ratio of items between sets was a significant predictor of performance in the chimpanzees, but it was not significant for robins. Instead, the absolute value of the smaller quantity of items presented was the significant factor in robin responses. These results suggest a species difference for this task when considering dimensions such as the ratio or total number of items in quantity comparisons distributed across discrete 7-location arrays.

  10. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE PAGES

    Lockhart, M.; Henzlova, D.; Croft, S.; ...

    2017-09-20

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. Here, we discuss and present the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.

  11. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, M.; Henzlova, D.; Croft, S.

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. Here, we discuss and present the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.

  12. Reconceptualizing perceptual load as a rate problem: The role of time in the allocation of selective attention.

    PubMed

    Li, Zhi; Xin, Keyun; Li, Wei; Li, Yanzhe

    2018-04-30

    In the literature about allocation of selective attention, a widely studied question is when attention will be allocated to information that is clearly irrelevant to the task at hand. The present study, using convergent evidence, demonstrated that there is a trade-off between the quantity of information present in a display and the time allowed to process it. Specifically, whether or not there is interference from irrelevant distractors depends not only on the amount of information present, but also on the amount of time allowed to process that information. When processing time is calibrated to the amount of information present, irrelevant distractors can be selectively ignored successfully. These results suggest that the perceptual load in the load theory of selective attention (i.e., Lavie, 2005) should be thought of as a dynamic rate problem rather than a static capacity limitation. The authors thus propose that perceptual load be conceived of not as a quantity of information, but as a quantity of information per unit of time. In other words, it is the relationship between the quantity of information in the task and the time for processing that information that determines the allocation of selective attention. The present findings thus extend load theory, allowing it to explain findings that were previously considered counterevidence to it.

  13. Conformal killing tensors and covariant Hamiltonian dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cariglia, M., E-mail: marco@iceb.ufop.br; Gibbons, G. W., E-mail: G.W.Gibbons@damtp.cam.ac.uk; LE STUDIUM, Loire Valley Institute for Advanced Studies, Tours and Orleans

    2014-12-15

    A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.
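
    For orientation, the textbook rank-2 case behind these constructions can be stated in two lines; the paper's time-dependent conformal Killing tensors with flux generalize it with additional terms:

    \[
    Q = K^{\mu\nu} p_\mu p_\nu , \qquad
    \frac{\mathrm{d}Q}{\mathrm{d}\tau} = \nabla^{(\lambda} K^{\mu\nu)} \, p_\lambda p_\mu p_\nu ,
    \]

    so Q is conserved along geodesics whenever \( \nabla^{(\lambda} K^{\mu\nu)} = 0 \) (a Killing tensor); for a conformal Killing tensor, \( \nabla^{(\lambda} K^{\mu\nu)} = g^{(\lambda\mu} k^{\nu)} \) and Q is conserved along null geodesics.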

  14. Micro-Pulse Lidar Signals: Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Starr, David OC. (Technical Monitor)

    2002-01-01

    Micro-pulse lidar (MPL) systems are small, autonomous, eye-safe lidars used for continuous observations of the vertical distribution of cloud and aerosol layers. Since the construction of the first MPL in 1993, procedures have been developed to correct for various instrument effects present in MPL signals. The primary instrument effects include afterpulse, laser-detector cross-talk, and overlap (poor focusing at near range, less than 6 km). Accurate correction of both afterpulse and overlap effects is required to study both clouds and aerosols. Furthermore, the outgoing energy of the laser pulses and the statistical uncertainty of the MPL detector must also be correctly determined in order to assess the accuracy of MPL observations. The uncertainties associated with the afterpulse, overlap, pulse energy, detector noise, and all remaining quantities affecting measured MPL signals are determined in this study. The uncertainties are propagated through the entire MPL correction process to give a net uncertainty on the final corrected MPL signal. The results show that in the near range, the overlap uncertainty dominates. At altitudes above the overlap region, the dominant source of uncertainty is the uncertainty in the pulse energy. However, if the laser energy is low, then during mid-day, high solar background levels can significantly reduce the signal-to-noise ratio of the detector. In such a case, the statistical uncertainty of the detector count rate becomes dominant at altitudes above the overlap region.
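
    As a hedged illustration of how such a propagation works, the sketch below pushes quadrature uncertainties through a simplified multiplicative correction chain standing in for the full MPL correction equation (the real equation and numbers differ):

    import numpy as np

    # Simplified stand-in for the MPL correction chain:
    #     S = (raw - afterpulse) / (overlap * energy).
    # For a product/quotient of independent quantities, relative variances
    # add; the subtraction in the numerator is propagated absolutely first.
    def corrected_signal_with_uncertainty(raw, s_raw, ap, s_ap,
                                          ovl, s_ovl, E, s_E):
        num = raw - ap
        s_num = np.hypot(s_raw, s_ap)          # absolute, for the difference
        S = num / (ovl * E)
        rel = np.sqrt((s_num / num) ** 2 + (s_ovl / ovl) ** 2 + (s_E / E) ** 2)
        return S, S * rel

    S, s_S = corrected_signal_with_uncertainty(raw=1.00, s_raw=0.01,
                                               ap=0.05, s_ap=0.005,
                                               ovl=0.90, s_ovl=0.05,
                                               E=1.0, s_E=0.03)
    print(S, s_S)   # here the overlap term dominates, echoing the study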

  15. Second-order Boltzmann equation: gauge dependence and gauge invariance

    NASA Astrophysics Data System (ADS)

    Naruko, Atsushi; Pitrou, Cyril; Koyama, Kazuya; Sasaki, Misao

    2013-08-01

    In the context of cosmological perturbation theory, we derive the second-order Boltzmann equation describing the evolution of the distribution function of radiation without a specific gauge choice. The essential steps in deriving the Boltzmann equation are revisited and extended given this more general framework: (i) the polarization of light is incorporated in this formalism by using a tensor-valued distribution function; (ii) the importance of a choice of the tetrad field to define the local inertial frame in the description of the distribution function is emphasized; (iii) we perform a separation between temperature and spectral distortion, both for the intensity and polarization for the first time; (iv) the gauge dependence of all perturbed quantities that enter the Boltzmann equation is derived, and this enables us to check the correctness of the perturbed Boltzmann equation by explicitly showing its gauge-invariance for both intensity and polarization. We finally discuss several implications of the gauge dependence for the observed temperature.

  16. Perceived Barriers to Scholarship and Research Among Pharmacy Practice Faculty: Survey Report from the AACP Scholarship/Research Faculty Development Task Force

    PubMed Central

    Robles, J. R.; Youmans, Sharon L.; Byrd, Debbie C.

    2009-01-01

    Objectives To identify problems that pharmacy practice faculty members face in pursuing scholarship and to develop and recommend solutions. Methods Department chairs were asked to forward a Web-based survey instrument to their faculty members. Global responses and responses stratified by demographics were summarized and analyzed. Results Between 312 and 340 faculty members answered questions that identified barriers to scholarship and recommended corrective strategies for those barriers. The most common barrier was insufficient time (57%), and the most common recommendation was for help to “identify a research question and how to answer it.” Sixty percent reported that scholarship was required for advancement but only 32% thought scholarship should be required. Forty-one percent reported that the importance of scholarship is overemphasized. Conclusions These survey results provide guidance to improve the quantity and quality of scholarship for faculty members who wish to pursue scholarship, although many of the survey respondents indicated they did not regard scholarship as a priority. PMID:19513155

  17. Qualitative and quantitative analysis of seven oligosaccharides in Morinda officinalis using double-development HPTLC and scanning densitometry.

    PubMed

    Zhou, Bin; Chang, Jun; Wang, Ping; Li, Jie; Cheng, Dan; Zheng, Peng-Wu

    2014-01-01

    Morinda officinalis has long been used as a Yang-tonic agent in China, and its quality can be evaluated. A double-development high performance thin layer chromatography (HPTLC) method has been established to simultaneously analyze the quality and quantity of seven inulin-type oligosaccharides (DP=3-9) in Morinda officinalis. The chromatography was performed on a silica gel 60 plate with n-butanol-isopropanol-water-acetic acid (7:5:2:1, v/v) for both the first and second developments. The bands were visualized by reaction with an aniline-diphenylamine-phosphoric acid solution, and quantification of the seven oligosaccharides was achieved by densitometric scanning at 540 nm. The investigated standard sugars had good linearity (R2>0.99) within the test ranges. The amounts of the seven oligosaccharides were calculated using the relative correction factor (RCF). The developed TLC method could therefore be used for quality control of Morinda officinalis.

  18. Pileup per particle identification

    DOE PAGES

    Bertolini, Daniele; Harris, Philip; Low, Matthew; ...

    2014-10-09

    We propose a new method for pileup mitigation by implementing “pileup per particle identification” (PUPPI). For each particle we first define a local shape α which probes the collinear versus soft diffuse structure in the neighborhood of the particle. The former is indicative of particles originating from the hard scatter and the latter of particles originating from pileup interactions. The distribution of α for charged pileup, assumed as a proxy for all pileup, is used on an event-by-event basis to calculate a weight for each particle. The weights describe the degree to which particles are pileup-like and are used to rescale their four-momenta, superseding the need for jet-based corrections. Furthermore, the algorithm flexibly allows combination with other, possibly experimental, probabilistic information associated with particles such as vertexing and timing performance. We demonstrate the algorithm improves over existing methods by looking at jet pT and jet mass. As a result, we also find an improvement on non-jet quantities like missing transverse energy.
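
    A toy rendition of the per-particle weighting idea, with a simplified local shape and weight mapping that only loosely follow the published definitions (real PUPPI uses a signed chi-square and its tail probability):

    import numpy as np

    # Toy version: alpha_i is a log-sum of neighbor pT / DeltaR, compared
    # to the median/RMS of the charged-pileup alpha distribution. These
    # are simplified stand-ins for the published PUPPI definitions.
    def delta_r(a, b):
        dphi = np.mod(a[2] - b[2] + np.pi, 2 * np.pi) - np.pi
        return np.hypot(a[1] - b[1], dphi)

    def alpha(i, parts, cone=0.4):
        s = sum(p[0] / delta_r(parts[i], p)
                for j, p in enumerate(parts)
                if j != i and 0.0 < delta_r(parts[i], p) < cone)
        return np.log1p(s)   # log(1+s) keeps isolated particles finite

    # Toy event: particles are (pT, eta, phi, is_charged_pileup).
    rng = np.random.default_rng(1)
    hard = [(20.0 + rng.exponential(5.0), 0.1 * rng.normal(),
             0.1 * rng.normal(), False) for _ in range(5)]
    pileup = [(rng.exponential(1.0), rng.uniform(-2.0, 2.0),
               rng.uniform(-np.pi, np.pi), True) for _ in range(60)]
    event = hard + pileup

    alphas = np.array([alpha(i, event) for i in range(len(event))])
    pu = alphas[np.array([p[3] for p in event])]   # charged-pileup proxy
    med, rms = np.median(pu), np.std(pu) + 1e-9

    # Particles far above the pileup alpha distribution get weight ~1;
    # pileup-like particles get weight ~0. The weight rescales pT here,
    # standing in for rescaling the full four-momentum.
    excess = np.clip((alphas - med) / rms, 0.0, None) ** 2
    weights = excess / (1.0 + excess)
    rescaled_pt = np.array([p[0] for p in event]) * weights
    print(np.round(rescaled_pt[:5], 1))   # hard particles keep most pT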

  19. Plasma Modeling with Speed-Limited Particle-in-Cell Techniques

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Werner, G. R.; Cary, J. R.; Stoltz, P. H.

    2017-10-01

    Speed-limited particle-in-cell (SLPIC) modeling is a new particle simulation technique for modeling systems wherein numerical constraints, e.g. limitations on timestep size required for numerical stability, are significantly more restrictive than is needed to model slower kinetic processes of interest. SLPIC imposes artificial speed-limiting behavior on fast particles whose kinetics do not play meaningful roles in the system dynamics, thus enabling larger simulation timesteps and more rapid modeling of such plasma discharges. The use of SLPIC methods to model plasma sheath formation and the free expansion of plasma into vacuum will be demonstrated. Wallclock times for these simulations, relative to conventional PIC, are reduced by a factor of 2.5 for the plasma expansion problem and by over 6 for the sheath formation problem; additional speedup is likely possible. Physical quantities of interest are shown to be correct for these benchmark problems. Additional SLPIC applications will also be discussed. Supported by US DoE SBIR Phase I/II Award DE-SC0015762.
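
    The speed-limiting idea can be caricatured in one dimension: cap the speed used in the particle push so the timestep can exceed the usual stability limit, and inflate the particle's statistical weight by the same factor so the flux it deposits is preserved. This is only a sketch of the concept, not the published algorithm.

```python
def slpic_push(x, v, weight, dt, v_lim):
    """One speed-limited particle push (1D sketch).

    Particles faster than v_lim advance with a capped speed; their
    statistical weight is inflated by the same factor so the particle
    flux (weight * velocity) they deposit is unchanged.
    """
    speed = abs(v)
    beta = min(1.0, v_lim / speed) if speed > 0.0 else 1.0
    x_new = x + beta * v * dt       # limited displacement
    w_eff = weight / beta           # compensating deposition weight
    return x_new, w_eff

# A particle 10x over the speed limit moves at v_lim but deposits
# 10x its nominal weight, preserving the flux it carries.
print(slpic_push(x=0.0, v=10.0, weight=1.0, dt=0.1, v_lim=1.0))
```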

  20. On the Computation of Sound by Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Streett, Craig L.; Sarkar, Sutanu

    1997-01-01

    The effect of the small scales on the source term in Lighthill's acoustic analogy is investigated, with the objective of determining the accuracy of large-eddy simulations when applied to studies of flow-generated sound. The distribution of the turbulent quadrupole is predicted accurately if models that take into account the trace of the SGS stresses are used. Its spatial distribution is also correct, indicating that the low-wave-number (or frequency) part of the sound spectrum can be predicted well by LES. Filtering, however, removes the small-scale fluctuations that contribute significantly to the higher derivatives in space and time of Lighthill's stress tensor T_ij. The rms fluctuations of the filtered derivatives are substantially lower than those of the unfiltered quantities. The small scales, however, are not strongly correlated, and are not expected to contribute significantly to the far-field sound; separate modeling of the subgrid-scale density fluctuations might, however, be required in some configurations.

  1. Assessment of primer/template mismatch effects on real-time PCR amplification of target taxa for GMO quantification.

    PubMed

    Ghedira, Rim; Papazova, Nina; Vuylsteke, Marnik; Ruttink, Tom; Taverniers, Isabel; De Loose, Marc

    2009-10-28

    GMO quantification, based on real-time PCR, relies on the amplification of an event-specific transgene assay and a species-specific reference assay. The uniformity of the nucleotide sequences targeted by both assays across various transgenic varieties is an important prerequisite for correct quantification. Single nucleotide polymorphisms (SNPs) frequently occur in the maize genome and might lead to nucleotide variation in regions used to design primers and probes for reference assays. Further, they may affect the annealing of the primer to the template and reduce the efficiency of DNA amplification. We assessed the effect of a minor DNA template modification, such as a single base pair mismatch in the primer attachment site, on real-time PCR quantification. A model system was used based on the introduction of artificial mismatches between the forward primer and the DNA template in the reference assay targeting the maize starch synthase (SSIIb) gene. The results show that the presence of a mismatch between the primer and the DNA template causes partial to complete failure of the amplification of the initial DNA template depending on the type and location of the nucleotide mismatch. With this study, we show that the presence of a primer/template mismatch affects the estimated total DNA quantity to a varying degree.
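
    The bias mechanism is easy to reproduce with the textbook amplification model N(c) = N0(1+E)^c: a mismatch that lowers the efficiency E delays Cq, and quantifying that Cq with the nominal efficiency underestimates the template. The numbers below are illustrative only, not from the study.

```python
import math

THRESHOLD = 1e9  # arbitrary fluorescence-threshold copy number

def cq(n0, efficiency):
    """Cycle at which N(c) = n0 * (1 + E)^c crosses the threshold."""
    return math.log(THRESHOLD / n0) / math.log(1.0 + efficiency)

def estimated_copies(observed_cq, assumed_efficiency):
    """Starting quantity inferred from Cq under an assumed efficiency."""
    return THRESHOLD / (1.0 + assumed_efficiency) ** observed_cq

true_n0 = 1.0e4
cq_match = cq(true_n0, 0.95)      # well-matched primer
cq_mismatch = cq(true_n0, 0.80)   # mismatch lowers E and delays Cq

# Quantifying the mismatched reaction with the nominal efficiency
# underestimates the true template quantity severalfold:
print(estimated_copies(cq_match, 0.95))     # ~1.0e4 (correct)
print(estimated_copies(cq_mismatch, 0.95))  # ~2.1e3 (biased low)
```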

  2. TIMEKEEPING IN THE AMERICAS.

    PubMed

    López, J M; Lombardi, M A

    Time and its measurement belong to the most fundamental core of physics, and many scientific and technological advances are directly or indirectly related to time measurements. Timekeeping is essential to everyday life, and thus is the most measured physical quantity in modern societies. Time can also be measured with less uncertainty and more resolution than any other physical quantity. The measurement of time is of the utmost importance for many applications, including: global navigation satellite systems, communications networks, electric power generation, astronomy, electronic commerce, and national defense and security. This paper discusses how time is kept, coordinated, and disseminated in the Americas.

  3. Timekeeping in the Americas

    NASA Astrophysics Data System (ADS)

    López, J. M.; Lombardi, M. A.

    2015-10-01

    Time and its measurement belong to the most fundamental core of physics, and many scientific and technological advances are directly or indirectly related to time measurements. Timekeeping is essential to everyday life, and thus is the most measured physical quantity in modern societies. Time can also be measured with less uncertainty and more resolution than any other physical quantity. The measurement of time is of the utmost importance for many applications, including: global navigation satellite systems, communications networks, electric power generation, astronomy, electronic commerce, and national defense and security. This paper discusses how time is kept, coordinated, and disseminated in the Americas.

  4. TIMEKEEPING IN THE AMERICAS

    PubMed Central

    López, J. M.; Lombardi, M. A.

    2016-01-01

    Time and its measurement belong to the most fundamental core of physics, and many scientific and technological advances are directly or indirectly related to time measurements. Timekeeping is essential to everyday life, and thus is the most measured physical quantity in modern societies. Time can also be measured with less uncertainty and more resolution than any other physical quantity. The measurement of time is of the utmost importance for many applications, including: global navigation satellite systems, communications networks, electric power generation, astronomy, electronic commerce, and national defense and security. This paper discusses how time is kept, coordinated, and disseminated in the Americas. PMID:26973371

  5. Optimal order policy in response to announced price increase for deteriorating items with limited special order quantity

    NASA Astrophysics Data System (ADS)

    Ouyang, Liang-Yuh; Wu, Kun-Shan; Yang, Chih-Te; Yen, Hsiu-Feng

    2016-02-01

    When a supplier announces an impending price increase due to take effect at a certain time in the future, it is important for each retailer to decide whether to purchase additional stock to take advantage of the present lower price. This study explores the possible effects of price increases on a retailer's replenishment policy when the special order quantity is limited and the rate of deterioration of the goods is assumed to be constant. The two situations discussed in this study are as follows: (1) when the special order time coincides with the retailer's replenishment time and (2) when the special order time occurs during the retailer's sales period. By analysing the total cost savings between special and regular orders during the depletion time of the special order quantity, the optimal order policy for each situation can be determined. We provide several numerical examples to illustrate the theories in practice. Additionally, we conduct a sensitivity analysis on the optimal solution with respect to the main parameters.
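
    As a toy version of the cost comparison (ignoring deterioration, which the paper treats explicitly), the sketch below contrasts one special order at the old price with optimal EOQ replenishment at the increased price over the special order's depletion time. All parameter names and numbers are illustrative.

```python
import math

def special_order_savings(q, demand, old_price, new_price,
                          holding_rate, order_cost):
    """Savings from one special order of size q at the old price versus
    regular EOQ replenishment at the increased price, compared over the
    depletion time of q.  Deterioration is ignored in this toy version.

    holding_rate: holding cost per dollar of inventory per unit time.
    """
    t = q / demand                          # depletion time of q
    special = (old_price * q + order_cost
               + 0.5 * holding_rate * old_price * q * t)
    # Optimal regular policy at the new price: purchase cost plus the
    # classic EOQ ordering-plus-holding cost rate over the same horizon.
    eoq_cost_rate = math.sqrt(2.0 * order_cost * demand
                              * holding_rate * new_price)
    regular = new_price * q + eoq_cost_rate * t
    return regular - special

# Illustrative numbers: a 10% announced price increase.
print(special_order_savings(q=1200, demand=1000, old_price=10.0,
                            new_price=11.0, holding_rate=0.2,
                            order_cost=50.0))
```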

  6. Polynomial complexity despite the fermionic sign

    NASA Astrophysics Data System (ADS)

    Rossi, R.; Prokof'ev, N.; Svistunov, B.; Van Houcke, K.; Werner, F.

    2017-04-01

    It is commonly believed that in unbiased quantum Monte Carlo approaches to fermionic many-body problems, the infamous sign problem generically implies prohibitively large computational times for obtaining thermodynamic-limit quantities. We point out that for convergent Feynman diagrammatic series evaluated with a recently introduced Monte Carlo algorithm (see Rossi R., arXiv:1612.05184), the computational time increases only polynomially with the inverse error on thermodynamic-limit quantities.

  7. Correction of ultrasonic wave aberration with a time delay and amplitude filter.

    PubMed

    Måsøy, Svein-Erik; Johansen, Tonni F; Angelsen, Bjørn

    2003-04-01

    Two-dimensional simulations with propagation through two different heterogeneous human body wall models have been performed to analyze different correction filters for ultrasonic wave aberration due to forward wave propagation. The different models each produce most of the characteristic aberration effects such as phase aberration, relatively strong amplitude aberration, and waveform deformation. Simulations of wave propagation from a point source in the focus (60 mm) of a 20 mm transducer through the body wall models were performed. Center frequency of the pulse was 2.5 MHz. Corrections of the aberrations introduced by the two body wall models were evaluated with reference to the corrections obtained with the optimal filter: a generalized frequency-dependent phase and amplitude correction filter [Angelsen, Ultrasonic Imaging (Emantec, Norway, 2000), Vol. II]. Two correction filters were applied, a time delay filter, and a time delay and amplitude filter. Results showed that correction with a time delay filter produced substantial reduction of the aberration in both cases. A time delay and amplitude correction filter performed even better in both cases, and gave correction close to the ideal situation (no aberration). The results also indicated that the effect of the correction was very sensitive to the accuracy of the arrival time fluctuations estimate, i.e., the time delay correction filter.
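
    A minimal frequency-domain sketch of a time-delay-and-amplitude correction across array elements is given below, assuming per-element arrival-time and amplitude fluctuation estimates are already available; estimating those fluctuations is the hard part and is not shown. Setting all amplitudes to one reduces it to a pure time delay filter.

```python
import numpy as np

def correct_aberration(signals, delays, amplitudes, fs):
    """Time-delay-and-amplitude correction across array elements.

    signals:    (n_elements, n_samples) received element waveforms
    delays:     estimated arrival-time fluctuation per element (s);
                positive means the wavefront arrived late there
    amplitudes: estimated amplitude fluctuation per element
    fs:         sampling frequency (Hz)
    """
    signals = np.asarray(signals, dtype=float)
    n_el, n_s = signals.shape
    freqs = np.fft.rfftfreq(n_s, d=1.0 / fs)
    out = np.empty_like(signals)
    for k in range(n_el):
        spec = np.fft.rfft(signals[k])
        # Multiplying by exp(+2j*pi*f*tau) advances the waveform by tau,
        # undoing the arrival-time delay; dividing by the amplitude
        # fluctuation flattens the amplitude aberration.
        spec *= np.exp(2j * np.pi * freqs * delays[k]) / amplitudes[k]
        out[k] = np.fft.irfft(spec, n=n_s)
    return out
```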

  8. DFTBaby: A software package for non-adiabatic molecular dynamics simulations based on long-range corrected tight-binding TD-DFT(B)

    NASA Astrophysics Data System (ADS)

    Humeniuk, Alexander; Mitrić, Roland

    2017-12-01

    A software package, DFTBaby, is presented, which provides the electronic structure needed for running non-adiabatic molecular dynamics simulations at the level of tight-binding DFT. A long-range correction is incorporated to avoid spurious charge-transfer states. Excited-state energies, their analytic gradients, and scalar non-adiabatic couplings are computed using tight-binding TD-DFT. These quantities are fed into a molecular dynamics code, which integrates Newton's equations of motion for the nuclei together with the electronic Schrödinger equation. Non-adiabatic effects are included by surface hopping. As an example, the program is applied to the optimization of excited states and the non-adiabatic dynamics of polyfluorene. The Python and Fortran source code is available at http://www.dftbaby.chemie.uni-wuerzburg.de.

  9. Memory for Multiple Cache Locations and Prey Quantities in a Food-Hoarding Songbird

    PubMed Central

    Armstrong, Nicola; Garland, Alexis; Burns, K. C.

    2012-01-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities hidden from view in 2, 3, and 4 cache sites after delays of 1, 10, and 60 s. Overall, the results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items, for retention intervals of up to 1 min, without training. PMID:23293622

  10. Memory for multiple cache locations and prey quantities in a food-hoarding songbird.

    PubMed

    Armstrong, Nicola; Garland, Alexis; Burns, K C

    2012-01-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above chance expectations. They were equally able to discriminate between different combinations of prey quantities hidden from view in 2, 3, and 4 cache sites after delays of 1, 10, and 60 s. Overall, the results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items, for retention intervals of up to 1 min, without training.

  11. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruberti, M.; Averbukh, V.; Decleva, P.

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first-order [ADC(1)] and second-order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  12. Analysis of the neutron time-of-flight spectra from inertial confinement fusion experiments

    NASA Astrophysics Data System (ADS)

    Hatarik, R.; Sayre, D. B.; Caggiano, J. A.; Phillips, T.; Eckart, M. J.; Bond, E. J.; Cerjan, C.; Grim, G. P.; Hartouni, E. P.; Knauer, J. P.; Mcnaney, J. M.; Munro, D. H.

    2015-11-01

    Neutron time-of-flight diagnostics have long been used to characterize the neutron spectrum produced by inertial confinement fusion experiments. The primary diagnostic goals are to extract the d + t → n + α (DT) and d + d → n + ³He (DD) neutron yields and peak widths, and the amount of DT scattering relative to the unscattered yield, also known as the down-scatter ratio (DSR). These quantities are used to infer yield-weighted plasma conditions, such as ion temperature (Tion) and cold-fuel areal density. We report on novel methodologies used to determine the neutron yield, apparent Tion, and DSR. These methods invoke a single-temperature, static fluid model to describe the neutron peaks from DD and DT reactions and a spline description of the DT spectrum to determine the DSR. Both measurements are performed using a forward-modeling technique that includes corrections for line-of-sight attenuation and the impulse response of the detection system. These methods produce typical uncertainties of 250 eV for DT Tion, 7% for the DSR, and 9% for the DT neutron yield. For the DD values, the uncertainties are 290 eV for Tion and 10% for the neutron yield.

  13. Dual fuel injection piggyback controller system

    NASA Astrophysics Data System (ADS)

    Muji, Siti Zarina Mohd.; Hassanal, Muhammad Amirul Hafeez; Lee, Chua King; Fawzi, Mas; Zulkifli, Fathul Hakim

    2017-09-01

    Dual-fuel injection is an effort to reduce dependency on diesel and gasoline fuel. Generally, there are two approaches to implementing dual-fuel injection in a car. The first approach is to replace all of the engine's injectors, at excessively high cost. Alternatively, it can be achieved by manipulating the system's control signals, especially the Electronic Control Unit (ECU) signal. Hence, this study develops a dual injection timing controller system intended to control the injection timing and quantity of compressed natural gas (CNG) and diesel fuel. In this system, a Raspberry Pi 3 acts as the main controller unit: it receives the ECU signal, analyzes it, and manipulates its duty cycle before feeding it to the Electronic Driver Unit (EDU). The manipulation changes the duty cycle to two pulses instead of a single pulse: one pulse controls the injection of diesel fuel and the other controls the injection of CNG. Tests indicated promising results, suggesting that the system can be implemented in a car as a piggyback system.

  14. Orders of Magnitude Extension of the Effective Dynamic Range of TDC-Based TOFMS Data Through Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ipsen, Andreas; Ebbels, Timothy M. D.

    2014-10-01

    In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.
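
    For context, the classical first-order TDC saturation correction assumes Poisson ion arrivals and at most one recorded hit per extraction per bin, ignoring shadowing by earlier bins; the maximum likelihood treatment described above goes beyond this. A minimal sketch of the classical correction:

```python
import math

def corrected_rate(counts, n_pulses):
    """First-order TDC dead-time correction for one time bin.

    With at most one recorded ion per extraction and Poisson arrivals,
    the observed hit fraction p = counts / n_pulses relates to the
    true mean ions-per-extraction lambda via p = 1 - exp(-lambda).
    Shadowing by earlier bins is ignored here.
    """
    p = counts / n_pulses
    if p >= 1.0:
        raise ValueError("bin saturated in every extraction; "
                         "rate cannot be recovered")
    return -math.log(1.0 - p)

# 9000 hits in 10000 extractions -> lambda ~ 2.3 ions per extraction,
# far above the naive estimate of 0.9.
print(corrected_rate(9000, 10000))
```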

  15. Measuring the degradation level of polymer films subjected to partial discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozzo, R.; Gemme, C.; Guastavino, F.

    1996-12-31

    Polymer films have been subjected to partial discharge (PD) aging. It is shown that statistical quantities derived from partial discharge patterns can be related to test conditions, film characteristics, and degradation level. PDs have been measured by means of a digital system. Several resulting PD patterns have been elaborated, and about 50 derived and statistical quantities have been obtained for each pattern. The effects of the test conditions on the derived quantities have been studied with relevance to the following items: to recognize the kind of film under test; to correlate the values of the quantities with the degradation level of the film (i.e., to focus on the quantities which change with time); to find a link between the quantity values and the ambient test conditions (i.e., relative humidity); to determine the influence of the film thickness; and to evidence the effect of the voltage level.

  16. Intrinsic measures of field entropy in cosmological particle creation

    NASA Astrophysics Data System (ADS)

    Hu, B. L.; Pavon, D.

    1986-11-01

    Using the properties of quantum parametric oscillators, two quantities are identified which increase monotonically in time in the process of parametric amplification. The use of these quantities as possible measures of entropy generation in vacuum cosmological particle creation is suggested. These quantities, which are of complementary nature, are both related to the number of particles spontaneously created.

  17. Climate change and peak demand for electricity: Evaluating policies for reducing peak demand under different climate change scenarios

    NASA Astrophysics Data System (ADS)

    Anthony, Abigail Walker

    This research focuses on the relative advantages and disadvantages of using price-based and quantity-based controls for electricity markets. It also presents a detailed analysis of one specific approach to quantity-based control: the SmartAC program implemented in Stockton, California. Finally, the research forecasts electricity demand under various climate scenarios and estimates the potential cost savings that could result from a direct quantity-control program over the next 50 years in each scenario. The traditional approach to dealing with the problem of peak demand for electricity is to invest in a large stock of excess capital that is rarely used, thereby greatly increasing production costs. Because this approach has proved so expensive, there has been a focus on identifying alternative approaches for dealing with peak demand problems. This research focuses on two approaches: price-based approaches, such as real-time pricing, and quantity-based approaches, whereby the utility directly controls at least some elements of the electricity used by consumers. This research suggests that well-designed policies for reducing peak demand might include both price and quantity controls. In theory, sufficiently high prices during periods of peak demand and/or low supply can cause the quantity of electricity demanded to decline until demand is in balance with system capacity, potentially reducing the total amount of generation capacity needed to meet demand and helping meet electricity demand at the lowest cost. However, consumers need to be well informed about real-time prices for the pricing strategy to work as well as theory suggests. While this might be an appropriate assumption for large industrial and commercial users, who have potentially large economic incentives, there is not yet enough research on whether households will fully understand and respond to real-time prices. Thus, while real-time pricing can be an effective tool for addressing peak load problems, pricing approaches are not well suited to ensuring system reliability. This research shows that direct quantity controls are better suited to avoiding the catastrophic failure that results when demand exceeds supply capacity.

  18. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    PubMed

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes in performance, sales, markets, risks, social relations, or public opinions constitutes an important adaptive function. In a sequential paradigm devised to investigate the detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected, and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are, however, confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  19. Sensitivity of resistive and Hall measurements to local inhomogeneities: Finite-field, intensity, and area corrections

    NASA Astrophysics Data System (ADS)

    Koon, Daniel W.; Wang, Fei; Petersen, Dirch Hjorth; Hansen, Ole

    2014-10-01

    We derive exact, analytic expressions for the sensitivity of sheet resistance and Hall sheet resistance measurements to local inhomogeneities for the cases of nonzero magnetic fields, strong perturbations, and perturbations over a finite area, extending our earlier results on weak perturbations. We express these sensitivities for conductance tensor components and for other charge transport quantities. Both resistive and Hall sensitivities, for a van der Pauw specimen in a finite magnetic field, are a superposition of the zero-field sensitivities to both sheet resistance and Hall sheet resistance. Strong perturbations produce a nonlinear correction term that depends on the strength of the inhomogeneity. Solution of the specific case of a finite-sized circular inhomogeneity coaxial with a circular specimen suggests a first-order correction for the general case. Our results are confirmed by computer simulations on both a linear four-point probe array on a large circular disc and a van der Pauw square geometry. Furthermore, the results agree well with the experimental results published by Náhlík et al. for physical holes in a circular copper foil disc.
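
    For reference, the homogeneous-specimen baseline that these sensitivities perturb is the van der Pauw relation exp(-πR_A/R_s) + exp(-πR_B/R_s) = 1. A bisection solve is sketched below (a generic illustration, not the paper's code).

```python
import math

def sheet_resistance(r_a, r_b, lo=1e-9, hi=1e12, iters=100):
    """Solve exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 for R_s.

    The left-hand side increases monotonically with R_s, so simple
    bisection between a tiny and a huge trial value converges.
    """
    def f(rs):
        return (math.exp(-math.pi * r_a / rs)
                + math.exp(-math.pi * r_b / rs) - 1.0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Symmetric check: R_A = R_B = R gives R_s = pi * R / ln 2.
print(sheet_resistance(1.0, 1.0), math.pi / math.log(2))
```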

  20. Estimation of gloss from rough surface parameters

    NASA Astrophysics Data System (ADS)

    Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin

    2005-12-01

    Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation-function-dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to assess the quality of this approximation, and good agreement is observed over large regions of parameter space.
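
    The first of the two exponential factors is the classical Davies/Beckmann attenuation of the specular beam, exp[-(σ q_perp)²] with q_perp = 2(2π/λ)cosθ. The sketch below computes only this factor and omits the correlation-function-dependent collection-angle term derived in the paper.

```python
import math

def specular_attenuation(sigma, wavelength, theta_deg):
    """Davies/Beckmann factor exp(-(sigma * q_perp)^2) by which rms
    roughness sigma suppresses the specularly reflected intensity;
    q_perp = 2 * (2*pi/wavelength) * cos(theta) is the perpendicular
    momentum transfer at incidence angle theta."""
    q_perp = (2.0 * (2.0 * math.pi / wavelength)
              * math.cos(math.radians(theta_deg)))
    return math.exp(-(sigma * q_perp) ** 2)

# Example: 50 nm rms roughness, 633 nm light, 60-degree gloss geometry.
print(specular_attenuation(50e-9, 633e-9, 60.0))  # ~0.78
```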

  1. Measurement system

    NASA Technical Reports Server (NTRS)

    Turner, J. W. (Inventor)

    1973-01-01

    A measurement system is described for providing an indication of a varying physical quantity represented by or converted to a variable frequency signal. Timing pulses are obtained marking the duration of a fixed number, or set, of cycles of the sampled signal and these timing pulses are employed to control the period of counting of cycles of a higher fixed and known frequency source. The counts of cycles obtained from the fixed frequency source provide a precise measurement of the average frequency of each set of cycles sampled, and thus successive discrete values of the quantity being measured. The frequency of the known frequency source is made such that each measurement is presented as a direct digital representation of the quantity measured.
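
    This is the classic reciprocal counting scheme; a minimal sketch of the arithmetic, with illustrative numbers:

```python
def average_frequency(n_signal_cycles, ref_counts, f_ref):
    """Reciprocal counting: gate over a fixed number of cycles of the
    measured signal, count a known reference clock during the gate, and
    recover the average signal frequency for that set of cycles."""
    gate_time = ref_counts / f_ref       # duration of the cycle set (s)
    return n_signal_cycles / gate_time

# Gating 100 signal cycles while counting a 10 MHz reference yields
# 250000 counts -> 25 ms gate -> 4 kHz average frequency.
print(average_frequency(100, 250000, 10e6))  # 4000.0
```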

  2. Capturing the Large Scale Behavior of Many Particle Systems Through Coarse-Graining

    NASA Astrophysics Data System (ADS)

    Punshon-Smith, Samuel

    This dissertation is concerned with two areas of investigation: the first is understanding the mathematical structures behind the emergence of macroscopic laws and the effects of small scales fluctuations, the second involves the rigorous mathematical study of such laws and related questions of well-posedness. To address these areas of investigation the dissertation involves two parts: Part I concerns the theory of coarse-graining of many particle systems. We first investigate the mathematical structure behind the Mori-Zwanzig (projection operator) formalism by introducing two perturbative approaches to coarse-graining of systems that have an explicit scale separation. One concerns systems with little dissipation, while the other concerns systems with strong dissipation. In both settings we obtain an asymptotic series of `corrections' to the limiting description which are small with respect to the scaling parameter, these corrections represent the effects of small scales. We determine that only certain approximations give rise to dissipative effects in the resulting evolution. Next we apply this framework to the problem of coarse-graining the locally conserved quantities of a classical Hamiltonian system. By lumping conserved quantities into a collection of mesoscopic cells, we obtain, through a series of approximations, a stochastic particle system that resembles a discretization of the non-linear equations of fluctuating hydrodynamics. We study this system in the case that the transport coefficients are constant and prove well-posedness of the stochastic dynamics. Part II concerns the mathematical description of models where the underlying characteristics are stochastic. Such equations can model, for instance, the dynamics of a passive scalar in a random (turbulent) velocity field or the statistical behavior of a collection of particles subject to random environmental forces. First, we study general well-posedness properties of stochastic transport equation with rough diffusion coefficients. Our main result is strong existence and uniqueness under certain regularity conditions on the coefficients, and uses the theory of renormalized solutions of transport equations adapted to the stochastic setting. Next, in a work undertaken with collaborator Scott-Smith we study the Boltzmann equation with a stochastic forcing. The noise describing the forcing is white in time and colored in space and describes the effects of random environmental forces on a rarefied gas undergoing instantaneous, binary collisions. Under a cut-off assumption on the collision kernel and a coloring hypothesis for the noise coefficients, we prove the global existence of renormalized (DiPerna/Lions) martingale solutions to the Boltzmann equation for large initial data with finite mass, energy, and entropy. Our analysis includes a detailed study of weak martingale solutions to a class of linear stochastic kinetic equations. Tightness of the appropriate quantities is proved by an extension of the Skorohod theorem to non-metric spaces.

  3. SMOS+RAINFALL: Evaluating the ability of different methodologies to improve rainfall estimations using soil moisture data from SMOS

    NASA Astrophysics Data System (ADS)

    Pellarin, Thierry; Brocca, Luca; Crow, Wade; Kerr, Yann; Massari, Christian; Román-Cascón, Carlos; Fernández, Diego

    2017-04-01

    Recent studies have demonstrated the usefulness of satellite-retrieved soil moisture for improving the rainfall estimates of satellite-based precipitation products (SBPP). The real-time versions of these products are known to be biased relative to the precipitation observed at the ground. The information contained in soil moisture can therefore be used to correct the inaccuracy and uncertainty of these products, since the value and behavior of this soil variable preserve the signature of a rain event, even for several days. In this work, we take advantage of soil moisture data from the Soil Moisture and Ocean Salinity (SMOS) satellite, which provides information at a temporal and spatial resolution well suited to correcting rainfall estimates. Specifically, we test and compare the ability of three different methodologies for this aim: (1) SM2RAIN, which directly relates changes in soil moisture to rainfall quantities; (2) the LMAA methodology, which is based on the assimilation of soil moisture into two models of different complexity (see EGU2017-5324 in this same session); and (3) the SMART method, based on the assimilation of soil moisture into a simple hydrological model with a different assimilation/modelling technique. The results are tested over 6 years at 10 sites around the world with different features (land surface, rainfall climatology, orographic complexity, etc.). These preliminary and promising results are shown here for the first time to the scientific community, as are the observed limitations of the different methodologies. Specific remarks on the technical configurations, filtering/smoothing of SMOS soil moisture, and re-scaling techniques are also provided, based on the results of different sensitivity experiments.
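
    For the first method, a commonly cited form of the SM2RAIN inversion estimates rainfall from the soil-water balance as p(t) ≈ Z·ds/dt + a·s^b, with drainage parameterized as a·s^b and evapotranspiration neglected. A sketch under those assumptions, with illustrative parameter values:

```python
import numpy as np

def sm2rain(s, dt, z, a, b):
    """Bottom-up rainfall estimate from a saturation time series s(t):
    p(t) ~ Z * ds/dt + a * s**b, i.e. a soil-water balance with
    drainage parameterised as a*s**b and evapotranspiration neglected.

    s:  relative saturation (0..1) sampled every dt (hours)
    z:  soil water capacity Z (mm); a, b: drainage parameters
    """
    ds_dt = np.gradient(s, dt)            # finite-difference derivative
    p = z * ds_dt + a * s ** b
    return np.clip(p, 0.0, None)          # rainfall cannot be negative

# Synthetic wetting event: saturation rises from 0.3 to 0.6 over a day.
s = np.concatenate([np.full(24, 0.3), np.linspace(0.3, 0.6, 24),
                    np.full(24, 0.6)])
print(sm2rain(s, dt=1.0, z=80.0, a=0.5, b=3.0).round(2))
```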

  4. The Orbits and Masses of Pluto's Satellites

    NASA Astrophysics Data System (ADS)

    Jacobson, Robert A.; Brozovic, M.

    2012-10-01

    We have fit numerically integrated orbits of Pluto's satellites, Charon, Nix, Hydra, and S/2011 (134340) 1, to an extensive set of astrometric, mutual event, and stellar occultation observations over the time interval April 1965 to July 2011. We did not include the newly discovered satellite S/2012 (134340) 1 because its observation set is insufficient to constrain a numerically integrated orbit. The data set contains all of the HST observations of Charon relative to Pluto, which have been corrected for the offset between Pluto's center of figure and center of light (COF) caused by Pluto's albedo variations (Buie et al. 2012 AJ submitted). Buie et al. (2010 AJ 139, 1117 and 1128) discuss the development of the albedo model and the COF offset. We applied COF offset corrections to the remainder of the Pluto-relative observations where applicable. The dual stellar occultations in 2008 and 2011 provided precise Pluto-Charon relative positions. We obtain a well-determined value for the Pluto system mass; however, the lack of orbital resonances in the system makes it difficult to determine the satellite masses. The primary source of information for the Charon mass is a small quantity of absolute position measurements which are sensitive to the independent motions of Pluto and Charon about the system barycenter. The long-term dynamical interaction among the satellites yields a weak determination of Hydra's mass; the masses of the other two satellites are found to be small but indeterminate. We have delivered ephemerides based on our integrated orbits to the New Horizons project, along with their expected uncertainties at the time of the New Horizons encounter with the Pluto system. Acknowledgments: The research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

  5. A Comparison Between Modeled and Measured Clear-Sky Radiative Shortwave Fluxes in Arctic Environments, with Special Emphasis on Diffuse Radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnard, James C.; Flynn, Donna M.

    2002-10-08

    The ability of the SBDART radiative transfer model to predict clear-sky diffuse and direct normal broadband shortwave irradiances is investigated. Model calculations of these quantities are compared with data from the Atmospheric Radiation Measurement (ARM) program’s Southern Great Plains (SGP) and North Slope of Alaska (NSA) sites. The model tends to consistently underestimate the direct normal irradiances at both sites by about 1%. Regarding clear-sky diffuse irradiance, the model overestimates this quantity at the SGP site in a manner similar to what has been observed in other studies (Halthore and Schwartz, 2000). The difference between the diffuse SBDART calculations and Halthore and Schwartz’s MODTRAN calculations is very small, thus demonstrating that SBDART performs similarly to MODTRAN. SBDART is then applied to the NSA site, and here it is found that the discrepancy between the model calculations and corrected diffuse measurements (corrected for daytime offsets, Dutton et al., 2001) is 0.4 W/m2 when averaged over the 12 cases considered here. Two cases of diffuse measurements from a shaded “black and white” pyranometer are also compared with the calculations, and the discrepancy is again minimal. Thus, it appears as if the “diffuse discrepancy” that exists at the SGP site does not exist at the NSA sites. We cannot yet explain why the model predicts diffuse radiation well at one site but not at the other.

  6. Cytological preparations for molecular analysis: A review of technical procedures, advantages and limitations for referring samples for testing.

    PubMed

    da Cunha Santos, G; Saieg, M A; Troncone, G; Zeppa, P

    2018-04-01

    Minimally invasive procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) must yield not only good quality and quantity of material for morphological assessment, but also an adequate sample for the analysis of molecular markers to guide patients to appropriate targeted therapies. In this context, cytopathologists worldwide should be familiar with the minimum requirements for referring cytological samples for testing. The present manuscript is a review comprehensively describing the content of the workshop entitled Cytological preparations for molecular analysis: pre-analytical issues for EBUS TBNA, presented at the 40th European Congress of Cytopathology in Liverpool, UK. The present review emphasises the advantages and limitations of different types of cytology substrates used for molecular analysis, such as archival smears, liquid-based preparations, archival cytospin preparations and FTA (Flinders Technology Associates) cards, as well as their technical requirements/features. These various types of cytological specimens can be successfully used for an extensive array of molecular studies, but the quality and quantity of extracted nucleic acids rely directly on adequate pre-analytical assessment of those samples. In this setting, cytopathologists must not only be familiar with the different types of specimens and associated technical procedures, but also correctly handle the material provided by minimally invasive procedures, ensuring that there is a sufficient amount of material for a precise diagnosis and correct management of the patient through personalised care.

  7. Naval Observatory Vector Astrometry Software (NOVAS) Version 3.1, Introducing a Python Edition

    NASA Astrophysics Data System (ADS)

    Barron, Eric G.; Kaplan, G. H.; Bangert, J.; Bartlett, J. L.; Puatua, W.; Harris, W.; Barrett, P.

    2011-01-01

    The Naval Observatory Vector Astrometry Software (NOVAS) is a source-code library that provides common astrometric quantities and transformations. NOVAS calculations are accurate at the sub-milliarcsecond level. The library can supply, in one or two subroutine or function calls, the instantaneous celestial position of any star or planet in a variety of coordinate systems. NOVAS also provides access to all of the building blocks that go into such computations. NOVAS Version 3.1 introduces a Python edition alongside the Fortran and C editions. The Python edition uses the computational code from the C edition and, currently, mimics the function calls of the C edition. Future versions will expand the functionality of the Python edition to harness the object-oriented nature of the Python language, and will implement the ability to handle large quantities of objects or observers using the array functionality in NumPy (a third-party scientific package for Python). NOVAS 3.1 also adds a module to transform GCRS vectors to the ITRS; the ITRS to GCRS transformation was already provided in NOVAS 3.0. The module that corrects an ITRS vector for polar motion has been modified to undo that correction upon demand. In the C edition, the ephemeris-access functions have been revised for use on 64-bit systems and for improved performance in general. NOVAS, including documentation, is available from the USNO website (http://www.usno.navy.mil/USNO/astronomical-applications/software-products/novas).

  8. The Role of Perceived Injunctive Alcohol Norms in Adolescent Drinking Behavior

    PubMed Central

    Pedersen, Eric R.; Osilla, Karen Chan; Miles, Jeremy N.V.; Tucker, Joan S.; Ewing, Brett A.; Shih, Regina A.; D’Amico, Elizabeth J.

    2016-01-01

    Peers have a major influence on youth during adolescence, and perceptions about peer alcohol use (perceived norms) are often associated with personal drinking behavior among youth. Most of the research on perceived norms among adolescents focuses on perceived descriptive norms only, or perceptions about peers’ behavior, and correcting these perceptions is a major focus of many prevention programs with adolescents. In contrast, perceived injunctive norms, which are personal perceptions about peers’ attitudes regarding the acceptability of behaviors, have been minimally examined in the adolescent drinking literature. Yet correcting perceptions about these peer attitudes may be an important component to include in prevention programs with youth. Using a sample of 2,493 high-school-aged youth (mean age = 17.3), we assessed drinking behavior (past-year use; past-month frequency, quantity, and peak drinks), drinking consequences, and perceived descriptive and injunctive norms to examine the relationships of perceived injunctive and descriptive norms with adolescent drinking behavior. Findings indicated that although perceived descriptive norms were associated with some drinking outcomes (past-year use; past-month frequency; past-month quantity; peak drinks), perceived injunctive norms were associated with all drinking outcomes, including drinking consequences, even after controlling for perceived descriptive norms. Findings suggest that consideration of perceived injunctive norms may be important in models of adolescent drinking. Prevention programs that do not include injunctive norms feedback may miss an important opportunity to enhance the effectiveness of prevention programs targeting adolescent alcohol use. PMID:27978424

  9. Food and nutritional security requires adequate protein as well as energy, delivered from whole-year crop production.

    PubMed

    Coles, Graeme D; Wratten, Stephen D; Porter, John R

    2016-01-01

    Human food security requires the production of sufficient quantities of both high-quality protein and dietary energy. In a series of case-studies from New Zealand, we show that while production of food ingredients from crops on arable land can meet human dietary energy requirements effectively, requirements for high-quality protein are met more efficiently by animal production from such land. We present a model that can be used to assess dietary energy and quality-corrected protein production from various crop and crop/animal production systems, and demonstrate its utility. We extend our analysis with an accompanying economic analysis of commercially-available, pre-prepared or simply-cooked foods that can be produced from our case-study crop and animal products. We calculate the per-person, per-day cost of both quality-corrected protein and dietary energy as provided in the processed foods. We conclude that mixed dairy/cropping systems provide the greatest quantity of high-quality protein per unit price to the consumer, have the highest food energy production and can support the dietary requirements of the highest number of people, when assessed as all-year-round production systems. Global food and nutritional security will largely be an outcome of national or regional agroeconomies addressing their own food needs. We hope that our model will be used for similar analyses of food production systems in other countries, agroecological zones and economies.

  10. Observational Constraints on Models of the Universe with Time Variable Gravitational and Cosmological Constants Along MOG

    NASA Astrophysics Data System (ADS)

    Khurshudyan, M.; Mazhari, N. S.; Momeni, D.; Myrzakulov, R.; Raza, M.

    2015-02-01

    The subject of this paper is to investigate the weak-regime covariant scalar-tensor-vector gravity (STVG) theory, known as the MOdified Gravity (MOG) theory. First, we show that MOG in the absence of scalar fields reduces to Λ(t), G(t) models. The time evolution of the cosmological parameters for a family of viable models has been investigated, and the numerical results have been fitted to cosmological data. We introduce a model for the dark energy (DE) density and the cosmological constant which involves first-order derivatives of the Hubble parameter. To extend this model, correction terms involving the gravitational constant are added. In our scenario, the cosmological constant is a function of time. To complete the model, interaction terms between dark energy and dark matter (DM) are entered manually in phenomenological form. Instead of using the dust model for DM, we propose DM equivalent to a barotropic fluid, so that the time evolution of DM is a function of the other cosmological parameters. The behavior of various quantities, including the densities and the Hubble parameter, has been investigated graphically. The statefinder parameters have been used for the classification of DE models. Consistency of the numerical results with the experimental data of SNe Ia + BAO + CMB is studied by numerical analysis with high accuracy.

  11. Intrafraction Prostate Translations and Rotations During Hypofractionated Robotic Radiation Surgery: Dosimetric Impact of Correction Strategies and Margins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Water, Steven van de, E-mail: s.vandewater@erasmusmc.nl; Valli, Lorella; Alma Mater Studiorum, Department of Physics and Astronomy, Bologna University, Bologna

    Purpose: To investigate the dosimetric impact of intrafraction prostate motion and the effect of robot correction strategies for hypofractionated CyberKnife treatments with a simultaneously integrated boost. Methods and Materials: A total of 548 real-time prostate motion tracks from 17 patients were available for dosimetric simulations of CyberKnife treatments, in which various correction strategies were included. Fixed time intervals between imaging/correction (15, 60, 180, and 360 seconds) were simulated, as well as adaptive timing (ie, the time interval reduced from 60 to 15 seconds in case prostate motion exceeded 3 mm or 2° in consecutive images). The simulated extent of robot corrections was also varied: no corrections, translational corrections only, and translational corrections combined with rotational corrections up to 5°, 10°, and perfect rotational correction. The correction strategies were evaluated for treatment plans with a 0-mm or 3-mm margin around the clinical target volume (CTV). We recorded CTV coverage (V100%) and dose-volume parameters of the peripheral zone (boost), rectum, bladder, and urethra. Results: Planned dose parameters were increasingly preserved with larger extents of robot corrections. A time interval between corrections of 60 to 180 seconds provided optimal preservation of CTV coverage. To achieve 98% CTV coverage in 98% of the treatments, translational and rotational corrections up to 10° were required for the 0-mm margin plans, whereas translational and rotational corrections up to 5° were required for the 3-mm margin plans. Rectum and bladder were spared considerably better in the 0-mm margin plans. Adaptive timing did not improve delivered dose. Conclusions: Intrafraction prostate motion substantially affected the delivered dose but was compensated for effectively by robot corrections using a time interval of 60 to 180 seconds. A 0-mm margin required larger extents of additional rotational corrections than a 3-mm margin but resulted in lower doses to rectum and bladder.

  12. The Dynamics of Pheromone Gland Synthesis and Release: a Paradigm Shift for Understanding Sex Pheromone Quantity in Female Moths.

    PubMed

    Foster, Stephen P; Anderson, Karin G; Casas, Jérôme

    2018-05-10

    Moths are exemplars of chemical communication, especially with regard to specificity and the minute amounts they use. Yet, little is known about how females manage synthesis and storage of pheromone to maintain release rates attractive to conspecific males and why such small amounts are used. We developed, for the first time, a quantitative model, based on an extensive empirical data set, describing the dynamical relationship among synthesis, storage (titer) and release of pheromone over time in a moth (Heliothis virescens). The model is compartmental, with one major state variable (titer), one time-varying (synthesis), and two constant (catabolism and release) rates. The model was a good fit, suggesting it accounted for the major processes. Overall, we found the relatively small amounts of pheromone stored and released were largely a function of high catabolism rather than a low rate of synthesis. A paradigm shift may be necessary to understand the low amounts released by female moths, away from the small quantities synthesized to the (relatively) large amounts catabolized. Future research on pheromone quantity should focus on structural and physicochemical processes that limit storage and release rate quantities. To our knowledge, this is the first time that pheromone gland function has been modeled for any animal.
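
    The compartmental structure described above suggests a single ODE, dT/dt = S(t) - (k_cat + k_rel)·T, with release rate k_rel·T. A forward-Euler sketch with hypothetical rate names and illustrative values (catabolism dominating release, as the findings suggest):

```python
def simulate_gland(synthesis, k_cat, k_rel, dt, t_end):
    """One-compartment gland model: the pheromone titer T obeys
    dT/dt = S(t) - (k_cat + k_rel) * T, with release rate k_rel * T.
    Integrated here with a simple forward-Euler step."""
    titer, out = 0.0, []
    for i in range(int(t_end / dt)):
        t = i * dt
        titer += (synthesis(t) - (k_cat + k_rel) * titer) * dt
        out.append((t, titer, k_rel * titer))  # (time, titer, release)
    return out

# With k_cat >> k_rel, the steady-state titer S / (k_cat + k_rel)
# stays small even though synthesis is substantial.
trace = simulate_gland(lambda t: 5.0, k_cat=0.4, k_rel=0.05,
                       dt=0.01, t_end=24.0)
print(trace[-1])
```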

  13. Effects of quantity, quality, and contact time of dissolved organic matter on bioconcentration of benzo[a]pyrene in the nematode Caenorhabditis elegans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haitzer, M.; Hoess, S.; Burnison, B.K.

    1999-03-01

    Quantity and quality of dissolved organic matter (DOM) and the time allowed for DOM to interact with organic contaminants can influence their bioavailability. The authors studied the effect of natural aquatic DOM that had been in contact with benzo[a]pyrene (B[a]P) for 1 to 12 d on the bioconcentration of B[a]P in the nematode Caenorhabditis elegans. Dissolved organic matter quality and quantity were varied by using DOM from three different sources, each in three different concentrations. A model, based on the assumption that only freely dissolved B[a]P is bioavailable, was employed to estimate biologically determined partition coefficients [Kp(biol.)]. Expressing the data for each combination of DOM source and contact time in a single Kp(biol.) value allowed a direct comparison of the effects of different DOM qualities and contact times. The results show that the effect of DOM from a specific source was dependent on DOM quantity, but the authors also observed a distinct effect of DOM quality (represented by different sampling locations) on the bioconcentration of B[a]P. Contact time had no significant influence on the effects of two of the DOM sources on the bioconcentration of B[a]P. However, the third DOM source was significantly more effective with increased contact time, leading to lower B[a]P bioconcentration in the nematodes.
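
    Under the stated assumption that only freely dissolved B[a]P is bioavailable, the partitioning model reduces to a freely dissolved fraction f = 1/(1 + Kp·[DOM]). A sketch with assumed units (Kp in L/kg, DOM in kg/L) and illustrative numbers:

```python
def freely_dissolved_fraction(k_p, dom):
    """Fraction of B[a]P not bound to dissolved organic matter,
    given a partition coefficient k_p (L/kg) and a DOM concentration
    dom (kg/L); only this fraction is assumed bioavailable."""
    return 1.0 / (1.0 + k_p * dom)

def apparent_bcf(bcf_free, k_p, dom):
    """Bioconcentration factor observed in the presence of DOM."""
    return bcf_free * freely_dissolved_fraction(k_p, dom)

# Illustrative numbers: K_p = 1e5 L/kg and 10 mg/L of DOM halve the
# bioavailable fraction, and hence the apparent bioconcentration.
print(apparent_bcf(bcf_free=5000.0, k_p=1e5, dom=10e-6))  # 2500.0
```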

  14. Assessing Near-surface Heat, Water Vapor and Carbon Dioxide Exchange Over a Coastal Salt-marsh

    NASA Astrophysics Data System (ADS)

    Bogoev, I.; O'Halloran, T. L.; LeMoine, J.

    2017-12-01

    Coastal ecosystems play an important role in mitigating the effects of climate change by storing significant quantities of carbon. A growing number of studies suggest that vegetated estuarine habitats, specifically salt marshes, have high long-term rates of carbon sequestration, perhaps even higher than mature tropical and temperate forests. Large amounts of carbon, accumulated over thousands of years, are stored in the plant materials and sediment. Improved understanding of the factors that control energy and carbon exchange is needed to better guide restoration and conservation management practices. To that end, we recently established an observation system to study marsh-atmosphere interactions within the North Inlet-Winyah Bay National Estuarine Research Reserve. Near-surface fluxes of heat, water vapor (H2O) and carbon dioxide (CO2) were measured by an eddy-covariance system consisting of an aerodynamic open-path H2O / CO2 gas analyzer with a spatially integrated 3D sonic anemometer/thermometer (IRGASON). The IRGASON instrument provides co-located and highly synchronized, fast response H2O, CO2 and air- temperature measurements, which eliminates the need for spectral corrections associated with the separation between the sonic anemometer and the gas analyzer. This facilitates calculating the instantaneous CO2 molar mixing ratio relative to dry air. Fluxes computed from CO2 and H2O mixing ratios, which are conserved quantities, do not require post-processing corrections for air-density changes associated with temperature and water vapor fluctuations. These corrections are particularly important for CO2, because they could be even larger than the measured flux. Here we present the normalized frequency spectra of air temperature, water vapor and CO2, as well as their co-spectra with the co-located vertical wind. We also show mean daily cycles of sensible, latent and CO2 fluxes and analyze correlations with air/water temperature, wind speed and light availability.
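
    The point about conserved quantities can be made concrete: with the dry-air molar mixing ratio, the flux is simply the covariance with the vertical wind scaled by the dry-air density, with no WPL density correction needed. A minimal sketch with synthetic arrays:

```python
import numpy as np

def eddy_flux(w, c, rho_dry):
    """Eddy-covariance flux of a conserved scalar: F = rho_dry * <w'c'>.

    w: vertical wind (m/s); c: dry-air molar mixing ratio of CO2
    (e.g. umol CO2 per mol dry air); rho_dry: molar density of dry
    air (mol/m^3).  Because c is conserved under heating and water
    vapour fluctuations, no density (WPL) correction is required.
    """
    w_p = w - w.mean()
    c_p = c - c.mean()
    return rho_dry * np.mean(w_p * c_p)  # e.g. umol m^-2 s^-1

# Synthetic 20 Hz half-hour record (36000 samples).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 36000)
c = 400.0 + 0.5 * w + rng.normal(0.0, 0.2, 36000)  # correlated scalar
print(eddy_flux(w, c, rho_dry=41.6))
```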

  15. How Well Do Molecular and Pedigree Relatedness Correspond, in Populations with Diverse Mating Systems, and Various Types and Quantities of Molecular and Demographic Data?

    PubMed

    Kopps, Anna M; Kang, Jungkoo; Sherwin, William B; Palsbøll, Per J

    2015-06-30

    Kinship analyses are important pillars of ecological and conservation genetic studies with potentially far-reaching implications. There is a need for power analyses that address a range of possible relationships. Nevertheless, such analyses are rarely applied, and studies that use genetic-data-based kinship inference often ignore the influence of intrinsic population characteristics. We investigated 11 questions regarding the correct classification rate of dyads to relatedness categories (relatedness category assignments; RCA) using an individual-based model with realistic life history parameters. We investigated the effects of the number of genetic markers; marker type (microsatellite, single nucleotide polymorphism (SNP), or both); minor allele frequency; typing error; mating system; and the number of overlapping generations under different demographic conditions. We found that (i) an increasing number of genetic markers increased the correct classification rate of the RCA, so that more than 80% of first cousins can be correctly assigned; (ii) the minimum number of genetic markers required for assignments with 80% and 95% correct classification differed between relatedness categories, mating systems, and numbers of overlapping generations; (iii) the correct classification rate was improved by adding additional relatedness categories and age and mitochondrial DNA data; and (iv) a combination of microsatellite and single-nucleotide polymorphism data increased the correct classification rate if <800 SNP loci were available. This study shows how intrinsic population characteristics, such as mating system and the number of overlapping generations, life history traits, and genetic marker characteristics can influence the correct classification rate of an RCA study. Therefore, species-specific power analyses are essential for empirical studies.

  16. SI units.

    PubMed

    Lehmann, H P

    1979-01-01

    The development of the International System of Units (Système International d'Unités--SI units), based on seven fundamental quantities--length, mass, time, electric current, thermodynamic temperature, luminous intensity, and amount of substance--is described. Units (coherent and noncoherent) for other measurable quantities that are derived from the seven basic quantities are reviewed. The rationale for the use of SI units in medicine, primarily as applied to clinical laboratory data, is discussed, and arguments are presented for the rigid adoption of SI units in medicine and for exceptions. Tables are given for the basic and derived SI units used in medicine and for conversion factors from the quantities and units in current use to those in SI units.

  17. Development and validation of a rebinner with rigid motion correction for the Siemens PET-MR scanner: Application to a large cohort of [11C]-PIB scans.

    PubMed

    Reilhac, Anthonin; Merida, Ines; Irace, Zacharie; Stephenson, Mary; Weekes, Ashley; Chen, Christopher; Totman, John; Townsend, David W; Fayad, Hadi; Costes, Nicolas

    2018-04-13

    Objective: Head motion occurring during brain PET studies leads to image blurring and to bias in measured local quantities. Our first objective was to implement an accurate list-mode-based rigid motion correction method for PET data acquired with the mMR synchronous Positron Emission Tomography/Magnetic Resonance (PET/MR) scanner. Our second objective was to optimize the correction for [11C]-PIB scans using simulated and actual data with well-controlled motions. Results: An efficient list-mode-based motion correction approach has been implemented, fully optimized and validated using simulated as well as actual PET data. The average spatial resolution loss induced by inaccuracies in motion parameter estimates as well as by the rebinning process was estimated to correspond to a 1 mm increase in Full Width at Half Maximum (FWHM), with motion parameters estimated directly from the PET data at a temporal frequency of 20 s. The results show that the method can be safely applied to [11C]-PIB scans, allowing almost complete removal of motion-induced artifacts. The application of the correction method to a large cohort of [11C]-PIB scans led to the following observations: (i) more than 21% of the scans were affected by a motion greater than 10 mm (39% for subjects with Mini-Mental State Examination (MMSE) scores below 20) and (ii) the correction led to quantitative changes in Alzheimer-specific cortical regions of up to 30%. Conclusion: The rebinner allows an accurate motion correction at the cost of a minimal resolution reduction. The application of the correction to a large cohort of [11C]-PIB scans confirmed the necessity to systematically correct for motion to obtain quantitative results. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  18. An Algebraic Approach to Unital Quantities and their Measurement

    NASA Astrophysics Data System (ADS)

    Domotor, Zoltan; Batitsky, Vadim

    2016-06-01

    The goals of this paper fall into two closely related areas. First, we develop a formal framework for deterministic unital quantities in which measurement unitization is understood to be a built-in feature of quantities rather than a mere annotation of their numerical values with convenient units. We introduce this idea within the setting of certain ordered semigroups of physical-geometric states of classical physical systems. States are assumed to serve as truth makers of metrological statements about quantity values. A unital quantity is presented as an isomorphism from the target system's ordered semigroup of states to that of positive reals. This framework allows us to include various derived and variable quantities, encountered in engineering and the natural sciences. For illustration and ease of presentation, we use the classical notions of length, time, electric current and mean velocity as primordial examples. The most important application of the resulting unital quantity calculus is in dimensional analysis. Second, in evaluating measurement uncertainty due to the analog-to-digital conversion of the measured quantity's value into its measuring instrument's pointer quantity value, we employ an ordered semigroup framework of pointer states. Pointer states encode the measuring instrument's indiscernibility relation, manifested by not being able to distinguish the measured system's topologically proximal states. Once again, we focus mainly on the measurement of length and electric current quantities as our motivating examples. Our approach to quantities and their measurement is strictly state-based and algebraic in flavor, rather than that of a representationalist-style structure-preserving numerical assignment.

  19. Difference in quantity discrimination in dogs and wolves.

    PubMed

    Range, Friederike; Jenikejew, Julia; Schröder, Isabelle; Virányi, Zsófia

    2014-01-01

    Certain aspects of social life, such as engaging in intergroup conflicts, as well as challenges posed by the physical environment, may facilitate the evolution of quantity discrimination. In the absence of extensive comparative data, one can only hypothesize about its evolutionary origins, but human-raised wolves performed well when they had to choose the larger of two sets of 1-4 food items that had been sequentially placed into two opaque cans. Since in such paradigms the animals never see the entire content of either can, their decisions are thought to rely on mental representation of the two quantities rather than on perceptual factors such as the overall volume or surface area of the two amounts. By equalizing the time that it takes to enter each quantity into the cans or the number of items entered, one can further rule out the possibility that animals simply choose based on the amount of time needed to present the two quantities. While the wolves performed well even in such a control condition, dogs failed to choose the larger of two invisible quantities in another study using a similar paradigm. Because this disparity could be explained by procedural differences, in the current study we set out to test dogs that were raised and kept identically to the previously tested wolves, using the same set-up and procedure. Our results confirm the former finding that dogs, in comparison to wolves, have inferior skills in representing quantities mentally. This seems to be in line with Frank's (1980) hypothesis suggesting that domestication altered the information processing of dogs. However, as discussed, alternative explanations may also exist.

  20. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
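
    The grid-refinement procedure described above follows the standard Richardson extrapolation / grid convergence index (GCI) recipe, which a few lines of Python make explicit. This is a generic textbook sketch, not the authors' code; the three solution values, the refinement ratio and the safety factor are invented for illustration.

        import math

        def richardson_gci(phi_fine, phi_med, phi_coarse, r=2.0, Fs=1.25):
            # observed order of accuracy from three solutions on grids h, r*h, r^2*h
            p = math.log(abs(phi_coarse - phi_med) / abs(phi_med - phi_fine)) / math.log(r)
            # Richardson-extrapolated (grid-independent) estimate
            phi_exact = phi_fine + (phi_fine - phi_med) / (r**p - 1.0)
            # relative error and fine-grid GCI (Fs is a safety factor)
            e_rel = abs((phi_fine - phi_med) / phi_fine)
            gci_fine = Fs * e_rel / (r**p - 1.0)
            return p, phi_exact, gci_fine

        p, phi_ex, gci = richardson_gci(phi_fine=0.452, phi_med=0.460, phi_coarse=0.478)
        print(f"observed order p = {p:.2f}, extrapolated value = {phi_ex:.4f}, "
              f"GCI = {100*gci:.2f}%")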

  1. Real-time planning/replanning of ongoing operations in a crisis situation

    NASA Astrophysics Data System (ADS)

    Griffith, David A.; Smith, Gregory M.

    1997-02-01

    The ability to examine the planned position and movement of police vehicles, personnel, weapons, and status of police assets is an implied requirement in the conduct of police activities. Displays showing the time police vehicles leave on assignment, stops along their route, time of return to station, quantity of vehicles, types of weapons, radio frequencies, and other pertinent information could help in crisis situations. It would be especially helpful if it were easily accessible and simple to understand. Rome Laboratory developed a system for monitoring interrelated planned events and for changing these events to correct for deviations in the plan. The system is called force level execution (FLEX), and it displays information on timing charts, tables, and a geographic map background. The FLEX graphics enhance the military commander's ability to grasp the tactical situation, which typically includes 2000 to 3000 air sorties per day. A sortie is a single flight of a single aircraft. Because the 'fog of war' causes unexpected events, status reports are needed, replanning options are generated, and new plans are issued to correct for these unexpected events. The authors believe there are law enforcement and other crisis situations that are analogous to some military scenarios. These may include state police operating over large geographical areas, coordination with county police operating over somewhat smaller areas, and coordination with the county sheriff's office and city police, not only for criminal apprehension but also for disaster relief. Other participants in a crisis situation may include fire departments, ambulances, emergency medical vehicles, hospitals, rescue operations, etc. The positions of police vehicles, foot patrolmen, helicopters and emergency vehicles can all be superimposed upon a map background, with appropriate cultural features such as roads, rivers, bridges, and state and county boundaries. When police vehicles incorporate the global positioning system (GPS), an automated status display could potentially show the exact locations of these vehicles in real time. This paper shows how the Air Force is using this technology and how, in the authors' opinion, FLEX might be adapted to law enforcement and disaster relief situations.

  2. Theoretical models for the regulation of DNA replication in fast-growing bacteria

    NASA Astrophysics Data System (ADS)

    Creutziger, Martin; Schmidt, Mischa; Lenz, Peter

    2012-09-01

    Growing in ever-changing environments, Escherichia coli cells are challenged by the task of coordinating growth and division. In particular, adaptation of their growth program to the surrounding medium has to guarantee that the daughter cells obtain fully replicated chromosomes. Replication therefore has to be initiated at the right time, which is particularly challenging in media that support fast growth. Here, the mother cell initiates replication not only for the daughter but also for the granddaughter cells. This is possible only if replication occurs from several replication forks that all need to be correctly initiated. Despite considerable efforts during the last 40 years, regulation of this process is still unknown. Part of the difficulty arises from the fact that many details of the relevant molecular processes are not known. Here, we develop a novel theoretical strategy for dealing with this general problem: instead of analyzing a single model, we introduce a wide variety of 128 different models that make different assumptions about the unknown processes. By comparing the predictions of these models we are able to identify the key quantities that allow the experimental discrimination of the different models. Analysis of these quantities yields that, out of the 128 models, 94 are not consistent with available experimental data. From the remaining 34 models we are able to conclude that mass growth and DNA replication need to be truly coupled, either by tying DNA replication initiation to the event of cell division or by tying it to the amount of accumulated mass. Finally, we make suggestions for experiments to further reduce the number of possible regulation scenarios.

  3. An EOQ model for Weibull distribution deterioration with time-dependent cubic demand and backlogging

    NASA Astrophysics Data System (ADS)

    Santhi, G.; Karthikeyan, K.

    2017-11-01

    In this article we introduce an economic order quantity model with Weibull deterioration and a time-dependent cubic demand rate, where the holding cost is a linear function of time. Shortages are allowed in the inventory system and are partially or fully backlogged. The objective of this model is to minimize the total inventory cost by finding the optimal order quantity and cycle length. The proposed model is illustrated by numerical examples, and a sensitivity analysis is performed to study the effect of changes in parameters on the optimum solutions.
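
    For orientation, the sketch below solves the classical constant-demand special case of the economic order quantity problem; the paper generalizes this with Weibull deterioration, cubic time-dependent demand and partial backlogging, extensions that require numerical optimization and are not reproduced here. All parameter values are invented.

        import math

        def eoq(D, K, h):
            """D: demand rate [units/yr]; K: ordering cost per order;
            h: holding cost per unit per year."""
            q_star = math.sqrt(2.0 * D * K / h)  # classical EOQ formula
            cycle = q_star / D                   # optimal cycle length [yr]
            cost = math.sqrt(2.0 * D * K * h)    # minimal ordering + holding cost
            return q_star, cycle, cost

        q, T, C = eoq(D=12000, K=150.0, h=2.4)
        print(f"Q* = {q:.0f} units, cycle = {T*365:.1f} days, cost = {C:.0f}/yr")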

  4. Performance and Quality Assessment of the Forthcoming Copernicus Marine Service Global Ocean Monitoring and Forecasting Real-Time System

    NASA Astrophysics Data System (ADS)

    Lellouche, J. M.; Le Galloudec, O.; Greiner, E.; Garric, G.; Regnier, C.; Drillet, Y.

    2016-02-01

    Mercator Ocean currently delivers daily real-time services (weekly analyses and daily forecasts) with a global 1/12° high-resolution system. The model component is the NEMO platform, driven at the surface by the IFS ECMWF atmospheric analyses and forecasts. Observations are assimilated by means of a reduced-order Kalman filter with a 3D multivariate modal decomposition of the forecast error. It includes an adaptive-error estimate and a localization algorithm. Along-track altimeter data, satellite Sea Surface Temperature and in situ temperature and salinity vertical profiles are jointly assimilated to estimate the initial conditions for numerical ocean forecasting. A 3D-Var scheme provides a correction for the slowly-evolving large-scale biases in temperature and salinity. Since May 2015, Mercator Ocean has opened the Copernicus Marine Service (CMS) and is in charge of the global ocean analyses and forecast at eddy-resolving resolution. In this context, R&D activities have been conducted at Mercator Ocean in recent years in order to improve the real-time 1/12° global system for the next CMS version in 2016. The ocean/sea-ice model and the assimilation scheme benefit, among others, from the following improvements: large-scale and objective correction of atmospheric quantities with satellite data, a new Mean Dynamic Topography taking into account the latest version of the GOCE geoid, new adaptive tuning of some observational errors, a new quality control on the assimilated temperature and salinity vertical profiles based on dynamic height criteria, assimilation of satellite sea-ice concentration, and new freshwater runoff from ice sheet melting. This presentation does not focus on the impact of each update, but rather on the overall behavior of the system integrating all updates. This assessment reports on the product quality improvements, highlighting the level of performance and the reliability of the new system.

  5. Measurement of the direct CP-violating parameter A_CP in the decay D⁺ → K⁻π⁺π⁺

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abazov, V. M.; Abbott, B.; Acharya, B. S.

    2014-12-01

    We measure the direct CP-violating parameter A_CP for the decay of the charged charm meson, D⁺ → K⁻π⁺π⁺ (and charge conjugate), using the full 10.4 fb⁻¹ sample of pp̄ collisions at √s = 1.96 TeV collected by the D0 detector at the Fermilab Tevatron collider. We extract the raw reconstructed charge asymmetry by fitting the invariant mass distributions for the sum and difference of charge-specific samples. This quantity is then corrected for detector-related asymmetries using data-driven methods and for possible physics asymmetries (from B → D processes) using input from Monte Carlo simulation. We measure A_CP = [-0.16 ± 0.15 (stat) ± 0.09 (syst)]%, which is consistent with zero, as expected from the standard model prediction of CP conservation, and is the most precise measurement of this quantity to date.
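
    The correction chain can be summarized schematically. The fragment below is not the D0 analysis code (which extracts the raw asymmetry from invariant-mass fits rather than simple counts); it only illustrates how a raw charge asymmetry is formed and then corrected for detector-related and physics asymmetries, with naive error propagation. All numbers are invented.

        import math

        def acp(n_plus, n_minus, a_det, a_det_err, a_phys, a_phys_err):
            a_raw = (n_plus - n_minus) / (n_plus + n_minus)
            a_raw_err = math.sqrt((1.0 - a_raw**2) / (n_plus + n_minus))  # binomial
            a_cp = a_raw - a_det - a_phys      # subtract detector and physics terms
            a_cp_err = math.sqrt(a_raw_err**2 + a_det_err**2 + a_phys_err**2)
            return a_cp, a_cp_err

        a, da = acp(n_plus=1_513_000, n_minus=1_509_000,
                    a_det=0.0021, a_det_err=0.0008,
                    a_phys=-0.0002, a_phys_err=0.0004)
        print(f"A_CP = ({100*a:.2f} +/- {100*da:.2f}) %")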

  6. Influence of the spectral power distribution of a LED on the illuminance responsivity of a photometer

    NASA Astrophysics Data System (ADS)

    Sametoglu, Ferhat

    2008-09-01

    The measurement accuracy of photometric quantities measured through a photometer head is determined by the value of the spectral mismatch correction factor (c(S_t, S_s)), which is defined as a function of the spectral power distributions of the light sources, in addition to the illuminance responsivity of the photometer head used. This factor is more important when photometric quantities of light-emitting diode (LED) type optical sources, which radiate within relatively narrow spectral bands compared with other optical sources, are being measured. Variations of the illuminance responsivities of various V(λ)-adapted photometer heads are discussed. High-power colored LEDs, manufactured by Lumileds Lighting Co., were used as light sources and their relative spectral power distributions (RSPDs) were measured using a spectrometer-based optical setup. Dependences of the c(S_t, S_s) factors of three types of photometer heads (f1'=1.4%, f1'=0.8% and f1'=0.5%) on wavelength, and influences of these factors on the illuminance responsivities of photometer heads, are presented.
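
    A numerical sketch of the spectral mismatch correction factor may help: it compares the detector-weighted and V(λ)-weighted integrals of the test-source and calibration-source spectra. The Gaussian LED spectrum, the crude V(λ) and responsivity models, and the Planck-like calibration spectrum below are purely illustrative; all spectra are assumed tabulated on a common wavelength grid.

        import numpy as np

        wl = np.arange(380.0, 781.0, 1.0)                    # wavelength grid [nm]
        V = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)        # toy V(lambda)
        s_rel = np.exp(-0.5 * ((wl - 550.0) / 48.0) ** 2)    # toy detector responsivity
        S_test = np.exp(-0.5 * ((wl - 630.0) / 10.0) ** 2)   # narrow-band red LED
        S_cal = wl**-5.0 / np.expm1(1.4388e7 / (wl * 2856.0))  # ~illuminant A shape

        def mismatch_factor(S_t, S_s, V, s, wl):
            num = np.trapz(S_t * V, wl) * np.trapz(S_s * s, wl)
            den = np.trapz(S_t * s, wl) * np.trapz(S_s * V, wl)
            return num / den

        print(f"c(S_t, S_s) = {mismatch_factor(S_test, S_cal, V, s_rel, wl):.4f}")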

  7. Reductions in muscle quality and quantity in CIDP patients assessed by magnetic resonance imaging.

    PubMed

    Gilmore, Kevin J; Doherty, Timothy J; Kimpinski, Kurt; Rice, Charles L

    2018-05-09

    Weakness in patients with chronic inflammatory demyelinating polyneuropathy (CIDP) may be caused by decreases in muscle quantity and quality, but these have not been explored. Twelve patients with CIDP (mean 61 years) and ten age-matched (mean 59 years) control subjects were assessed for ankle dorsiflexion strength, and two different MRI scans (T1 and T2) of leg musculature. Isometric strength was lower in CIDP patients by 36% compared with controls. Tibialis anterior muscle volumes of CIDP patients were smaller by ∼17% than controls, and non-contractile tissue volume was ∼58% greater in CIDP patients. When normalized to total muscle or corrected contractile volume, strength was ∼29% and ∼18% lower, respectively, in CIDP patients. DISCUSSION: These results provide insight into structural integrity of muscle contractile proteins and pathological changes to whole-muscle tissue composition that contribute to impaired muscle function in CIDP. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.

  8. Strong-Coupling Effects and Shear Viscosity in an Ultracold Fermi Gas

    NASA Astrophysics Data System (ADS)

    Kagamihara, D.; Ohashi, Y.

    2017-06-01

    We theoretically investigate the shear viscosity η, as well as the entropy density s, in the normal state of an ultracold Fermi gas. Including pairing fluctuations within the framework of a T-matrix approximation, we calculate these quantities in the Bardeen-Cooper-Schrieffer (BCS)-Bose-Einstein condensation (BEC) crossover region. We also evaluate η/s, to compare it with the lower bound of this ratio conjectured by Kovtun, Son, and Starinets (the KSS bound). On the weak-coupling BCS side, we show that the shear viscosity η is remarkably suppressed near the superfluid phase transition temperature Tc, due to the so-called pseudogap phenomenon. On the strong-coupling BEC side, we find that, when vertex corrections are neglected, η cannot be correctly described. We also show that η/s decreases with increasing interaction strength, becoming very close to the KSS bound, ℏ/(4πkB), on the BEC side.

  9. Exact Derivation of a Finite-Size Scaling Law and Corrections to Scaling in the Geometric Galton-Watson Process

    PubMed Central

    Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc

    2016-01-01

    The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities at the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for the Galton-Watson branching processes when the number of offspring of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter case, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
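
    The process in question is easy to simulate. The Monte Carlo sketch below (a numerical illustration, not the paper's exact analytical setting) runs a critical Galton-Watson process with geometric offspring distribution and caps the total population at a size N standing in for the finite system size; the growth of the mean total progeny with N makes the finite-size effect visible.

        import random

        def total_progeny(p, cap):
            """One geometric(p) Galton-Watson tree; offspring mean is (1-p)/p.
            Returns total progeny, truncated at 'cap' individuals."""
            alive, total = 1, 1
            while alive > 0 and total < cap:
                offspring = 0
                for _ in range(alive):
                    k = 0
                    while random.random() > p:   # geometric on {0, 1, 2, ...}
                        k += 1
                    offspring += k
                alive = offspring
                total += offspring
            return min(total, cap)

        random.seed(1)
        for cap in (100, 1000, 10000):           # p = 0.5 gives mean offspring 1
            runs = [total_progeny(0.5, cap) for _ in range(2000)]
            print(cap, sum(runs) / len(runs))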

  10. Morphology supporting function: attenuation correction for SPECT/CT, PET/CT, and PET/MR imaging

    PubMed Central

    Lee, Tzu C.; Alessio, Adam M.; Miyaoka, Robert M.; Kinahan, Paul E.

    2017-01-01

    Both SPECT, and in particular PET, are unique in medical imaging for their high sensitivity and direct link to a physical quantity, i.e. radiotracer concentration. This gives PET and SPECT imaging unique capabilities for accurately monitoring disease activity for the purposes of clinical management or therapy development. However, to achieve a direct quantitative connection between the underlying radiotracer concentration and the reconstructed image values several confounding physical effects have to be estimated, notably photon attenuation and scatter. With the advent of dual-modality SPECT/CT, PET/CT, and PET/MR scanners, the complementary CT or MR image data can enable these corrections, although there are unique challenges for each combination. This review covers the basic physics underlying photon attenuation and scatter and summarizes technical considerations for multimodal imaging with regard to PET and SPECT quantification and methods to address the challenges for each multimodal combination. PMID:26576737
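
    The role of the anatomical image can be shown with a toy calculation: along each line of response (LOR), the PET attenuation correction factor is the exponential of the integrated 511 keV linear attenuation coefficient, which CT supplies through a piecewise-linear mapping from Hounsfield units. The coefficients and the 1D profile below are illustrative, not any scanner's calibration.

        import numpy as np

        def mu_511_from_hu(hu):
            """Illustrative piecewise-linear CT-number to 511-keV mu [1/cm] map."""
            mu_water = 0.096
            hu = np.asarray(hu, dtype=float)
            soft = mu_water * (1.0 + hu / 1000.0)   # air .. water .. soft tissue
            bone = mu_water + hu * 6.4e-5           # HU > 0: shallower slope
            return np.where(hu <= 0.0, soft, bone)

        # toy LOR: 10 cm of soft tissue (~40 HU) followed by 2 cm of bone (~1000 HU)
        hu_profile = np.concatenate((np.full(100, 40.0), np.full(20, 1000.0)))
        step_cm = 0.1
        acf = np.exp(np.sum(mu_511_from_hu(hu_profile)) * step_cm)
        print(f"attenuation correction factor along this LOR: {acf:.1f}")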

  11. Eukaryotic expression, purification and structure/function analysis of native, recombinant CRISP3 from human and mouse

    NASA Astrophysics Data System (ADS)

    Volpert, Marianna; Mangum, Jonathan E.; Jamsai, Duangporn; D'Sylva, Rebecca; O'Bryan, Moira K.; McIntyre, Peter

    2014-02-01

    While the Cysteine-Rich Secretory Proteins (CRISPs) have been broadly proposed as regulators of reproduction and immunity, physiological roles have yet to be established for individual members of this family. Past efforts to investigate their functions have been limited by the difficulty of purifying correctly folded CRISPs from bacterial expression systems, which yield low quantities of correctly folded protein containing the eight disulfide bonds that define the CRISP family. Here we report the expression and purification of native, glycosylated CRISP3 from human and mouse, expressed in HEK 293 cells and isolated using ion exchange and size exclusion chromatography. Functional authenticity was verified by substrate-affinity, native glycosylation characteristics and quaternary structure (monomer in solution). Validated protein was used in comparative structure/function studies to characterise sites and patterns of N-glycosylation in CRISP3, revealing interesting inter-species differences.

  12. Diagonal Born-Oppenheimer correction for coupled-cluster wave-functions

    NASA Astrophysics Data System (ADS)

    Shamasundar, K. R.

    2018-06-01

    We examine how geometry-dependent normalisation freedom of electronic wave-functions affects extraction of a meaningful diagonal Born-Oppenheimer correction (DBOC) to the ground-state Born-Oppenheimer potential energy surface (PES). By viewing this freedom as a kind of gauge freedom, it is shown that the DBOC and the resulting associated mass-dependent adiabatic PES are gauge-invariant quantities. A sum-over-states (SOS) formula for the DBOC which explicitly exhibits this invariance is derived. A biorthogonal formulation suitable for DBOC computations using standard unnormalised coupled-cluster (CC) wave-functions is presented. This is shown to lead to a biorthogonal version of the SOS formula with similar properties. On this basis, different computational schemes for evaluating the DBOC using approximate CC wave-functions are derived. One of these agrees with the formula used in the current literature. The connection to adiabatic-to-diabatic transformations in non-adiabatic dynamics is explored, and complications arising from the biorthogonal nature of CC theory are identified.

  13. Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, Hs

    2017-11-01

    A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle-based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by the Stokes drag law, which is empirically corrected for Reynolds number, Mach number and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of our proposed methodology of uncertainty quantification of particle-laden flows on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.
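
    The paper propagates the parametric uncertainty by averaging the governing equations; a brute-force Monte Carlo version of the same idea is easy to sketch. Below, a particle relaxes toward a uniform gas velocity under Stokes drag with a Schiller-Naumann-type correction f = 1 + a*Re^0.687, and the coefficient a is sampled from an assumed distribution to estimate the first two moments of the particle velocity. All parameter values are illustrative.

        import random, statistics

        def particle_velocity(a, t_end=0.01, dt=1e-5, u_gas=10.0,
                              d=50e-6, rho_p=2500.0, mu=1.8e-5, rho_g=1.2):
            tau_p = rho_p * d * d / (18.0 * mu)       # Stokes response time [s]
            v = 0.0
            for _ in range(int(t_end / dt)):
                re = rho_g * abs(u_gas - v) * d / mu  # particle Reynolds number
                f = 1.0 + a * re ** 0.687             # empirical drag correction
                v += dt * f * (u_gas - v) / tau_p     # explicit Euler step
            return v

        random.seed(0)
        samples = [particle_velocity(a=random.gauss(0.15, 0.03)) for _ in range(500)]
        print(f"mean v = {statistics.mean(samples):.3f} m/s, "
              f"std v = {statistics.stdev(samples):.4f} m/s")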

  14. Orbital relaxation effects on Kohn–Sham frontier orbital energies in density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, DaDi; Zheng, Xiao, E-mail: xz58@ustc.edu.cn; Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026

    2015-04-21

    We explore effects of orbital relaxation on Kohn–Sham frontier orbital energies in density functional theory by using a nonempirical scaling correction approach developed in Zheng et al. [J. Chem. Phys. 138, 174105 (2013)]. Relaxation of Kohn–Sham orbitals upon addition/removal of a fractional number of electrons to/from a finite system is determined by a systematic perturbative treatment. The information of orbital relaxation is then used to improve the accuracy of predicted Kohn–Sham frontier orbital energies by Hartree–Fock, local density approximation, and generalized gradient approximation methods. The results clearly highlight the significance of capturing the orbital relaxation effects. Moreover, the proposed scaling correction approach provides a useful way of computing derivative gaps and Fukui quantities of N-electron finite systems (N is an integer), without the need to perform self-consistent-field calculations for (N ± 1)-electron systems.

  15. Self-regulating neutron coincidence counter

    DOEpatents

    Baron, N.

    1980-06-16

    A device for accurately measuring the mass of ²⁴⁰Pu and ²³⁹Pu in a sample having arbitrary moderation and mixed with various contaminants. The device utilizes a thermal neutron well counter which has two concentric rings of neutron detectors separated by a moderating material surrounding the well. Neutron spectroscopic information derived by the two rings of detectors is used to measure the quantity of ²³⁹Pu and ²⁴⁰Pu in a device which corrects for background radiation, dead-time losses of the detector and electronics, and various other constants of the system.

  16. When the party continues: Impulsivity and the effect of employment on young adults' post-college alcohol use.

    PubMed

    Geisner, I M; Koopmann, J; Bamberger, P; Wang, M; Larimer, M E; Nahum-Shani, I; Bacharach, S

    2018-02-01

    The transition from college to work is both an exciting and potentially high-risk time for young adults. As students transition from academic settings to full-time employment, they must navigate new social demands, work demands, and adjust their drinking behaviors accordingly. Research has shown that there are both protective factors and risk factors associated with starting a new job when it comes to alcohol use, and individual differences can moderate these factors. 1361 students were recruited from 4 geographically diverse universities and followed 1 month pre- and 1 month post-graduation. Drinking frequency, quantity, consequences, and impulsivity were assessed. Full-time employment was related to increased drinking quantity but not related to changes in other drinking outcomes. However, impulsivity moderated the relationship between employment and drinking. For those reporting higher levels of impulsivity at baseline, full-time employment was associated with an increase in drinking variables (quantity and frequency), whereas drinking was unaffected by full-time employment status among those reporting lower levels of impulsivity. Implications for future research are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Transfer entropy in physical systems and the arrow of time

    NASA Astrophysics Data System (ADS)

    Spinney, Richard E.; Lizier, Joseph T.; Prokopenko, Mikhail

    2016-08-01

    Recent developments have cemented the realization that many concepts and quantities in thermodynamics and information theory are shared. In this paper, we consider a highly relevant quantity in information theory and complex systems, the transfer entropy, and explore its thermodynamic role by considering the implications of time reversal upon it. By doing so we highlight the role of information dynamics on the nuanced question of observer perspective within thermodynamics by relating the temporal irreversibility in the information dynamics to the configurational (or spatial) resolution of the thermodynamics. We then highlight its role in perhaps the most enduring paradox in modern physics, the manifestation of a (thermodynamic) arrow of time. We find that for systems that process information such as those undergoing feedback, a robust arrow of time can be formulated by considering both the apparent physical behavior which leads to conventional entropy production and the information dynamics which leads to a quantity we call the information theoretic arrow of time. We also offer an interpretation in terms of optimal encoding of observed physical behavior.
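
    For readers unfamiliar with the quantity, the sketch below gives a minimal plug-in estimator of transfer entropy for binary time series with history length 1. It is a generic textbook estimator, not the formalism developed in the paper, and the coupled series are synthetic.

        import random
        from collections import Counter
        from math import log2

        def transfer_entropy(x, y):
            """TE(X->Y) = sum p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
            triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_{t+1}, y_t, x_t)
            pairs_yx = Counter(zip(y[:-1], x[:-1]))
            pairs_yy = Counter(zip(y[1:], y[:-1]))
            singles = Counter(y[:-1])
            n = len(y) - 1
            te = 0.0
            for (y1, y0, x0), c in triples.items():
                p_cond_full = c / pairs_yx[(y0, x0)]
                p_cond_marg = pairs_yy[(y1, y0)] / singles[y0]
                te += (c / n) * log2(p_cond_full / p_cond_marg)
            return te

        random.seed(2)
        x = [random.randint(0, 1) for _ in range(20000)]
        y = [0] + [xi if random.random() < 0.9 else 1 - xi for xi in x[:-1]]
        print(f"TE(X->Y) = {transfer_entropy(x, y):.3f} bits")  # ~0.53 for this coupling
        print(f"TE(Y->X) = {transfer_entropy(y, x):.3f} bits")  # ~0, finite-sample bias only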

  18. Dead time corrections for in-beam γ-spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.

    2017-08-01

    Relatively high counting rates were registered in a proton inelastic scattering experiment on ¹⁶O and ²⁸Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The inter-arrival time between incoming events (Δt) obeys an exponential distribution with a single parameter: the mean of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely avoids the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and by using standard (phenomenological) dead time models, and we show how these results were used for correcting our experimental cross sections.
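
    The Poisson/exponential connection underlying the procedure also yields the familiar closed-form dead-time models, sketched below for reference. These are the generic non-paralyzable and paralyzable corrections, not the authors' statistical reformulation; the measured rate and dead time are invented.

        import math

        def true_rate_nonparalyzable(m, tau):
            # m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)
            return m / (1.0 - m * tau)

        def true_rate_paralyzable(m, tau, iters=50):
            # m = n * exp(-n*tau); invert by fixed-point iteration n = m*exp(n*tau)
            n = m
            for _ in range(iters):
                n = m * math.exp(n * tau)
            return n

        m, tau = 30_000.0, 5e-6   # measured 30 kcps, 5 microsecond dead time
        print(f"non-paralyzable: n = {true_rate_nonparalyzable(m, tau):,.0f} cps")
        print(f"paralyzable:     n = {true_rate_paralyzable(m, tau):,.0f} cps")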

  19. Nebulization Reflux Concentrator

    NASA Technical Reports Server (NTRS)

    Cofer, Wesley R., III; Collins, V. G.

    1986-01-01

    The nebulization reflux concentrator extracts and concentrates trace quantities of water-soluble gases for subsequent chemical analysis. A hydrophobic membrane and nebulizing nozzles form a scrubber for removing trace quantities of soluble gases or other contaminants from the atmosphere. Although the hydrophobic membrane virtually blocks all transport of droplets, it offers little resistance to gas flow; hence, the device permits relatively large volumes of gas to be scrubbed efficiently with very small volumes of liquid. This means analyzable quantities of contaminants concentrate in the extracting solutions in much shorter times than with conventional techniques.

  20. Radius correction formula for capacitances and effective length vectors of monopole and dipole antenna systems

    NASA Astrophysics Data System (ADS)

    Macher, W.; Oswald, T. H.

    2011-02-01

    In the investigation of antenna systems which consist of one or several monopoles, a realistic modeling of the monopole radii is not always feasible. In particular, physical scale models for electrolytic tank measurements of effective length vectors (rheometry) of spaceborne monopoles are so small that a correct scaling of monopole radii often results in very thin, flexible antenna wires which bend too much under their own weight. So one has to use monopoles in the model which are thicker than the correct scale diameters. The opposite case, where the monopole radius has to be modeled too thin, appears with certain numerical antenna programs based on wire-grid modeling. This problem arises if the underlying algorithm assumes that the wire segments are much longer than their diameters. In such a case it may not be possible to use wires of correct thickness to model the monopoles. In order that these numerical and experimental techniques can nonetheless be applied to determine the capacitances and effective length vectors of such monopoles (with an inaccurate modeling of monopole diameters), an analytical correction method is devised. It enables one to calculate the quantities for the real antenna system from those obtained for the model antenna system with incorrect monopole radii. Since a typical application of the presented formalism is the analysis of spaceborne antenna systems, an illustration for the monopoles of the WAVES experiment on board the STEREO-A spacecraft is given.

  1. Early generation selection results from a two year, six location study

    USDA-ARS?s Scientific Manuscript database

    In potato breeding programs, early generation selections are rarely evaluated in multiple environments because of limited seed quantities. By the time seed quantities are available, few clones remain from the original population. The purpose of this study was to allow multiple locations to select ...

  2. Policy-relevant behaviours predict heavier drinking and mediate the relationship with age, gender and education status: Analysis from the International Alcohol Control study.

    PubMed

    Casswell, Sally; Huckle, Taisia; Wall, Martin; Parker, Karl; Chaiyasong, Surasak; Parry, Charles D H; Viet Cuong, Pham; Gray-Phillip, Gaile; Piazza, Marina

    2018-02-21

    To investigate behaviours related to four alcohol policy variables (policy-relevant behaviours) and demographic variables in relation to typical quantities of alcohol consumed on-premise in six International Alcohol Control study countries. General population surveys of drinkers were conducted using a comparable survey instrument, and data were analysed using path analysis in an overall model and for each country. Measures were: typical quantities per occasion consumed on-premise; gender; age; years of education; prices paid; time of purchase; time to access alcohol; and liking for alcohol advertisements. In the overall model, younger people, males and those with fewer years of education consumed larger typical quantities. Overall, lower prices paid, later time of purchase and liking for alcohol ads predicted consuming larger typical quantities; this was found in the high-income countries, less consistently in the high-middle-income countries and not in the low-middle-income country. Three policy-relevant behaviours (prices paid, time of purchase, liking for alcohol ads) mediated the relationships between age, gender, education and consumption in high-income countries. International Alcohol Control survey data showed a relationship between policy-relevant behaviours and typical quantities consumed, and support the likely effect of policy change (trading hours, price and restrictions on marketing) on heavier drinking. The path analysis also revealed that policy-relevant behaviours were significant mediating variables between age, gender and educational status and consumption. However, this relationship is clearest in high-income countries. Further research is required to better understand how circumstances in low-middle-income countries impact the effects of policies. © 2018 The Authors Drug and Alcohol Review published by John Wiley & Sons Australia, Ltd on behalf of Australasian Professional Society on Alcohol and other Drugs.

  3. Glimmers of a Quantum KAM Theorem: Insights from Quantum Quenches in One-Dimensional Bose Gases

    DOE PAGES

    Brandino, G. P.; Caux, J. -S.; Konik, R. M.

    2015-12-16

    Real-time dynamics in a quantum many-body system are inherently complicated and hence difficult to predict. There are, however, a special set of systems where these dynamics are theoretically tractable: integrable models. Such models possess non-trivial conserved quantities beyond energy and momentum. These quantities are believed to control dynamics and thermalization in low dimensional atomic gases as well as in quantum spin chains. But what happens when the special symmetries leading to the existence of the extra conserved quantities are broken? Is there any memory of the quantities if the breaking is weak? Here, in the presence of weak integrability breaking, we show that it is possible to construct residual quasi-conserved quantities, thus providing a quantum analog to the KAM theorem and its attendant Nekhoroshev estimates. We demonstrate this construction explicitly in the context of quantum quenches in one-dimensional Bose gases and argue that these quasi-conserved quantities can be probed experimentally.

  4. Autofocus algorithm for curvilinear SAR imaging

    NASA Astrophysics Data System (ADS)

    Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.

    2012-05-01

    We describe an approach to autofocusing for large apertures on curved SAR trajectories. It is a phase-gradient type method in which phase corrections compensating trajectory perturbations are estimated not directly from the image itself, but rather on the basis of "partial" SAR data -- functions of the slow and fast times -- reconstructed (by an appropriate forward-projection procedure) from windowed scene patches, of sizes comparable to distances between distinct targets or localized features of the scene. The resulting "partial data" can be shown to contain the same information on the phase perturbations as that in the original data, provided the frequencies of the perturbations do not exceed a quantity proportional to the patch size. The algorithm uses as input a sequence of conventional scene images based on moderate-size subapertures constituting the full aperture for which the phase corrections are to be determined. The subaperture images are formed with pixel sizes comparable to the range resolution which, for the optimal subaperture size, should also be approximately equal to the cross-range resolution. The method does not restrict the size or shape of the synthetic aperture and can be incorporated in the data collection process in persistent sensing scenarios. The algorithm has been tested on the publicly available set of GOTCHA data, intentionally corrupted by random-walk-type trajectory fluctuations (a possible model of errors caused by imprecise inertial navigation system readings) of maximum frequencies compatible with the selected patch size. It was able to efficiently remove image corruption for apertures of sizes up to 360 degrees.
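
    For context, the phase-gradient estimation step that phase-gradient-type methods share can be written in a few lines. The sketch below is the generic phase gradient autofocus (PGA) kernel operating on a complex image whose dominant scatterers have been centered in azimuth; it is not the patch-based forward-projection scheme of the paper, and the synthetic test data are invented.

        import numpy as np

        def pga_phase_estimate(g):
            """g: complex image [n_range, n_pulses], strong scatterers already
            circular-shifted to the azimuth center of each range bin."""
            dg = np.diff(g, axis=1)                        # azimuth finite difference
            num = np.sum((np.conj(g[:, :-1]) * dg).imag, axis=0)
            den = np.sum(np.abs(g[:, :-1]) ** 2, axis=0)
            dphi = num / np.maximum(den, 1e-12)            # phase-error gradient
            phi = np.concatenate(([0.0], np.cumsum(dphi))) # integrate the gradient
            return phi - phi.mean()

        # synthetic test: 64 range bins sharing one quadratic azimuth phase error
        rng = np.random.default_rng(0)
        n_r, n_p = 64, 128
        phase_err = 2.0 * np.linspace(-1.0, 1.0, n_p) ** 2  # radians
        g = rng.standard_normal((n_r, 1)) + 1j * rng.standard_normal((n_r, 1))
        g = g * np.exp(1j * phase_err)[None, :]
        est = pga_phase_estimate(g)
        print(f"rms residual: {np.std(est - (phase_err - phase_err.mean())):.2e} rad")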

  5. Method of composing two-dimensional scanned spectra observed by the New Vacuum Solar Telescope

    NASA Astrophysics Data System (ADS)

    Cai, Yun-Fang; Xu, Zhi; Chen, Yu-Chao; Xu, Jun; Li, Zheng-Gang; Fu, Yu; Ji, Kai-Fan

    2018-04-01

    In this paper we illustrate the technique used by the New Vacuum Solar Telescope (NVST) to increase the spatial resolution of two-dimensional (2D) solar spectroscopy observations involving two dimensions of space and one of wavelength. Without an image stabilizer at the NVST, large scale wobble motion is present during the spatial scanning, whose instantaneous amplitude can reach 1.3″ due to the Earth’s atmosphere and the precision of the telescope guiding system, and seriously decreases the spatial resolution of 2D spatial maps composed with scanned spectra. We make the following effort to resolve this problem: the imaging system (e.g., the TiO-band) is used to record and detect the displacement vectors of solar image motion during the raster scan, in both the slit and scanning directions. The spectral data (e.g., the Hα line) which are originally obtained in time sequence are corrected and re-arranged in space according to those displacement vectors. Raster scans are carried out in several active regions with different seeing conditions (two rasters are illustrated in this paper). Given a certain spatial sampling and temporal resolution, the spatial resolution of the composed 2D map could be close to that of the slit-jaw image. The resulting quality after correction is quantitatively evaluated with two methods. A physical quantity, such as the line-of-sight velocities in multiple layers of the solar atmosphere, is also inferred from the re-arranged spectrum, demonstrating the advantage of this technique.

  6. A new time-independent formulation of fractional release

    NASA Astrophysics Data System (ADS)

    Ostermöller, Jennifer; Bönisch, Harald; Jöckel, Patrick; Engel, Andreas

    2017-03-01

    The fractional release factor (FRF) gives information on the amount of a halocarbon that is released at some point into the stratosphere from its source form to the inorganic form, which can harm the ozone layer through catalytic reactions. The quantity is of major importance because it directly affects the calculation of the ozone depletion potential (ODP). In this context time-independent values are needed which, in particular, should be independent of the trends in the tropospheric mixing ratios (tropospheric trends) of the respective halogenated trace gases. For a given atmospheric situation, such FRF values would represent a molecular property. We analysed the temporal evolution of FRF from ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulations for several halocarbons and nitrous oxide between 1965 and 2011 on different mean age levels and found that the widely used formulation of FRF yields highly time-dependent values. We show that this is caused by the way that the tropospheric trend is handled in the widely used calculation method of FRF. Taking into account chemical loss in the calculation of stratospheric mixing ratios reduces the time dependence in FRFs. Therefore we implemented a loss term in the formulation of the FRF and applied the parameterization of a mean arrival time to our data set. We find that the time dependence in the FRF can almost be compensated for by applying a new trend correction in the calculation of the FRF. We suggest that this new method should be used to calculate time-independent FRFs, which can then be used e.g. for the calculation of ODP.
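
    The classical, trend-sensitive formulation that the paper improves upon is compact enough to state in code: for an observation with mean age Γ, the FRF compares the tropospheric mixing ratio the air carried on entry into the stratosphere with the mixing ratio observed now. The linear tropospheric trend below is invented; evaluating the same stratospheric observation at two dates reproduces exactly the kind of time dependence the paper sets out to remove.

        def tropospheric_mixing_ratio(t):
            """Invented linear tropospheric trend [ppt]; t in years."""
            return 250.0 + 3.0 * (t - 2000.0)

        def fractional_release(chi_obs, t_obs, mean_age):
            chi_entry = tropospheric_mixing_ratio(t_obs - mean_age)
            return (chi_entry - chi_obs) / chi_entry

        # same stratospheric mixing ratio and mean age, two observation dates
        for year in (2000.0, 2010.0):
            frf = fractional_release(chi_obs=180.0, t_obs=year, mean_age=4.0)
            print(f"{year:.0f}: FRF = {frf:.3f}")   # differs purely through the trend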

  7. Meteoroid Orbits from Observations

    NASA Astrophysics Data System (ADS)

    Campbell-Brown, Margaret

    2018-04-01

    Millions of orbits of meteoroids have been measured over the last few decades, and they comprise the largest sample of orbits of solar system bodies which exists. The orbits of these objects can shed light on the distribution and evolution of comets and asteroids in near-Earth space (e.g. Neslusan et al. 2016). If orbits can be measured at sufficiently high resolution, individual meteoroids can be traced back to their parent bodies and, in principle, even to their ejection time (Rudawska et al. 2012). Orbits can be measured with multi-station optical observations or with radar observations. The most fundamental measured quantities are the speed of the meteor and the two angles of the radiant, or point in the sky from which the meteor appears to come. There are many methods used to determine these from observations, but not all produce the most accurate results (Egal et al. 2017). These three measured quantities, along with the time and location of the observation, are sufficient to obtain an orbit (see, e.g., Clark & Wiegert 2011), but the measurements must be corrected for the deceleration of the meteoroid in the atmosphere before it was detected, the rotation of the Earth, and the gravitational attraction of the Earth (including higher-order moments if great precision is necessary). Once meteor orbits have been determined, studies of the age and origin of meteor showers (Bruzzone et al., 2015), the parent bodies of sporadic sources (Pokorny et al. 2014), and the dynamics of the meteoroid complex as a whole can be constrained.
    References: Bruzzone, J. S., Brown, P., Weryk, R., Campbell-Brown, M., 2015. MNRAS 446, 1625. Clark, D., Wiegert, P., 2011. M&PS 46, 1217. Egal, A., Gural, P., Vaubaillon, J., Colas, F., Thuillot, W., 2017. Icarus 294, 43. Neslusan, L., Vaubaillon, J., Hajdukova, M., 2016. A&A 589, id.A100. Pokorny, P., Vokrouhlicky, D., Nesvorny, D., Campbell-Brown, M., Brown, P., 2014. ApJ 789, id.25. Rudawska, R., Vaubaillon, J., Atreya, P., 2012. A&A 541, id.A2.

  8. Time-Domain Impedance Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Auriault, Laurent

    1996-01-01

    It is an accepted practice in aeroacoustics to characterize the properties of an acoustically treated surface by a quantity known as impedance. Impedance is a complex quantity. As such, it is designed primarily for frequency-domain analysis. Time-domain boundary conditions that are the equivalent of the frequency-domain impedance boundary condition are proposed. Both single frequency and model broadband time-domain impedance boundary conditions are provided. It is shown that the proposed boundary conditions, together with the linearized Euler equations, form well-posed initial boundary value problems. Unlike ill-posed problems, they are free from spurious instabilities that would render time-marching computational solutions impossible.

  9. Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Permala, R.; Jayani, A. P. S.

    2018-05-01

    The LAPAN-A3/IPB satellite was launched in June 2016, and its multispectral imager has been producing images covering Indonesia. In order to improve its support for remote sensing applications, the imager should produce images of high quality and in high quantity. To improve the quantity of LAPAN-A3/IPB multispectral images captured, image acquisition can be executed in real-time mode from the LAPAN ground station in Bogor when the satellite passes over the western Indonesia region. This research analyses the performance of LAPAN-A3/IPB multispectral imager acquisition in real-time mode, in terms of image quality and quantity, under the assumption of several on-board and ground segment limitations. Results show that with real-time operation mode, the LAPAN-A3/IPB multispectral imager can produce twice as much image coverage as in recorded mode. However, the images produced in real-time mode have slightly degraded quality due to the image compression involved. Based on several analyses conducted in this research, it is recommended to use real-time acquisition mode whenever possible, except in circumstances that strictly do not allow any quality degradation of the images produced.

  10. Thermodynamics of de Sitter Black Holes in Massive Gravity

    NASA Astrophysics Data System (ADS)

    Ma, Yu-Bo; Zhang, Si-Xuan; Wu, Yan; Ma, Li; Cao, Shuo

    2018-05-01

    In this paper, by taking de Sitter space-time as a thermodynamic system, we study the effective thermodynamic quantities of de Sitter black holes in massive gravity, and furthermore obtain the effective thermodynamic quantities of the space-time. Our results show that the entropy of this type of space-time takes the same form as that in Reissner-Nordström-de Sitter space-time, which lays a solid foundation for deeply understanding the universal thermodynamic characteristics of de Sitter space-time in the future. Moreover, our analysis indicates that the effective thermodynamic quantities and relevant parameters play a very important role in the investigation of the stability and evolution of de Sitter space-time. Supported by the Young Scientists Fund of the National Natural Science Foundation of China under Grant Nos. 11605107 and 11503001, the National Natural Science Foundation of China under Grant No. 11475108, Program for the Innovative Talents of Higher Learning Institutions of Shanxi, the Natural Science Foundation of Shanxi Province under Grant No. 201601D102004, the Natural Science Foundation for Young Scientists of Shanxi Province under Grant No. 201601D021022, and the Natural Science Foundation of Datong City under Grant No. 20150110

  11. Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.

    PubMed

    Okubo, T; Shibata, H; Takishima, T

    1983-07-01

    By means of a mathematical model, we have studied a way to correct for the delay time of the gas analyzer in order to calculate the anatomical dead space using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single-breath expired volume-concentration relationship was examined with three types of expired flow patterns, which were constant, exponential and sinusoidal. The results indicate that time correction by the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimation of anatomical dead space. A time correction less inclusive than this, e.g. lag time only or lag time plus 50% response time, gives an overestimation, and a correction larger than this results in underestimation. The magnitude of the error is dependent on the flow pattern and flow rate. The time correction in this study is only for the calculation of dead space, as the corrected volume-concentration curve does not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at a rather fast flow rate.
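
    The recommended correction amounts to advancing the analyzer output by (lag time + time constant) before pairing it with expired volume. The toy sketch below simulates a first-order analyzer response to an invented single-breath CO2 curve and applies that shift; instrument constants are invented, and, as the study notes, the corrected trace still differs somewhat from the true curve even where the dead-space estimate is accurate.

        import numpy as np

        dt = 0.01                  # sampling interval [s]
        t_lag, tau = 0.20, 0.10    # analyzer lag time and time constant [s]

        t = np.arange(0.0, 3.0, dt)
        flow = np.full_like(t, 0.5)                  # constant expired flow [L/s]
        volume = np.cumsum(flow) * dt                # expired volume [L]
        true_conc = np.clip((volume - 0.15) / 0.3, 0.0, 1.0) * 5.0  # toy CO2 [%]

        # simulate the analyzer: pure delay followed by a first-order response
        shift = int(t_lag / dt)
        delayed = np.concatenate((np.zeros(shift), true_conc[:-shift]))
        measured = np.zeros_like(delayed)
        for i in range(1, len(measured)):
            measured[i] = measured[i-1] + dt / tau * (delayed[i-1] - measured[i-1])

        # correction: advance the signal by (t_lag + tau) before pairing with volume
        adv = int(round((t_lag + tau) / dt))
        corrected = np.concatenate((measured[adv:], np.full(adv, measured[-1])))
        print(f"residual area: {np.trapz(np.abs(corrected - true_conc), volume):.4f} %*L")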

  12. NEW APPROACHES TO ESTIMATION OF SOLID-WASTE QUANTITY AND COMPOSITION

    EPA Science Inventory

    Efficient and statistically sound sampling protocols for estimating the quantity and composition of solid waste over a stated period of time in a given location, such as a landfill site or at a specific point in an industrial or commercial process, are essential to the design ...

  13. 77 FR 46943 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-07

    ... compliance time; therefore, an operator may choose to add the reinforcing fiberglass overcoat before the... discrepant quantities of nut plates and types of fasteners called out in Boeing Alert Service Bulletin 747... different airplane configuration that might use a different quantity of nut plates than what is specified in...

  14. Photoprotection by sunscreen depends on time spent on application.

    PubMed

    Heerfordt, Ida M; Torsnes, Linnea R; Philipsen, Peter A; Wulf, Hans Christian

    2018-03-01

    To be effective, sunscreens must be applied in a sufficient quantity, and reapplication is recommended. No previous study has investigated whether time spent on sunscreen application is important for the achieved photoprotection. To determine whether time spent on sunscreen application is related to the amount of sunscreen used during a first and second application. Thirty-one volunteers wearing swimwear applied sunscreen twice in a laboratory environment. Time spent and the amount of sunscreen used during each application were measured. Subjects' body surface area accessible for sunscreen application (BSA) was estimated from their height, weight and swimwear worn. The average applied quantity of sunscreen after each application was calculated. Subjects spent on average 4 minutes and 15 seconds on the first application and approximately 85% of that time on the second application. There was a linear relationship between time spent on application and amount of sunscreen used during both the first and the second application (P < .0001). Participants applied 2.21 grams of sunscreen per minute during both applications. After the first application, subjects had applied a mean quantity of sunscreen of 0.71 mg/cm² on the BSA, and after the second application, a mean total quantity of 1.27 mg/cm² had been applied. We found that participants applied a constant amount of sunscreen per minute during both a first and a second application. Measurement of time spent on application of sunscreen on different body sites may be useful in investigating the distribution of sunscreen in real-life settings. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Enhanced tunability of the composition in silicon oxynitride thin films by the reactive gas pulsing process

    NASA Astrophysics Data System (ADS)

    Aubry, Eric; Weber, Sylvain; Billard, Alain; Martin, Nicolas

    2014-01-01

    Silicon oxynitride thin films were sputter-deposited by the reactive gas pulsing process. A pure silicon target was sputtered in an Ar, N2 and O2 mixture atmosphere. Oxygen gas was periodically and solely introduced using exponential signals. In order to vary the injected O2 quantity in the deposition chamber during one pulse at constant injection time (TON), the mounting time τmou of the exponential signals was systematically changed for each deposition. Taking into account the real-time measurements of the discharge voltage and the I(O*)/I(Ar*) emission lines ratio, it is shown that the oscillations of the discharge voltage during the TON and TOFF times (injection of O2 stopped) are attributed to the preferential adsorption of oxygen compared to that of nitrogen. The sputtering mode alternates from a fully nitrided mode (TOFF time) to a mixed mode (nitrided and oxidized) during the TON time. For the highest injected O2 quantities, the mixed mode tends toward a fully oxidized mode due to an increase of the trapped oxygen on the target. The oxygen (nitrogen) concentration in the SiOxNy films similarly (inversely) varies as the oxygen is trapped. Moreover, measurements of the contamination speed of the Si target surface are connected to different behaviors of the process. At low injected O2 quantities, the nitrided mode predominates over the oxidized one during the TON time. It leads to the formation of Si3N4-yOy-like films. Inversely, the mixed mode takes place for high injected O2 quantities, and the oxidized mode prevails against the nitrided one, producing SiO2-xNx-like films.

  16. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement dispersion (statistical) of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range-based merit function ωm, which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb; dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ±3 in the fifth significant figure could be routinely and reliably detected for 65Cu/63Cu and 61Ni/62Ni. One of the internal standardization drift correction factors uses a least-squares estimator to obtain a linear functional relationship between the measured analyte and internal-standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves, not by two linearly correlated quantities, which is the usual interpretation of such graphs. The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale-dependent, parametric-curve effect.
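
    A minimal sketch of the least-squares internal-standardization idea mentioned above (with invented ratio data; the paper's actual correction factors and merit function ωm are more elaborate): fit a linear relation between the analyte and internal-standard ratios, then remove the shared drift component.

```python
import numpy as np

# Invented time-ordered measurements: analyte ratio (e.g. 65Cu/63Cu) and
# internal-standard ratio (e.g. 62Ni/61Ni) drifting together over a run.
r_analyte = np.array([0.44620, 0.44632, 0.44645, 0.44655, 0.44668])
r_internal = np.array([0.26510, 0.26518, 0.26527, 0.26533, 0.26541])

# Least-squares linear relation r_analyte ~ a + b * r_internal.
b, a = np.polyfit(r_internal, r_analyte, 1)

# Correct each analyte ratio to a common reference value of the internal
# standard ratio, removing the shared drift component.
r_ref = r_internal.mean()
r_corrected = r_analyte - b * (r_internal - r_ref)

print("corrected ratios:", np.array2string(r_corrected, precision=6))
print(f"dispersion before: {r_analyte.std(ddof=1):.2e}  "
      f"after: {r_corrected.std(ddof=1):.2e}")
```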

  17. Difference in quantity discrimination in dogs and wolves

    PubMed Central

    Range, Friederike; Jenikejew, Julia; Schröder, Isabelle; Virányi, Zsófia

    2014-01-01

    Certain aspects of social life, such as engaging in intergroup conflicts, as well as challenges posed by the physical environment, may facilitate the evolution of quantity discrimination. In the absence of extensive comparative data, one can only hypothesize about its evolutionary origins, but human-raised wolves performed well when they had to choose the larger of two sets of 1–4 food items that had been sequentially placed into two opaque cans. Since in such paradigms the animals never see the entire content of either can, their decisions are thought to rely on mental representation of the two quantities rather than on perceptual factors such as the overall volume or surface area of the two amounts. By equalizing the time taken to place each quantity into the cans, or the number of items placed, one can further rule out the possibility that animals simply choose based on the amount of time needed to present the two quantities. While the wolves performed well even in such a control condition, dogs failed to choose the larger of two invisible quantities in another study using a similar paradigm. Because this disparity could be explained by procedural differences, in the current study we set out to test dogs that were raised and kept identically to the previously tested wolves, using the same set-up and procedure. Our results confirm the earlier finding that dogs, in comparison to wolves, have inferior skills in mentally representing quantities. This seems to be in line with Frank's (1980) hypothesis suggesting that domestication altered the information processing of dogs. However, as discussed, alternative explanations may also exist. PMID:25477834

  18. Relationship between interphasic nucleolar organizer regions and growth rate in two neuroblastoma cell lines.

    PubMed Central

    Derenzini, M.; Pession, A.; Farabegoli, F.; Trerè, D.; Badiali, M.; Dehan, P.

    1989-01-01

    The relationship between the quantity of silver-stained interphasic nucleolar organizer regions (NORs) and nuclear synthetic activity, karyotype, and growth rate was studied in two established neuroblastoma cell lines (CHP 212 and HTB 10). Statistical analysis of silver-stained NORs revealed four times as many in CHP 212 cells as in HTB 10 cells. No difference was observed in ribosomal RNA synthesis between the two cell lines. The karyotype index was 1.2 for CHP 212 and 1.0 for HTB 10 cells. The number of chromosomes carrying NORs and the quantity of ribosomal genes were found to be the same for the two cell lines. The doubling time of CHP 212 cells was 20 hours, compared with 54 hours for HTB 10 cells. In CHP 212 cells, hindering of cell duplication by serum deprivation induced a progressive lowering (calculated at 48, 72, and 96 hours) of the quantity of silver-stained interphasic NORs. Recovery of duplication by new serum addition induced, after 24 hours, an increase of the quantity of silver-stained interphasic NORs up to control levels. In the light of the available data, these results indicate that the quantity of interphasic NORs is strictly correlated only with the growth rate of the cell. PMID:2705511

  19. SU-E-T-164: Clinical Implementation of ASi EPID Panels for QA of IMRT/VMAT Plans.

    PubMed

    Hosier, K; Wu, C; Beck, K; Radevic, M; Asche, D; Bareng, J; Kroner, A; Lehmann, J; Logsdon, M; Dutton, S; Rosenthal, S

    2012-06-01

    To investigate various issues in the clinical implementation of aSi EPID panels for IMRT/VMAT QA. Six linacs are used in our clinic for EPID-based plan QA: two Varian TrueBeams, two Varian 2100-series, and two Elekta Infiniti-series machines. Multiple corrections must be accounted for in the calibration of each panel for dosimetric use. Varian aSi panels are calibrated with the standard dark field, flood field, and 40×40 diagonal profile for beam-profile correction. Additional corrections to account for off-axis response and support-arm backscatter are needed for larger field sizes. Since the Elekta iViewGT system does not export gantry angle with images, a third-party inclinometer must be physically mounted to the back of the linac gantry and synchronized with data acquisition via the iViewGT PC clock. A T/2 offset correctly correlates image and gantry angle for arc plans, because iView time-stamps each image at the end of its data acquisition. For both Varian and Elekta panels, a 5 MU 10×10 calibration field is used to account for the nonlinear MU-to-dose response at higher energies. Acquired EPID images are deconvolved via a high-pass filter in Fourier space, and the resulting fluence maps are used to reconstruct a 3D dose 'delivered' to the patient using DosimetryCheck. Results are compared to the patient 3D dose computed by the TPS using a 3D gamma analysis. 120 IMRT and 100 VMAT cases are reported. Two 3D gamma quantities (Gamma(V10) and Gamma(PTV)) are proposed for evaluating QA results. Gamma(PTV) is sensitive to MLC offsets, while Gamma(V10) is sensitive to gantry rotations. With a 3 mm/3% criterion and a 90% or higher 3D gamma pass rate, all IMRT and 90% of VMAT plans pass QA. After appropriate calibration of the aSi panels and setup of the image-acquisition systems, the EPID-based 3D dose reconstruction method is found to be clinically feasible. © 2012 American Association of Physicists in Medicine.
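
    For readers unfamiliar with the gamma metric underlying Gamma(V10) and Gamma(PTV), here is a minimal 1D global-gamma sketch (illustrative only; the clinical analysis above is a full 3D computation inside DosimetryCheck):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
    """Global 1D gamma: for each reference point, take the minimum over
    evaluated points of the combined dose-difference / distance-to-agreement
    metric; the point passes if that minimum is <= 1."""
    x = np.arange(len(dose_ref)) * spacing_mm
    d_norm = dd * dose_ref.max()          # global dose-difference criterion
    gammas = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((dose_eval - dose_ref[i]) / d_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return np.mean(gammas <= 1.0)

# Toy profiles: the evaluated dose is slightly shifted and scaled.
x = np.linspace(-50.0, 50.0, 201)         # 0.5 mm spacing
ref = np.exp(-x**2 / 400.0)
ev = 1.01 * np.exp(-(x - 0.8)**2 / 400.0)
print(f"gamma (3%/3 mm) pass rate: {100.0 * gamma_pass_rate(ref, ev, 0.5):.1f}%")
```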

  20. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
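
    A minimal sketch of the estimation step (assuming a synthetic record of per-round error observations; the paper's protocol derives these from error-correction data without interrupting the code): a Gaussian process smooths past observations and extrapolates the rate forward.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic record of per-round error observations: a slowly drifting true
# rate plus shot noise (a stand-in for statistics extracted from syndromes).
t = np.linspace(0.0, 10.0, 60)[:, None]
true_rate = 0.010 + 0.004 * np.sin(0.6 * t.ravel())
observed = true_rate + rng.normal(0.0, 5e-4, t.shape[0])

# Smooth kernel for the drift plus a white-noise term for the shot noise.
kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-7)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

# Estimate the current rate and predict one time unit ahead.
for tq in (10.0, 11.0):
    mean, std = gp.predict([[tq]], return_std=True)
    print(f"t = {tq:4.1f}: estimated error rate = {mean[0]:.4f} +/- {std[0]:.4f}")
```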

  1. BOOK REVIEW: Multipole Theory in Electromagnetism: Classical, Quantum and Symmetry Aspects, with Applications

    NASA Astrophysics Data System (ADS)

    Sihvola, Ari

    2005-03-01

    `Good reasons must, of force, give place to better', observes Brutus to Cassius, according to William Shakespeare in Julius Caesar. Roger Raab and Owen de Lange seem to agree, as they cite this sentence in the concluding chapter of their new book on the importance of exact multipole analysis in macroscopic electromagnetics. Very true and essential to remember in our daily research work. The two scientists from the University of Natal in Pietermaritzburg, South Africa (presently University of KwaZulu-Natal) have been working for a very long time on the accurate description of the electric and magnetic response of matter and have published much of their findings in various physics journals. The present book gives us a clear and coherent exposition of many of these results. The important message of Raab and de Lange is that in the macroscopic description of matter, a correct balance between the various orders of electric and magnetic multipole terms has to be respected. If the inclusion of magnetic dipole terms is not complemented with electric quadrupoles, there is a risk of losing the translational invariance of certain important quantities. This means that the values of these quantities depend on the choice of the origin! `It can't be Nature, for it is not sense' is another of the apt literary citations in the book. Often monographs written by researchers look like they have been produced using a cut-and-paste technique; earlier published articles are included in the same book but, unfortunately, too little additional effort is expended on moulding the totality into a unified story. This is not the case with Raab and de Lange. The structure and the text flow of the book serve its important message perfectly. After the obligatory introduction of material response to electromagnetic fields, constitutive relations, basic quantum theory and spacetime properties, a chapter follows with transmission and scattering effects where everything seems to work well with the `old' multipole theory. But then the focus is shifted to observables associated with the reflection of waves from a surface. And there the classical analysis fails. This gives the motivation for the following chapters where the transformed multipole theory is presented. As expected, the correct multipole balance restores the physicality of the results in the reflection problem. One of the healthy reminders for an electrical engineer-scientist reading the book is the fact that E and B are the primary electric and magnetic fields. The other two field quantities, D and H, are the response fields (which, by the way, are also shown to be origin-dependent and poorly defined in the framework of classical multipole theory). In defence of these poor latter quantities, however, one can mention the many advantages of the engineering-type constitutive relations where D and B are expressed as responses to E and H. An example is the beautiful symmetry and complete analogy between the electric and magnetic quantities (voltage becomes current and vice versa in the duality transformation), which helps us write down solutions to electromagnetic problems from other known cases. From a pragmatic point of view we would also favour the use of quantities like the Poynting vector and energy density (which require the H field). Another discussion-provoking question to the authors of the book might be whether their new multipole balance could be broken in the analysis of artificial materials. New nanotechnological discoveries and devices make it look like engineers can do anything. Perhaps in the design of complex media and metamaterials, a hot topic in today's materials science, such macroscopic responses can be tailored where a certain high-order multipole contribution dominates over other, more basic ones. Multipole Theory in Electromagnetism is suitable for a broad spectrum of readers: solid-state physicists, molecular chemists, theoretical and experimental optics scientists, radiophysics experts, electromagnetists and other electrical engineers, students and working scientists alike. This is a wonderful book. It certainly should appeal to them all.

  2. Zone clearance in an infinite TASEP with a step initial condition

    NASA Astrophysics Data System (ADS)

    Cividini, Julien; Appert-Rolland, Cécile

    2017-06-01

    The TASEP is a paradigmatic model of out-of-equilibrium statistical physics, for which many quantities have been computed, either exactly or by approximate methods. In this work we study two new kinds of observables that have some relevance in biological or traffic models. They represent the probability for a given clearance zone of the lattice to be empty (for the first time) at a given time, starting from a step density profile. Exact expressions are obtained for single-time quantities, while more involved history-dependent observables are studied by Monte Carlo simulation, and partially predicted by a phenomenological approach.
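
    Such history-dependent observables are natural targets for Monte Carlo. A minimal sketch (a small-time-step approximation of the continuous-time dynamics; the lattice size, times, and zone are illustrative) estimating the probability that a clearance zone ahead of the step is empty at a given time:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_step_tasep(n_sites=200, t_max=15.0, dt=0.02):
    """TASEP with step initial condition (left half occupied), approximating
    the continuous-time dynamics by small discrete time steps."""
    occ = np.zeros(n_sites, dtype=bool)
    occ[: n_sites // 2] = True
    for _ in range(int(t_max / dt)):
        movable = np.flatnonzero(occ[:-1] & ~occ[1:])
        # Update in random order so two particles never enter the same site.
        for i in rng.permutation(movable):
            if occ[i] and not occ[i + 1] and rng.random() < dt:
                occ[i], occ[i + 1] = False, True
    return occ

# Probability that a clearance zone ahead of the initial step is empty at
# time t_max, estimated over independent histories.
j0, j1, runs = 110, 115, 200
empty = sum(not simulate_step_tasep()[j0 : j1 + 1].any() for _ in range(runs))
print(f"P(zone [{j0},{j1}] empty at t = 15) ~ {empty / runs:.2f}")
```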

  3. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system and the propagation characteristics of the AIS signal that transmits at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of the actual environmental factors, the ASF correction system corrects original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  4. Environmental factor(tm) system: RCRA hazardous waste handler information (on CD-ROM). Data file

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-11-01

    Environmental Factor(trademark) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It`s easy to search and display: (1) Permit status, design capacity, and compliance history for facilities found in the EPA Research Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous wastes generation, management, and minimization by companies who are large quantity generators; and (3) Data on the waste management practices of treatment, storage, and disposal (TSD) facilities from the EPA Biennial Reporting System which is collectedmore » every other year. Environmental Factor`s powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action, or violation information, TSD status, generator and transporter status, and more. (2) View compliance information - dates of evaluation, violation, enforcement, and corrective action. (3) Lookup facilities by waste processing categories of marketing, transporting, processing, and energy recovery. (4) Use owner/operator information and names, titles, and telephone numbers of project managers for prospecting. (5) Browse detailed data on TSD facility and large quantity generators` activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and User`s Guide. Environmental Factor has online context-sensitive help from any screen and a printed User`s Guide describing installation and step-by-step procedures for searching, retrieving, and exporting.« less

  5. Validating the applicability of the GUM procedure

    NASA Astrophysics Data System (ADS)

    Cox, Maurice G.; Harris, Peter M.

    2014-08-01

    This paper is directed at practitioners seeking a degree of assurance in the quality of the results of an uncertainty evaluation when using the procedure in the Guide to the Expression of Uncertainty in Measurement (GUM) (JCGM 100:2008). Such assurance is required in adhering to general standards such as International Standard ISO/IEC 17025 or other sector-specific standards. We investigate the extent to which such assurance can be given. For many practical cases, a measurement result incorporating an evaluated uncertainty that is correct to one significant decimal digit would be acceptable. Any quantification of the numerical precision of an uncertainty statement is naturally relative to the adequacy of the measurement model and the knowledge used of the quantities in that model. For general univariate and multivariate measurement models, we emphasize the use of a Monte Carlo method, as recommended in GUM Supplements 1 and 2. One use of this method is as a benchmark against which measurement results provided by the GUM procedure can be assessed in any particular instance. We mainly consider measurement models that are linear in the input quantities, or that have been linearized where the linearization process is deemed to be adequate. When the probability distributions for those quantities are independent, we indicate the use of other approaches such as convolution methods based on the fast Fourier transform and, particularly, Chebyshev polynomials as benchmarks.
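
    A minimal sketch of using the Supplement 1 Monte Carlo method as a benchmark for the GUM law of propagation of uncertainty, for an assumed multiplicative model Y = X1·X2/X3 with independent Gaussian inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed measurement model for illustration: Y = X1 * X2 / X3 with
# independent Gaussian inputs (best estimate, standard uncertainty).
x1, u1 = 10.0, 0.2
x2, u2 = 5.0, 0.1
x3, u3 = 2.0, 0.05

# GUM law of propagation of uncertainty (first-order, relative form).
y = x1 * x2 / x3
u_gum = y * np.sqrt((u1 / x1) ** 2 + (u2 / x2) ** 2 + (u3 / x3) ** 2)

# GUM Supplement 1: Monte Carlo propagation of the input distributions.
n = 1_000_000
samples = rng.normal(x1, u1, n) * rng.normal(x2, u2, n) / rng.normal(x3, u3, n)

print(f"GUM: y = {y:.3f}, u(y) = {u_gum:.4f}")
print(f"MC:  y = {samples.mean():.3f}, u(y) = {samples.std():.4f}")
# Agreement to one significant digit supports the GUM result; a discrepancy
# flags an inadequate linearization or distribution assumption.
```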

  6. Study of LED modulation effect on the photometric quantities and beam homogeneity of automotive lighting

    NASA Astrophysics Data System (ADS)

    Koudelka, Petr; Hanulak, Patrik; Jaros, Jakub; Papes, Martin; Latal, Jan; Siska, Petr; Vasinek, Vladimir

    2015-07-01

    This paper discusses the implementation of a light-emitting-diode-based visible light communication system for optical vehicle-to-vehicle (V2V) communications in road safety applications. The widespread use of LEDs as light sources has reached into the automotive field; for example, LEDs are used for taillights, daytime running lights, brake lights, headlights, and traffic signals. Future optical V2V communications will be based on an optical wireless communication technology that uses an LED transmitter and a camera receiver (OCI; optical communication image sensor). Utilization of optical V2V communication systems in the automotive industry naturally brings a number of problems. Among them is the necessity of implementing circuits that allow LED modulation within current electronic LED light control concepts; these circuits are quite complicated, especially in luxury cars. Another problem is the correct design of the modulation circuits, so that the final vehicle lighting using optical V2V communication meets the standard requirements on Photometric Quantities and Beam Homogeneity. The authors investigated the optical V2V communication possibilities of a headlight (Jaguar) and a taillight (Skoda) in terms of implementing modulation circuits (M-PSK, M-QAM) into the lamp concepts while still fulfilling the mandatory standards on Photometric Quantities and Beam Homogeneity.

  7. Monte Carlo calculation of proton stopping power and ranges in water for therapeutic energies

    NASA Astrophysics Data System (ADS)

    Bozkurt, Ahmet

    2017-09-01

    Monte Carlo is a statistical technique for obtaining numerical solutions to physical or mathematical problems that are analytically impractical, if not impossible, to solve. For charged-particle transport problems it presents many advantages over deterministic methods, since such problems require a realistic description of the problem geometry as well as detailed tracking of every source particle. Thus, MC can be considered a powerful alternative to the well-known Bethe-Bloch equation, in which an expression with various corrections is used to obtain stopping powers and ranges of electrons, positrons, protons, alphas, etc. This study presents how a stochastic method such as MC can be utilized to obtain certain quantities of practical importance related to charged-particle transport. Sample simulation geometries were formed for a water medium in which disk-shaped thin detectors were employed to compute average values of absorbed dose and flux at specific distances. For each detector cell, these quantities were used to evaluate the range and the stopping power, as well as the shape of the Bragg curve, for mono-energetic point-source pencil beams of protons. The results agreed to within ±2% with data from the NIST compilation. It is safe to conclude that this approach can be extended to determine dosimetric quantities for other media, energies, and charged-particle types.
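
    For orientation, the deterministic quantity that MC is benchmarked against can be sketched directly. Below is a simple Bethe stopping-power evaluation for protons in water (no shell or density-effect corrections; the constants and the 75 eV mean excitation energy are assumed standard values), with a crude CSDA range by quadrature:

```python
import numpy as np

# Assumed standard constants; Bethe equation for protons in liquid water
# without shell or density-effect corrections (adequate above ~1 MeV).
K = 0.307075         # MeV cm^2 / mol
ME_C2 = 0.510999     # electron rest energy, MeV
MP_C2 = 938.272      # proton rest energy, MeV
Z_OVER_A = 0.5551    # <Z/A> of water
I = 75.0e-6          # mean excitation energy of water, MeV

def stopping_power(T_mev):
    """Electronic stopping power -dE/dx in MeV/cm (rho = 1 g/cm^3)."""
    gamma = 1.0 + T_mev / MP_C2
    beta2 = 1.0 - 1.0 / gamma**2
    log_arg = 2.0 * ME_C2 * beta2 * gamma**2 / I
    return K * Z_OVER_A / beta2 * (np.log(log_arg) - beta2)

def csda_range(T0, n=4000):
    """CSDA range in cm: integrate dE / S(E) from ~0 up to T0."""
    E = np.linspace(0.1, T0, n)
    return np.trapz(1.0 / stopping_power(E), E)

for T in (70.0, 150.0, 250.0):
    print(f"{T:5.0f} MeV: S ~ {stopping_power(T):6.3f} MeV/cm, "
          f"range ~ {csda_range(T):5.1f} cm")
```

    At 150 MeV this evaluates to about 5.4 MeV/cm, in line with tabulated values for water, which is consistent with the ±2% agreement quoted above.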

  8. The effect of urban green on small-area (healthy) life expectancy.

    PubMed

    Jonker, M F; van Lenthe, F J; Donkers, B; Mackenbach, J P; Burdorf, A

    2014-10-01

    Several epidemiological studies have investigated the effect of the quantity of green space on health outcomes such as self-rated health, morbidity and mortality ratios. These studies have consistently found positive associations between the quantity of green and health. However, the impact of other aspects, such as the perceived quality and average distance to public green, and the effect of urban green on population health are still largely unknown. Linear regression models were used to investigate the impact of three different measures of urban green on small-area life expectancy (LE) and healthy life expectancy (HLE) in The Netherlands. All regressions corrected for average neighbourhood household income, accommodated spatial autocorrelation, and took into account the measurement uncertainty of LE and HLE as well as of the quality of urban green. Both the quantity and the perceived quality of urban green are modestly related to small-area LE and HLE: an increase of 1 SD in the percentage of urban green space is associated with a 0.1-year higher LE and, in the case of quality of green, with an approximately 0.3-year higher LE and HLE. The average distance to the nearest public green is unrelated to population health. The quantity and particularly the quality of urban green are positively associated with small-area LE and HLE. This concurs with a growing body of evidence that urban green reduces stress, stimulates physical activity, improves the microclimate and reduces ambient air pollution. Accordingly, urban green development deserves a more prominent place in urban regeneration and neighbourhood renewal programmes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. The effects of substrate size, surface area, and density on coat thickness of multi-particulate dosage forms.

    PubMed

    Heinicke, Grant; Matthews, Frank; Schwartz, Joseph B

    2005-01-01

    Drug-layering experiments were performed in a fluid bed fitted with a rotor granulator insert, using diltiazem as a model drug. The drug was applied in various quantities to sugar spheres of different mesh sizes to give a series of drug-layered sugar spheres (cores) of different potency, size, and weight per particle. The drug presence lowered the bulk density of the cores in proportion to the quantity of added drug. Polymer coating of each core lot was performed in a fluid bed fitted with a Wurster insert. A series of polymer-coated cores (pellets) was removed from each coating experiment. The mean diameter of each core and each pellet sample was determined by image analysis. The rate of change of diameter on polymer addition was determined for each starting core size and compared to calculated values. The core diameter was displaced from the line of best fit through the pellet diameter data. Cores of different potency with the same size distribution were made by layering increasing quantities of drug onto sugar spheres of decreasing mesh size. Equal quantities of polymer were applied to the same-sized core lots and coat thickness was measured. Weight/weight calculations predict equal coat thickness under these conditions, but measurable differences were found. Simple corrections to the core charge weight in the Wurster insert were successfully used to manufacture pellets having the same coat thickness. The sensitivity of the image analysis technique in measuring particle size distributions (PSDs) was demonstrated by measuring a displacement in PSD after the addition of 0.5% w/w talc to a pellet sample.

  10. A New Availability-Payment Model for Pricing Performance-Based Logistics Contracts

    DTIC Science & Technology

    2014-05-01

    A Petri net is used to capture the concurrency and synchronization properties of the system. (Only fragments of this record survive extraction; the rest is residue from a Petri-net diagram whose recoverable labels include failure indication, repair shop, replace order, repairs, replaces, downtime, manufacturer, quantity, inventory, stock, and cost elements.)

  11. The perception of phonological quantity based on durational cues by native speakers, second-language users and nonspeakers of Finnish.

    PubMed

    Ylinen, Sari; Shestakova, Anna; Alku, Paavo; Huotilainen, Minna

    2005-01-01

    Some languages, such as Finnish, use speech-sound duration as the primary cue for a phonological quantity distinction. For second-language (L2) learners, quantity is often difficult to master if speech-sound duration plays a less important role in the phonology of their native language (L1). By comparing the categorization performance of native speakers of Finnish, Russian L2 users of Finnish, and non-Finnish-speaking Russians, the present study aimed to determine whether the L2 users, whose native language does not have a quantity distinction, have been able to establish categories for Finnish quantity. The results suggest that the native speakers and some of the L2 users that have been exposed to Finnish for a longer time have access to phonological quantity categories, whereas the L2 users with shorter exposure and the non-Finnish-speaking subjects do not. In addition, by comparing categorization and discrimination tasks it was found that the native speakers show a phoneme-boundary effect for quantity that is cued by duration only, whereas the non-Finnish-speaking subjects and the subjects with low proficiency in Finnish do not.

  12. A fundamental study of suction for Laminar Flow Control (LFC)

    NASA Astrophysics Data System (ADS)

    Watmuff, Jonathan H.

    1992-10-01

    This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized, and when implemented it will also lead to a significant reduction in the experimental run-time.

  13. A fundamental study of suction for Laminar Flow Control (LFC)

    NASA Technical Reports Server (NTRS)

    Watmuff, Jonathan H.

    1992-01-01

    This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized, and when implemented it will also lead to a significant reduction in the experimental run-time.

  14. A Perishable Inventory Model with Return

    NASA Astrophysics Data System (ADS)

    Setiawan, S. W.; Lesmono, D.; Limansyah, T.

    2018-04-01

    In this paper, we develop a mathematical model for a perishable inventory with return, assuming deterministic, inventory-dependent demand; that is, demand at a given time depends, at a certain rate, on the inventory available at that time. In dealing with perishable items, we must consider a deterioration-rate factor corresponding to the decreasing quality of the goods. The model also involves purchasing, ordering, holding, shortage (backordering) and returning costs, which together compose the total cost that we want to minimize. In the model we seek the optimal return time and order quantity. We assume that after some period of time, called the return time, perishable items can be returned to the supplier at some returning cost; the supplier then replaces them in the next delivery. Numerical experiments are given to illustrate our model, and a sensitivity analysis is performed as well. We found that as the deterioration rate increases, the return time becomes shorter and the optimal order quantity and total cost increase. When considering the inventory-dependent demand factor, we found that as this factor increases, for a given deterioration rate, the return time becomes shorter, the optimal order quantity becomes larger and the total cost increases.
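
    A simplified numerical sketch in the same spirit (this is a generic EOQ-with-exponential-deterioration cost, not the paper's full model with returns and inventory-dependent demand; all parameter values are invented):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented parameters: demand D, deterioration rate theta, ordering cost A,
# unit purchase cost c, holding cost h (per unit per unit time).
D, theta, A, c, h = 100.0, 0.05, 50.0, 2.0, 0.5

def order_quantity(T):
    # From dI/dt = -D - theta * I with I(T) = 0: the initial stock Q = I(0).
    return (D / theta) * (np.exp(theta * T) - 1.0)

def cost_rate(T):
    Q = order_quantity(T)
    held = (D / theta) * ((np.exp(theta * T) - 1.0) / theta - T)  # integral of I(t)
    return (A + c * Q + h * held) / T

res = minimize_scalar(cost_rate, bounds=(0.01, 10.0), method="bounded")
print(f"T* = {res.x:.3f}, Q* = {order_quantity(res.x):.1f}, "
      f"minimum cost rate = {res.fun:.2f}")
# Increasing theta shortens T* and raises the cost rate, mirroring the
# qualitative sensitivity results reported above.
```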

  15. Random SU(2) invariant tensors

    NASA Astrophysics Data System (ADS)

    Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei

    2018-04-01

    SU(2) invariant tensors are states in the (local) SU(2) tensor product representation that are invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states; an average over the ensemble is carried out when computing any physical quantity. The random tensor exhibits a phenomenon known as 'concentration of measure', which states that for any bipartition the average entanglement entropy of the reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon also holds when the average is taken over the SU(2) invariant subspace instead of the entire space, for rank-n tensors in general. It is shown in our earlier work, Li et al (2017 New J. Phys. 19 063029), that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n = 4. In this paper, we show that for n > 4 the subleading correction is not divergent but a finite number. In some special situations, the number can even be smaller than 1/2, which is the subleading correction of a random state over the entire Hilbert space of tensors.

  16. SHARPs - A Near-Real-Time Space Weather Data Product from HMI

    NASA Astrophysics Data System (ADS)

    Bobra, M.; Turmon, M.; Baldner, C.; Sun, X.; Hoeksema, J. T.

    2012-12-01

    A data product from the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO), called Space-weather HMI Active Region Patches (SHARPs), is now available through the SDO Joint Science Operations Center (JSOC) and the Virtual Solar Observatory. SHARPs are magnetically active regions identified on the solar disk and tracked automatically in time. SHARP data are processed within a few hours of the observation time. The SHARP data series contains active region-sized disambiguated vector magnetic field data in both Lambert Cylindrical Equal-Area and CCD coordinates on a 12 minute cadence. The series also provides simultaneous HMI maps of the line-of-sight magnetic field, continuum intensity, and velocity on the same ~0.5 arc-second pixel grid. In addition, the SHARP data series provides space weather quantities computed on the inverted, disambiguated, and remapped data. The values for each tracked region are computed and updated in near real time. We present space weather results for several X-class flares; furthermore, we compare said space weather quantities with helioseismic quantities calculated using ring-diagram analysis.
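
    For readers who want to pull these quantities programmatically, a minimal sketch using the third-party drms Python client for the JSOC is shown below (the package, the series name hmi.sharp_720s, and the keyword set are assumptions based on common usage and are not part of this record):

```python
import drms

client = drms.Client()

# HARP 377 tracks NOAA AR 11158; the keywords are SHARP space-weather
# quantities (total unsigned flux, mean inclination, total unsigned
# current helicity) at the 12-minute cadence.
keys = client.query(
    "hmi.sharp_720s[377][2011.02.14_00:00:00_TAI-2011.02.15_00:00:00_TAI]",
    key="T_REC, USFLUX, MEANGAM, TOTUSJH",
)
print(keys.head())
```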

  17. Apparent resistivity for transient electromagnetic induction logging and its correction in radial layer identification

    NASA Astrophysics Data System (ADS)

    Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei

    2018-04-01

    We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
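
    The role of Halley's optimisation can be illustrated generically: given a smooth, monotone forward response V(ρ), the apparent resistivity is the root of V(ρ) − V_obs = 0. A minimal sketch with an invented toy response (the paper instead inverts the whole-space transient electric field expression of the uniform model):

```python
import numpy as np

def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: cubically convergent root finding."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        dx = 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)
        x -= dx
        if abs(dx) < tol * max(1.0, abs(x)):
            return x
    raise RuntimeError("Halley iteration did not converge")

# Toy forward model (invented): a smooth monotone response V(rho).
# Find the apparent resistivity rho with V(rho) = V_obs.
a, V_obs = 3.0e-4, 1.2e-6
f = lambda r: a * r**-1.5 - V_obs
df = lambda r: -1.5 * a * r**-2.5
d2f = lambda r: 3.75 * a * r**-3.5

rho_a = halley(f, df, d2f, x0=10.0)
print(f"apparent resistivity ~ {rho_a:.3f} ohm-m "
      f"(analytic: {(a / V_obs) ** (2.0 / 3.0):.3f})")
```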

  18. 10 CFR 40.22 - Small quantities of source material.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Small quantities of source material. 40.22 Section 40.22 Energy NUCLEAR REGULATORY COMMISSION DOMESTIC LICENSING OF SOURCE MATERIAL General Licenses § 40.22 Small... (15.4 lb) of uranium, removed during the treatment of drinking water, at any one time. A person may...

  19. 21 CFR 516.36 - Insufficient quantities of MUMS-designated drugs.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS NEW ANIMAL DRUGS FOR MINOR USE AND MINOR SPECIES Designation of a Minor Use or Minor Species New Animal Drug § 516.36 Insufficient quantities of... the 7-year period of exclusive marketing rights. (b) If, within the time that FDA specifies, the...

  20. 21 CFR 516.36 - Insufficient quantities of MUMS-designated drugs.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS NEW ANIMAL DRUGS FOR MINOR USE AND MINOR SPECIES Designation of a Minor Use or Minor Species New Animal Drug § 516.36 Insufficient quantities of... the 7-year period of exclusive marketing rights. (b) If, within the time that FDA specifies, the...

  1. 21 CFR 516.36 - Insufficient quantities of MUMS-designated drugs.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS NEW ANIMAL DRUGS FOR MINOR USE AND MINOR SPECIES Designation of a Minor Use or Minor Species New Animal Drug § 516.36 Insufficient quantities of... the 7-year period of exclusive marketing rights. (b) If, within the time that FDA specifies, the...

  2. 21 CFR 516.36 - Insufficient quantities of MUMS-designated drugs.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS NEW ANIMAL DRUGS FOR MINOR USE AND MINOR SPECIES Designation of a Minor Use or Minor Species New Animal Drug § 516.36 Insufficient quantities of... the 7-year period of exclusive marketing rights. (b) If, within the time that FDA specifies, the...

  3. 21 CFR 516.36 - Insufficient quantities of MUMS-designated drugs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS NEW ANIMAL DRUGS FOR MINOR USE AND MINOR SPECIES Designation of a Minor Use or Minor Species New Animal Drug § 516.36 Insufficient quantities of... the 7-year period of exclusive marketing rights. (b) If, within the time that FDA specifies, the...

  4. Childhood Adversities and Adult Cardiometabolic Health: Does the Quantity, Timing, and Type of Adversity Matter?

    PubMed Central

    Friedman, Esther M.; Montez, Jennifer Karas; Sheehan, Connor McDevitt; Guenewald, Tara L.; Seeman, Teresa E.

    2015-01-01

    Objective: Adverse events in childhood can indelibly influence adult health. While evidence for this association has mounted, a fundamental set of questions about how to operationalize adverse events has been understudied. Method: We used data from the National Survey of Midlife Development in the United States to examine how the quantity, timing, and types of adverse events in childhood are associated with adult cardiometabolic health. Results: The best-fitting specification of quantity of events was a linear measure reflecting a dose–response relationship. Timing of an event mattered less than repeated exposure to events. Regarding the type of event, academic interruptions and sexual/physical abuse were most important. Adverse childhood events elevated the risk of diabetes and obesity similarly for men and women but had a greater impact on women's risk of heart disease. Discussion: Findings demonstrate the insights that can be gleaned about the early-life origins of adult health by examining the operationalization of childhood exposures. PMID:25903978

  5. Breaking CMB degeneracy in dark energy through LSS

    NASA Astrophysics Data System (ADS)

    Lee, Seokcheon

    2016-03-01

    The cosmic microwave background (CMB) and large-scale structure (LSS) are complementary probes in the investigation of the early- and late-time Universe. Given the high accuracies now achieved by CMB measurements, accompanying precision cosmology from LSS data is emphasized. We investigate dynamical dark energy (DE) models that produce the same CMB angular power spectra as the ΛCDM model to better than sub-percent accuracy. If one adopts dynamical DE models using the so-called Chevallier-Polarski-Linder (CPL) parametrization, ω ≡ ω0 + ωa(1 - a), then one obtains the models (ω0, ωa) = (-0.8, -0.767), (-0.9, -0.375), (-1.1, 0.355), and (-1.2, 0.688), named M8, M9, M11, and M12, respectively. The differences in the growth rate f, which is related to the redshift-space distortions (RSD), between these DE models and the ΛCDM model are only about 0.2% at z = 0. The difference in f between M8 (M9, M11, M12) and the ΛCDM model reaches its maximum at z ≈ 0.25, at -2.4 (-1.2, 1.2, 2.5)%. This is a scale-independent quantity. One can compute the one-loop correction to the matter power spectrum of each model using standard perturbation theory in order to probe a scale-dependent quantity in the quasi-linear regime (i.e. k ≤ 0.4 h/Mpc). The differences in the matter power spectra including the one-loop correction between M8 (M9, M11, M12) and the ΛCDM model at the k = 0.4 h/Mpc scale are 1.8 (0.9, 1.2, 3.0)% at z = 0, 3.0 (1.6, 1.9, 4.2)% at z = 0.5, and 3.2 (1.7, 2.0, 4.5)% at z = 1.0. The larger the departure of ω0 from -1, the larger the difference in the power spectrum. Thus, one should use both the RSD and the quasi-linear observable in order to discriminate a viable DE model among the slew of models that are degenerate in the CMB. We also obtain the lower limit ω0 > -1.5 from the CMB acoustic peaks, which provides a useful limitation on phantom models.
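
    The growth rate f quoted above follows from a single ODE once the CPL expansion history is fixed. A minimal sketch (assuming a flat universe with Ωm0 = 0.3; the paper's exact parameter values may differ) that integrates f(a) for ΛCDM and two of the named models:

```python
import numpy as np
from scipy.integrate import solve_ivp

OM0 = 0.3  # assumed flat cosmology, Omega_m0 = 0.3

def E2(a, w0, wa):
    de = (1.0 - OM0) * a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return OM0 * a ** -3.0 + de

def dlnE_dlna(a, w0, wa):
    de = (1.0 - OM0) * a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    dE2 = -3.0 * OM0 * a ** -3.0 + de * (-3.0 * (1.0 + w0 + wa) + 3.0 * wa * a)
    return 0.5 * dE2 / E2(a, w0, wa)

def growth_rate(w0, wa, a_eval):
    """Integrate df/dlna = -f^2 - (2 + dlnH/dlna) f + 1.5 Omega_m(a)."""
    def rhs(lna, f):
        a = np.exp(lna)
        om_a = OM0 * a ** -3.0 / E2(a, w0, wa)
        return -f ** 2 - (2.0 + dlnE_dlna(a, w0, wa)) * f + 1.5 * om_a
    sol = solve_ivp(rhs, [np.log(1e-3), 0.0], [1.0],  # f -> 1 in matter era
                    t_eval=np.log(a_eval), rtol=1e-8)
    return sol.y[0]

a = np.array([1.0 / 1.25, 1.0])  # z = 0.25 and z = 0
for name, w0, wa in [("LCDM", -1.0, 0.0), ("M8", -0.8, -0.767), ("M12", -1.2, 0.688)]:
    f025, f0 = growth_rate(w0, wa, a)
    print(f"{name:5s}: f(z=0.25) = {f025:.4f}, f(z=0) = {f0:.4f}")
```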

  6. An optimal policy for deteriorating items with time-proportional deterioration rate and constant and time-dependent linear demand rate

    NASA Astrophysics Data System (ADS)

    Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu

    2017-12-01

    In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) the demand rate is deterministic and two-staged, i.e., constant in the first part of the cycle and a linear function of time in the second part; (ii) the deterioration rate is time-proportional; (iii) shortages are not allowed to occur. The optimal cycle time and the optimal order quantity are derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and a sensitivity analysis of the various parameters as illustrations of the theoretical results.

  7. Analysis of the neutron time-of-flight spectra from inertial confinement fusion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatarik, R., E-mail: hatarik1@llnl.gov; Sayre, D. B.; Caggiano, J. A.

    2015-11-14

    Neutron time-of-flight diagnostics have long been used to characterize the neutron spectrum produced by inertial confinement fusion experiments. The primary diagnostic goals are to extract the d + t → n + α (DT) and d + d → n + 3He (DD) neutron yields and peak widths, and the amount of DT scattering relative to the unscattered yield, also known as the down-scatter ratio (DSR). These quantities are used to infer yield-weighted plasma conditions, such as ion temperature (T_ion) and cold-fuel areal density. We report on novel methodologies used to determine neutron yield, apparent T_ion, and DSR. These methods invoke a single-temperature, static fluid model to describe the neutron peaks from DD and DT reactions, and a spline description of the DT spectrum to determine the DSR. Both measurements are performed using a forward-modeling technique that includes corrections for line-of-sight attenuation and the impulse response of the detection system. These methods produce typical uncertainties of 250 eV for DT T_ion, 7% for DSR, and 9% for the DT neutron yield. For the DD values, the uncertainties are 290 eV for T_ion and 10% for the neutron yield.

  8. Assessment of physical activity of the human body considering the thermodynamic system.

    PubMed

    Hochstein, Stefan; Rauschenberger, Philipp; Weigand, Bernhard; Siebert, Tobias; Schmitt, Syn; Schlicht, Wolfgang; Převorovská, Světlana; Maršík, František

    2016-01-01

    Correctly dosed physical activity is the basis of a vital and healthy life, but the measurement of physical activity remains rather empirical, resulting in limited individual and customized activity recommendations. Very accurate three-dimensional models of the cardiovascular system exist, but they require the numerical solution of the Navier-Stokes equations for the flow in blood vessels; such models are suitable for research on cardiac diseases but are computationally very expensive, and direct measurements are expensive and often not applicable outside laboratories. This paper offers a new approach to assessing physical activity using a thermodynamic-system description and its leading quantity, entropy production, as a compromise between computation time and precise prediction of pressure, volume, and flow variables in blood vessels. Based on a simplified (one-dimensional) model of the cardiovascular system of the human body, we develop and evaluate a setup that calculates the entropy production of the heart to determine the intensity of human physical activity more precisely than previous parameters, e.g. the frequently used energy considerations. The knowledge resulting from precise real-time physical activity assessment provides the basis for an intelligent human-technology interaction that steadily adjusts the degree of physical activity according to the actual individual performance level, thereby improving training and activity recommendations.

  9. Recombinant plasmid-based quantitative Real-Time PCR analysis of Salmonella enterica serotypes and its application to milk samples.

    PubMed

    Gokduman, Kurtulus; Avsaroglu, M Dilek; Cakiris, Aris; Ustek, Duran; Gurakan, G Candan

    2016-03-01

    The aim of the current study was to develop a new, rapid, sensitive and quantitative Salmonella detection method using a Real-Time PCR technique based on an inexpensive, easy-to-produce, convenient and standardized recombinant plasmid positive control. To achieve this, two recombinant plasmids were constructed as reference molecules by cloning the two most commonly used Salmonella-specific target gene regions, invA and ttrRSBC. The more rapid detection enabled by the developed method (21 h) compared with the traditional culture method (90 h) allows the quantitative evaluation of Salmonella (quantification limits of 10^1 CFU/ml and 10^0 CFU/ml for the invA target and the ttrRSBC target, respectively), as illustrated using milk samples. Three advantages illustrated by the current study demonstrate the potential of the newly developed method for routine analyses in the medical, veterinary, food and water/environmental sectors: (i) the method provides fast analyses, including the simultaneous detection and determination of correct pathogen counts; (ii) the method is applicable to challenging samples, such as milk; (iii) the method's positive controls (recombinant plasmids) are reproducible in large quantities without the need to construct new calibration curves. Copyright © 2016 Elsevier B.V. All rights reserved.
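
    A minimal sketch of the plasmid-standard-curve arithmetic that such quantification rests on (the Cq values below are invented; the study's calibration data are not reproduced in the abstract):

```python
import numpy as np

# Invented Cq values for a 10-fold dilution series of the recombinant
# plasmid standard (log10 copies per reaction).
log10_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0])
cq = np.array([12.1, 15.5, 18.9, 22.3, 25.8, 29.2, 32.7])

slope, intercept = np.polyfit(log10_copies, cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0  # 100% = perfect doubling per cycle

def quantify(cq_sample):
    """Read an unknown sample's copy number off the standard curve."""
    return 10.0 ** ((cq_sample - intercept) / slope)

print(f"slope = {slope:.3f}, PCR efficiency = {100.0 * efficiency:.1f}%")
print(f"sample at Cq = 24.0 -> {quantify(24.0):.2e} copies/reaction")
```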

  10. Self-recovery fragile watermarking algorithm based on SPIHT

    NASA Astrophysics Data System (ADS)

    Xin, Li Ping

    2015-12-01

    A fragile watermarking algorithm based on SPIHT coding is proposed that can recover the primary image itself. The novelty of the algorithm is that it can localize tampering and self-restore, with very good recovery quality. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, greatly reducing the quantity of watermark data to embed. The watermark data are then encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the last two bit-planes of the gray-level image, the watermarked image retains a good visual appearance. The experimental results show that the proposed algorithm not only detects various kinds of processing, such as noise addition, cropping, and filtering, but also recovers the tampered image and realizes blind detection. Peak signal-to-noise ratios of the watermarked image were higher than those of similar algorithms.

  11. Field Methods and Quality-Assurance Plan for Quality-of-Water Activities, U.S. Geological Survey, Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Knobel, LeRoy L.; Tucker, Betty J.; Rousseau, Joseph P.

    2008-01-01

    Water-quality activities conducted by the staff of the U.S. Geological Survey (USGS) Idaho National Laboratory (INL) Project Office coincide with the USGS mission of appraising the quantity and quality of the Nation's water resources. The activities are conducted in cooperation with the U.S. Department of Energy's (DOE) Idaho Operations Office. Results of the water-quality investigations are presented in various USGS publications or in refereed scientific journals. The results of the studies are highly regarded, and they are used with confidence by researchers, regulatory and managerial agencies, and interested civic groups. In its broadest sense, quality assurance refers to doing the job right the first time. It includes the functions of planning for products, review and acceptance of the products, and an audit designed to evaluate the system that produces the products. Quality control and quality assurance differ in that quality control ensures that things are done correctly given the 'state-of-the-art' technology, and quality assurance ensures that quality control is maintained within specified limits.

  12. Synthetic generation of influenza vaccine viruses for rapid response to pandemics.

    PubMed

    Dormitzer, Philip R; Suphaphiphat, Pirada; Gibson, Daniel G; Wentworth, David E; Stockwell, Timothy B; Algire, Mikkel A; Alperovich, Nina; Barro, Mario; Brown, David M; Craig, Stewart; Dattilo, Brian M; Denisova, Evgeniya A; De Souza, Ivna; Eickmann, Markus; Dugan, Vivien G; Ferrari, Annette; Gomila, Raul C; Han, Liqun; Judge, Casey; Mane, Sarthak; Matrosovich, Mikhail; Merryman, Chuck; Palladino, Giuseppe; Palmer, Gene A; Spencer, Terika; Strecker, Thomas; Trusheim, Heidi; Uhlendorff, Jennifer; Wen, Yingxia; Yee, Anthony C; Zaveri, Jayshree; Zhou, Bin; Becker, Stephan; Donabedian, Armen; Mason, Peter W; Glass, John I; Rappuoli, Rino; Venter, J Craig

    2013-05-15

    During the 2009 H1N1 influenza pandemic, vaccines for the virus became available in large quantities only after human infections peaked. To accelerate vaccine availability for future pandemics, we developed a synthetic approach that very rapidly generated vaccine viruses from sequence data. Beginning with hemagglutinin (HA) and neuraminidase (NA) gene sequences, we combined an enzymatic, cell-free gene assembly technique with enzymatic error correction to allow rapid, accurate gene synthesis. We then used these synthetic HA and NA genes to transfect Madin-Darby canine kidney (MDCK) cells that were qualified for vaccine manufacture with viral RNA expression constructs encoding HA and NA and plasmid DNAs encoding viral backbone genes. Viruses for use in vaccines were rescued from these MDCK cells. We performed this rescue with improved vaccine virus backbones, increasing the yield of the essential vaccine antigen, HA. Generation of synthetic vaccine seeds, together with more efficient vaccine release assays, would accelerate responses to influenza pandemics through a system of instantaneous electronic data exchange followed by real-time, geographically dispersed vaccine production.

  13. Geometric integrator for simulations in the canonical ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tapias, Diego, E-mail: diego.tapias@nucleares.unam.mx; Sanders, David P., E-mail: dpsanders@ciencias.unam.mx; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139

    2016-08-28

    We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.
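
    As a flavor of the kind of scheme involved, here is a minimal second-order, time-reversible splitting integrator for a Nosé-Hoover thermostatted harmonic oscillator (a generic textbook-style sketch, not the authors' density-dynamics integrator), monitoring the extended invariant whose drift distinguishes geometric from non-geometric methods:

```python
import numpy as np

T, Q = 1.0, 1.0  # target temperature (k_B = 1) and thermostat "mass"

def step(q, p, xi, lns, dt):
    """One second-order, time-reversible step: half thermostat --
    velocity-Verlet core -- half thermostat (symmetric composition)."""
    xi_in = xi
    xi += 0.25 * dt * (p * p - T) / Q
    p *= np.exp(-0.5 * dt * xi)
    xi += 0.25 * dt * (p * p - T) / Q
    p -= 0.5 * dt * q            # harmonic force F = -q
    q += dt * p
    p -= 0.5 * dt * q
    xi += 0.25 * dt * (p * p - T) / Q
    p *= np.exp(-0.5 * dt * xi)
    xi += 0.25 * dt * (p * p - T) / Q
    lns += 0.5 * dt * (xi_in + xi)  # trapezoidal quadrature of dln(s)/dt = xi
    return q, p, xi, lns

def invariant(q, p, xi, lns):
    # Conserved extended energy of the Nose-Hoover flow.
    return 0.5 * (p * p + q * q) + 0.5 * Q * xi * xi + T * lns

q, p, xi, lns, dt = 1.0, 0.0, 0.0, 0.0, 0.005
e0 = invariant(q, p, xi, lns)
for _ in range(100_000):
    q, p, xi, lns = step(q, p, xi, lns, dt)
drift = abs(invariant(q, p, xi, lns) - e0) / abs(e0)
print(f"relative drift of the invariant after 1e5 steps: {drift:.2e}")
```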

  14. Analytic corrections to CFD heating predictions accounting for changes in surface catalysis

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Inger, George R.

    1996-01-01

    Integral boundary-layer solution techniques applicable to the problem of determining aerodynamic heating rates of hypersonic vehicles in the vicinity of stagnation points and windward centerlines are briefly summarized. A new approach for combining the insight afforded by integral boundary-layer analysis with comprehensive (but time intensive) computational fluid dynamic (CFD) flowfield solutions of the thin-layer Navier-Stokes equations is described. The approach extracts CFD derived quantities at the wall and at the boundary layer edge for inclusion in a post-processing boundary-layer analysis. It allows a designer at a workstation to address two questions, given a single CFD solution. (1) How much does the heating change for a thermal protection system with different catalytic properties than was used in the original CFD solution? (2) How does the heating change at the interface of two different TPS materials with an abrupt change in catalytic efficiency? The answer to the second question is particularly important, because abrupt changes from low to high catalytic efficiency can lead to localized increase in heating which exceeds the usually conservative estimate provided by a fully catalytic wall assumption.

  15. A Proposed Time Transfer Experiment Between the USA and the South Pacific

    DTIC Science & Technology

    1991-12-01

    1 nanosecond. The corrected position will be transmitted by both the time transfer modem and the existing TV line sync dissemination process...communications satellite (AUSSAT K1) (Figure 5). With after-the-fact ephemeris correction, this is useful to the 20 nanosecond level. The second...spheric corrections will ultimately reduce ephemeris-related time transfer errors to the 1 nanosecond level. The corrected position will be transmitted

  16. Integrated Doppler Correction to TWSTFT Using Round-Trip Measurement

    DTIC Science & Technology

    2010-11-01

    42nd Annual Precise Time and Time Interval (PTTI) Meeting 251 INTEGRATED DOPPLER CORRECTION TO TWSTFT USING ROUND-TRIP MEASUREMENT Yi...Frequency Transfer (TWSTFT) data. It is necessary to correct the diurnal variation for comparing the time-scale difference. We focus on the up-/downlink...delay difference caused by satellite motion. In this paper, we propose to correct the TWSTFT data by using round-trip delay measurement. There are

  17. Aerobic digestion reduces the quantity of antibiotic resistance genes in residual municipal wastewater solids

    PubMed Central

    Burch, Tucker R.; Sadowsky, Michael J.; LaPara, Timothy M.

    2012-01-01

    Numerous initiatives have been undertaken to circumvent the problem of antibiotic resistance, including the development of new antibiotics, the use of narrow spectrum antibiotics, and the reduction of inappropriate antibiotic use. We propose an alternative but complementary approach to reduce antibiotic-resistant bacteria (ARB) by implementing more stringent technologies for treating municipal wastewater, which is known to contain large quantities of ARB and antibiotic resistance genes (ARGs). In this study, we investigated the ability of conventional aerobic digestion to reduce the quantity of ARGs in untreated wastewater solids. A bench-scale aerobic digester was fed untreated wastewater solids collected from a full-scale municipal wastewater treatment facility. The reactor was operated under semi-continuous flow conditions for more than 200 days at a residence time of approximately 40 days. During this time, the quantities of tet(A), tet(W), and erm(B) decreased by more than 90%. In contrast, intI1 did not decrease, and tet(X) increased in quantity by 5-fold. Following operation in semi-continuous flow mode, the aerobic digester was converted to batch mode to determine the first-order decay coefficients, with half-lives ranging from as short as 2.8 days for tet(W) to as long as 6.3 days for intI1. These results demonstrated that aerobic digestion can be used to reduce the quantity of ARGs in untreated wastewater solids, but that rates can vary substantially depending on the reactor design (i.e., batch vs. continuous-flow) and the specific ARG. PMID:23407455

  18. Aerobic digestion reduces the quantity of antibiotic resistance genes in residual municipal wastewater solids.

    PubMed

    Burch, Tucker R; Sadowsky, Michael J; Lapara, Timothy M

    2013-01-01

    Numerous initiatives have been undertaken to circumvent the problem of antibiotic resistance, including the development of new antibiotics, the use of narrow spectrum antibiotics, and the reduction of inappropriate antibiotic use. We propose an alternative but complementary approach to reduce antibiotic-resistant bacteria (ARB) by implementing more stringent technologies for treating municipal wastewater, which is known to contain large quantities of ARB and antibiotic resistance genes (ARGs). In this study, we investigated the ability of conventional aerobic digestion to reduce the quantity of ARGs in untreated wastewater solids. A bench-scale aerobic digester was fed untreated wastewater solids collected from a full-scale municipal wastewater treatment facility. The reactor was operated under semi-continuous flow conditions for more than 200 days at a residence time of approximately 40 days. During this time, the quantities of tet(A), tet(W), and erm(B) decreased by more than 90%. In contrast, intI1 did not decrease, and tet(X) increased in quantity by 5-fold. Following operation in semi-continuous flow mode, the aerobic digester was converted to batch mode to determine the first-order decay coefficients, with half-lives ranging from as short as 2.8 days for tet(W) to as long as 6.3 days for intI1. These results demonstrated that aerobic digestion can be used to reduce the quantity of ARGs in untreated wastewater solids, but that rates can vary substantially depending on the reactor design (i.e., batch vs. continuous-flow) and the specific ARG.

  19. Need for optimal body composition data analysis using air-displacement plethysmography in children and adolescents.

    PubMed

    Bosy-Westphal, Anja; Danielzik, Sandra; Becker, Christine; Geisler, Corinna; Onur, Simone; Korth, Oliver; Bührens, Frederike; Müller, Manfred J

    2005-09-01

    Air-displacement plethysmography (ADP) is now widely used for body composition measurement in pediatric populations. However, the manufacturer's software was developed for adults, which introduces a potential bias when it is applied to children and adolescents, and recent publications do not consistently use child-specific corrections. We therefore analyzed child-specific ADP corrections with respect to the quantity and etiology of bias compared with adult formulas. An optimal correction protocol is provided, giving step-by-step instructions for the calculations. In this study, 258 children and adolescents (143 girls and 115 boys ranging from 5 to 18 y) with a high prevalence of overweight or obesity (28.0% in girls and 22.6% in boys) were examined by ADP applying the manufacturer's software as well as published equations for child-specific corrections for the surface area artifact (SAA), thoracic gas volume (TGV), and density of fat-free mass (FFM). Compared with child-specific equations for SAA, TGV, and density of FFM, the mean overestimation of the percentage of fat mass using the manufacturer's software was 10% in children and adolescents. Half of the bias derived from the use of Siri's equation, which is not corrected for age-dependent differences in FFM density. An additional 3 and 2% of bias resulted from the application of adult equations for the prediction of SAA and TGV, respectively. Different child-specific equations used to predict TGV did not differ in the percentage of fat mass. We conclude that there is a need for child-specific equations in ADP raw data analysis considering SAA, TGV, and density of FFM.
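
    To make the source of the largest bias concrete: the two-compartment Siri equation converts body density into percent fat using constants derived from the adult density of fat-free mass, whereas child-specific (Lohman-type) equations substitute age- and sex-dependent constants. A minimal sketch, with the child constants shown as illustrative placeholders rather than published values:

      def percent_fat_siri(body_density):
          # adult Siri equation: assumes adult FFM density (~1.100 g/mL)
          return (4.95 / body_density - 4.50) * 100.0

      def percent_fat_child(body_density, c1, c2):
          # Lohman-type form with age- and sex-specific constants c1, c2
          # (the published constants for the child's age and sex must be looked up)
          return (c1 / body_density - c2) * 100.0

      db = 1.040                                 # measured body density, g/mL (illustrative)
      print(percent_fat_siri(db))                # ~26.0 % fat with adult constants
      print(percent_fat_child(db, 5.30, 4.89))   # ~20.6 % with example child constants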

  20. SeaWinds Scatterometer Wind Vector Retrievals Within Hurricanes Using AMSR and NEXRAD to Perform Corrections for Precipitation Effects: Comparison of AMSR and NEXRAD Retrievals of Rain

    NASA Technical Reports Server (NTRS)

    Weissman, David E.; Hristova-Veleva, Svetla; Callahan, Philip

    2006-01-01

    The opportunity provided by satellite scatterometers to measure ocean surface winds in strong storms and hurricanes is diminished by the errors in the received backscatter (SIGMA-0) caused by the attenuation, scattering and surface roughening produced by heavy rain. Providing a good rain correction is a very challenging problem, particularly at Ku band (13.4 GHz) where rain effects are strong. Corrections to the scatterometer measurements of ocean surface winds can be pursued with either of two different methods: empirical or physical modeling. The latter method is employed in this study because of the availability of near-simultaneous and collocated measurements provided by the MIDORI-II suite of instruments. The AMSR was designed to measure atmospheric water-related parameters on a spatial scale comparable to the SeaWinds scatterometer. These quantities can be converted into volumetric attenuation and scattering at the Ku-band frequency of SeaWinds. Optimal estimates of the volume backscatter and attenuation require knowledge of the three-dimensional distribution of reflectivity on a scale comparable to that of the precipitation. Study cases selected near the US coastline enable the much higher resolution NEXRAD reflectivity measurements to be used to evaluate the AMSR estimates. We are also conducting research into the effects of different beam geometries and nonuniform beamfilling of precipitation within the field-of-view of the AMSR and the scatterometer. Furthermore, both AMSR and NEXRAD estimates of atmospheric correction can be used to produce corrected SIGMA-0s, which are then input to the JPL wind retrieval algorithm.
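
    At its simplest, the physical-modeling correction amounts to removing the rain volume echo and undoing the two-way path attenuation before wind retrieval. A minimal sketch of that bookkeeping, assuming the AMSR-derived quantities have already been converted to a Ku-band rain backscatter and a two-way path-integrated attenuation (beam geometry and beam-filling effects are ignored here):

      import math

      def db_to_lin(x_db):
          return 10.0 ** (x_db / 10.0)

      def lin_to_db(x):
          return 10.0 * math.log10(x)

      def correct_sigma0(sigma0_meas_db, rain_echo_db, two_way_atten_db):
          # model: sigma0_meas = sigma0_surface * A + sigma0_rain   (linear units, A < 1)
          meas = db_to_lin(sigma0_meas_db)
          rain = db_to_lin(rain_echo_db)
          atten = db_to_lin(-two_way_atten_db)   # two-way attenuation factor
          surface = (meas - rain) / atten
          if surface <= 0.0:
              raise ValueError("rain echo dominates; surface sigma0 not recoverable")
          return lin_to_db(surface)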

  1. Study of the GPS inter-frequency calibration of timing receivers

    NASA Astrophysics Data System (ADS)

    Defraigne, P.; Huang, W.; Bertrand, B.; Rovera, D.

    2018-02-01

    When calibrating Global Positioning System (GPS) stations dedicated to timing, the hardware delays of P1 and P2, the P(Y)-codes on frequencies L1 and L2, are determined separately. In the international atomic time (TAI) network the GPS stations of the time laboratories are calibrated relatively against reference stations. This paper aims at determining the consistency between the P1 and P2 hardware delays (called dP1 and dP2) of these reference stations, and at examining the stability of the inter-signal hardware delays dP1-dP2 of all the stations in the network. The method consists of determining the dP1-dP2 directly from the GPS pseudorange measurements corrected for the frequency-dependent antenna phase center and the frequency-dependent ionosphere corrections, and then comparing these computed dP1-dP2 to the calibrated values. Our results show that the differences between the computed and calibrated dP1-dP2 are well inside the expected combined uncertainty of the two quantities. Furthermore, the consistency between the calibrated time transfer solutions obtained from either single-frequency P1 or dual-frequency P3 for reference laboratories is shown to be about 1.0 ns, well inside the 2.1 ns uB uncertainty of a time transfer link based on GPS P3 or Precise Point Positioning. This demonstrates the good consistency between the P1 and P2 hardware delays of the reference stations used for calibration in the TAI network. The long-term stability of the inter-signal hardware delays is also analysed from the computed dP1-dP2. It is shown that only variations larger than 2 ns can be detected for a particular station, while variations of 200 ps can be detected when differentiating the results between two stations. Finally, we also show that in the differential calibration process as used in the TAI network, using the same antenna phase center or using different positions for the L1 and L2 signals gives maximum differences of 200 ps on the hardware delays of the separate codes P1 and P2; however, the final impact on the P3 combination is less than 10 ps.
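
    The core of the computation is the geometry-free combination: differencing P1 and P2 cancels geometry, clocks, and troposphere, leaving the ionosphere term plus the inter-signal hardware bias. A minimal sketch under simplifying assumptions (satellite differential code biases, antenna phase-center differences, and multipath are neglected; an external model supplies the L1 ionospheric delay):

      import numpy as np

      F1, F2 = 1575.42e6, 1227.60e6        # GPS L1/L2 carrier frequencies, Hz
      GAMMA = (F1 / F2) ** 2               # ~1.6469

      def estimate_dp1_minus_dp2(p1, p2, iono_l1):
          # simplified model, in metres:  P2 - P1 = (GAMMA - 1) * I_L1 + (dP2 - dP1)
          p1, p2, iono_l1 = map(np.asarray, (p1, p2, iono_l1))
          residual = (p2 - p1) - (GAMMA - 1.0) * iono_l1
          return -residual.mean()          # dP1 - dP2, averaged over all epochs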

  2. Development of a real-time wave field reconstruction TEM system (II): correction of coma aberration and 3-fold astigmatism, and real-time correction of 2-fold astigmatism.

    PubMed

    Tamura, Takahiro; Kimura, Yoshihide; Takai, Yoshizo

    2018-02-01

    In this study, a function for correcting coma aberration and 3-fold astigmatism, and for correcting 2-fold astigmatism in real time, was newly incorporated into a recently developed real-time wave field reconstruction TEM system. The aberration correction function was developed by modifying the image-processing software previously designed for auto focus tracking, as described in the first article of this series. Using the newly developed system, the coma aberration and 3-fold astigmatism were corrected using aberration coefficients obtained experimentally before the processing was carried out. In this study, these aberration coefficients were estimated from an apparent 2-fold astigmatism induced under tilted-illumination conditions. In contrast, 2-fold astigmatism could be measured and corrected in real time from the reconstructed wave field. Here, the measurement precision for 2-fold astigmatism was found to be ±0.4 nm and ±2°. All of these aberration corrections, as well as auto focus tracking, were performed at a video frame rate of 1/30 s. Thus, the proposed novel system is promising for quantitative and reliable in situ observations, particularly in environmental TEM applications.

  3. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented by hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images with hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly save the cost of CIS. This method not only supports single image capture, but also bracketing to capture three images at a time. The proposed method was implemented by hardware description language and verified by a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time with a low cost and can correct the color of under-exposed images well.
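
    The ILUT construction is, in essence, histogram matching: the cumulative histogram of the short-exposure preview is mapped onto that of the long-exposure preview, and the resulting table is applied per pixel to the captured frame. A minimal software sketch of that idea (the paper's hardware pipeline and its exact ILUT refinements are not reproduced here):

      import numpy as np

      def build_ilut(short_preview, long_preview, levels=256):
          # match the tonal distribution of the under-exposed preview to the long exposure
          h_short, _ = np.histogram(short_preview, bins=levels, range=(0, levels))
          h_long, _ = np.histogram(long_preview, bins=levels, range=(0, levels))
          cdf_short = np.cumsum(h_short) / h_short.sum()
          cdf_long = np.cumsum(h_long) / h_long.sum()
          # for each input level, pick the output level whose CDF first reaches it
          lut = np.searchsorted(cdf_long, cdf_short).clip(0, levels - 1)
          return lut.astype(np.uint8)

      def apply_lut(image, lut):
          return lut[image]   # pure per-pixel table lookup: no frame buffering required

    Because the table is built from the two previews before the full-resolution frame arrives, the captured image can be corrected as it streams off the sensor, which is what removes the need for frame memory.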

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch

    We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, where the Bardeen potentials are replaced with line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations which are induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.

  5. Proceedings of the First International Symposium on the Interaction of Non-Nuclear Munitions with Structures: Held at U.S. Air Force Academy, Colorado on May 10-13, 1983. Part 1.

    DTIC Science & Technology

    1983-05-01

    ...model which uses ... that prediction can be made ... We have been developing a general solution for a self-consistent set of metric ... analytical studies of the nonlinear response of reinforced concrete structures have ... At present, multi-dimensional ... quantities is available and the applications are limited in this respect. However, the entire development is "self-correcting" in the sense that as

  6. Nutrition and orthomolecular supplementation in lung cancer patients.

    PubMed

    Campos, Diana; Austerlitz, Carlos; Allison, Ron R; Póvoa, Helion; Sibata, Claudio

    2009-12-01

    This article reviews updates and provides some data related to nutritional and orthomolecular supplementation in oncology patients, with an emphasis on lung cancer, a commonly diagnosed tumor with significant nutritional disturbances. Cancer and its treatment play a significant role in nutritional imbalance, which likely has a negative impact on the patient in terms of both quality and quantity of life. Nutritional supplementation may correct these imbalances with significant clinical benefit, both physiological and psychological. This review will assist in providing clinically useful data to assess the cancer patient's nutritional status and to guide nutritional intervention to assist these patients' recovery.

  7. [Surgery of the breast on transgender persons].

    PubMed

    Karhunen-Enckell, Ulla; Kolehmainen, Maija; Kääriäinen, Minna; Suominen, Sinikka

    2015-01-01

    For a female-to-male transgender person, mastectomy is the most important procedure making the social interaction easier. Along with the size of the breasts, the quantity and quality of skin will influence the selection of surgical technique. Although complications are rare, corrective surgery is performed for as many as 40% of the patients. Of male-to-female transsexual persons, 60 to 70% opt for breast enlargement. Breast enlargement can be carried out by using either silicone implants or fat transplantation. Since the surgical procedures on breasts are irreversible, their implementation requires confirmation of the diagnosis of transsexualism by a multidisciplinary team.

  8. NUCLEAR RADIATION DOSIMETER USING COMPOSITE FILTER AND A SINGLE ELEMENT FILTER

    DOEpatents

    Storm, E.; Shlaer, S.

    1964-04-21

    A nuclear radiation dosimeter is described that uses, in combination, a composite filter and a single element filter. The composite filter contains a plurality of comminuted metals having K-edges evenly distributed over the energy range of interest and the quantity of each of the metals is selected to result in filtering in an amount inversely proportional to the sensitivity of the film in the range over 100 keV. A copper filter is used that has a thickness to contribute the necessary additional correction in the interval between 40 and 100 keV. (AEC)

  9. Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.

    PubMed

    Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A

    2013-11-01

    We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
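
    The construction the authors describe has the same spirit as the widely used probability-integral-transform check: if the posterior is computed correctly, then the posterior CDF evaluated at the true parameter, over many simulated data sets, is uniformly distributed. A minimal generic sketch (not the authors' exact diagnostic; prior_sample, simulate, and posterior_cdf are caller-supplied stand-ins):

      import numpy as np
      from scipy import stats

      def posterior_uniformity_check(prior_sample, simulate, posterior_cdf, n_trials=1000):
          # prior_sample(): draw a true parameter; simulate(theta): draw data given theta
          # posterior_cdf(theta, data): P(theta' <= theta | data) under the tested posterior
          u = np.empty(n_trials)
          for i in range(n_trials):
              theta = prior_sample()
              data = simulate(theta)
              u[i] = posterior_cdf(theta, data)
          # deviations from uniformity reveal wrong variance, skewness, peak position, ...
          return stats.kstest(u, "uniform")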

  10. Who sleeps best? Longitudinal patterns and covariates of change in sleep quantity, quality, and timing across four university years.

    PubMed

    Galambos, Nancy L; Vargas Lascano, Dayuma I; Howard, Andrea L; Maggs, Jennifer L

    2013-01-01

    This study tracked change over time in sleep quantity, disturbance, and timing, and sleep's covariations with living situation, stress, social support, alcohol use, and grade point average (GPA) across four years of university in 186 Canadian students. Women slept longer as they moved through university, and men slept less; rise times were later each year. Students reported sleeping fewer hours, more sleep disturbances, and later rise times during years with higher stress. In years when students lived away from home, they reported more sleep disturbances, later bedtimes, and later rise times. Living on campus was associated with later bedtimes and rise times. Alcohol use was higher and GPA was lower when bedtimes were later. The implications of these observed patterns for understanding the correlates and consequences of university students' sleep are discussed.

  11. HICOSMO - cosmology with a complete sample of galaxy clusters - I. Data analysis, sample selection and luminosity-mass scaling relation

    NASA Astrophysics Data System (ADS)

    Schellenberger, G.; Reiprich, T. H.

    2017-08-01

    The X-ray regime, where the most massive visible component of galaxy clusters, the intracluster medium, is visible, offers directly measured quantities, like the luminosity, and derived quantities, like the total mass, to characterize these objects. The aim of this project is to analyse a complete sample of galaxy clusters in detail and constrain cosmological parameters, like the matter density, Ωm, or the amplitude of initial density fluctuations, σ8. The purely X-ray flux-limited sample (HIFLUGCS) consists of the 64 X-ray brightest galaxy clusters, which are excellent targets for studying the systematic effects that can bias results. We analysed in total 196 Chandra observations of the 64 HIFLUGCS clusters, with a total exposure time of 7.7 Ms. Here, we present our data analysis procedure (including an automated substructure detection and an energy band optimization for surface brightness profile analysis) that gives individually determined, robust total mass estimates. These masses are tested against dynamical and Planck Sunyaev-Zeldovich (SZ) derived masses of the same clusters, where good overall agreement is found with the dynamical masses. The Planck SZ masses seem to show a mass-dependent bias relative to our hydrostatic masses; possible biases in this mass-mass comparison are discussed, including the Planck selection function. Furthermore, we show the results for the (0.1-2.4) keV luminosity versus mass scaling relation. The overall slope of the sample (1.34) is in agreement with expectations and values from the literature. Splitting the sample into galaxy groups and clusters reveals, even after a selection bias correction, that galaxy groups exhibit a significantly steeper slope (1.88) compared to clusters (1.06).

  12. Prospective change control analysis of transfer of platelet concentrate production from a specialized stem cell transplantation unit to a blood transfusion center.

    PubMed

    Sigle, Joerg-Peter; Medinger, Michael; Stern, Martin; Infanti, Laura; Heim, Dominik; Halter, Joerg; Gratwohl, Alois; Buser, Andreas

    2012-01-01

    Specialized centers claim a need for blood component production independent from the general blood transfusion services. We performed a prospective change control analysis of the transfer of platelet (PLT) production for hematological patients at the University Hospital Basel from the Department of Hematology to the Blood Transfusion Centre, Swiss Red Cross, Basel in February 2006. We wanted to demonstrate that neither quality nor transfusion outcome was affected. Production quantity and efficiency, product quality, and transfusion outcome were systematically recorded. A 2-year pretransfer period was compared to a 2-year post-transfer period. After the transfer, production quantity at the Blood Transfusion Centre increased from 4,483 to 6,190 PLT concentrates. Production efficiency increased, with a significant decrease in the rate of expired products (18% vs. 8%; P < 0.001). Product quality showed a slight decrease in median PLT count per unit (2.84 vs. 2.75 × 10(11); P < 0.001) and a slight increase in mean storage time prior to transfusion (3.18 vs. 3.30 days; P < 0.001). Transfusion outcome measured as median corrected count increment one hour post-transfusion (10.5 vs. 10.7; P = 0.3) and the rate of patients with inadequate post-transfusion increment (31.5% vs. 32.1%; P = 0.6) did not differ. Supply and quality of PLT products were maintained after the transfer of PLT production to the Blood Transfusion Centre. An optimization of the supply chain process with markedly decreased expiration rates was achieved. These results argue against the need for specialized PLT production sites for selected patient groups. Copyright © 2012 Wiley Periodicals, Inc.

  13. A Cross-Layer, Anomaly-Based IDS for WSN and MANET

    PubMed Central

    Amouri, Amar; Manthena, Raju

    2018-01-01

    Intrusion detection system (IDS) design for mobile adhoc networks (MANET) is a crucial component for maintaining the integrity of the network. The need for rapid deployment of IDS capability with minimal data availability for training and testing is an important requirement of such systems, especially for MANETs deployed in highly dynamic scenarios, such as battlefields. This work proposes a two-level detection scheme for detecting malicious nodes in MANETs. The first level deploys dedicated sniffers working in promiscuous mode. Each sniffer utilizes a decision-tree-based classifier that generates quantities which we refer to as correctly classified instances (CCIs) every reporting time. In the second level, the CCIs are sent to an algorithmically run supernode that calculates quantities, which we refer to as the accumulated measure of fluctuation (AMoF) of the received CCIs for each node under test (NUT). A key concept used in this work is that the variance of the smaller population, which represents the malicious nodes in the network, is greater than the variance of the larger population, which represents the normal nodes in the network. A linear regression process is then performed in parallel with the calculation of the AMoF for fitting purposes and to set a proper threshold based on the slope of the fitted lines. As a result, the malicious nodes are efficiently and effectively separated from the normal nodes. The proposed scheme is tested for various node velocities and power levels and shows promising detection performance even at low-power levels. The results presented also apply to wireless sensor networks (WSN) and represent a novel IDS scheme for such networks. PMID:29470446
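
    The second-level computation can be pictured as follows: the supernode accumulates the fluctuation of each node's reported CCIs and uses the slope of a straight-line fit to separate the two populations. A minimal sketch, assuming AMoF is the running sum of absolute deviations of the CCI series from its mean (the paper's precise definition may differ):

      import numpy as np

      def amof(cci_series):
          # accumulated measure of fluctuation of the CCIs reported for one node
          c = np.asarray(cci_series, dtype=float)
          return np.cumsum(np.abs(c - c.mean()))

      def amof_slope(cci_series):
          a = amof(cci_series)
          t = np.arange(len(a), dtype=float)
          slope, _ = np.polyfit(t, a, 1)    # linear regression, as in the paper
          return slope                      # higher slope flags the more variable (malicious) nodes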

  14. Eating habits and physical activity of adolescents in Katowice--the teenagers' declarations vs. their parents' beliefs.

    PubMed

    Bąk-Sosnowska, Monika; Skrzypulec-Plinta, Violetta

    2012-09-01

    To analyse eating and physical activity preferences among adolescent school children and to compare the teenagers' lifestyle declarations with their parents' beliefs. Unfavorable behavior in eating habits and physical activity may result in serious dysfunctions and diseases, such as eating disorders and incorrect body mass. A retrospective cross-sectional study conducted in 2010-2011. The data was collated from 711 pupils and 266 parents. The survey included questions on: breakfast consumption, types of food eaten for breakfast, time of supper, the daily number of meals, the quantity of fruit and vegetables, food products purchased in the school shop, as well as the type and level of physical activity. In the population of children aged 14-15 years, 10% do not eat 1st breakfast and 15% do not eat 2nd breakfast, 50% eat dairy products for 1st breakfast, 70% have sandwiches for 2nd breakfast, 45% most frequently buy snacks in the school shop, 65% prefer physical activity in the form of team games, and 90% willingly participate in PE classes. The parents' beliefs differ from their children's declarations with regard to: breakfast consumption, the number of meals a day, the quantity of fruit, and participation in PE classes. The lifestyle of the studied adolescents is within the norms recommended for their age group, although there is a tendency to skip breakfast. A positive aspect is the adolescents' engagement in physical activity. Parents underestimate their children's level of physical activity and overestimate their daily number of meals. The study confirms the validity of conducting health education, addressed to both children and their parents, with regard to correct eating habits and physical activity, as well as prevention of eating disorders. © 2012 Blackwell Publishing Ltd.

  15. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.

  16. A Cross-Layer, Anomaly-Based IDS for WSN and MANET.

    PubMed

    Amouri, Amar; Morgera, Salvatore D; Bencherif, Mohamed A; Manthena, Raju

    2018-02-22

    Intrusion detection system (IDS) design for mobile adhoc networks (MANET) is a crucial component for maintaining the integrity of the network. The need for rapid deployment of IDS capability with minimal data availability for training and testing is an important requirement of such systems, especially for MANETs deployed in highly dynamic scenarios, such as battlefields. This work proposes a two-level detection scheme for detecting malicious nodes in MANETs. The first level deploys dedicated sniffers working in promiscuous mode. Each sniffer utilizes a decision-tree-based classifier that generates quantities which we refer to as correctly classified instances (CCIs) every reporting time. In the second level, the CCIs are sent to an algorithmically run supernode that calculates quantities, which we refer to as the accumulated measure of fluctuation (AMoF) of the received CCIs for each node under test (NUT). A key concept used in this work is that the variance of the smaller population, which represents the malicious nodes in the network, is greater than the variance of the larger population, which represents the normal nodes in the network. A linear regression process is then performed in parallel with the calculation of the AMoF for fitting purposes and to set a proper threshold based on the slope of the fitted lines. As a result, the malicious nodes are efficiently and effectively separated from the normal nodes. The proposed scheme is tested for various node velocities and power levels and shows promising detection performance even at low-power levels. The results presented also apply to wireless sensor networks (WSN) and represent a novel IDS scheme for such networks.

  17. Identifying inaccuracy of MS Project using system analysis

    NASA Astrophysics Data System (ADS)

    Fachrurrazi; Husin, Saiful; Malahayati, Nurul; Irzaidi

    2018-05-01

    The problem encountered in project owners' financial accounting reports is the difference between total project costs in MS Project and those computed to the Indonesian standard (SNI, the cost estimating standard of Indonesia). It is one of the MS Project problems concerning cost accuracy, so cost data cannot be used in an integrated way for all project components. This study focuses on finding the causes of the inaccuracy of MS Project. The operational aims of this study are: (i) identifying the cost analysis procedures of both the current method (SNI) and MS Project; (ii) identifying the cost bias in each element of the cost analysis procedure; and (iii) analysing the cost differences (cost bias) in each element to identify the cause of the inaccuracy of MS Project relative to SNI. The method in this study is a comparative system analysis of MS Project and SNI. The results are: (i) the MS Project Work of Resources field is limited to two decimal digits, which leads to inaccuracy; the Work of Resources (referred to as effort) in MS Project represents the multiplication of the quantities of activities and the resource requirements in SNI; (ii) MS Project and SNI differ in their costing methods: SNI uses quantity-based costing (QBC), whereas MS Project uses time-based costing (TBC). Based on this research, we recommend that contractors who use SNI apply an adjustment (correction index) to the Work of Resources in MS Project, so that it can be used in an integrated way with the project owner's financial accounting system. Further research will be conducted to improve MS Project as an integrated tool for all project participants.
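
    The two-decimal limitation is easy to reproduce. A minimal sketch with illustrative numbers (the quantity, coefficient, and rate below are invented for the example), showing how rounding the effort before costing shifts the total relative to quantity-based costing:

      quantity = 137.5        # quantity of an activity, e.g. m2 of brickwork (illustrative)
      coefficient = 0.0667    # worker-hours required per unit quantity (illustrative)
      rate = 12000.0          # cost per worker-hour (illustrative)

      exact_effort = quantity * coefficient        # 9.17125 h  (SNI: quantity-based costing)
      stored_effort = round(exact_effort, 2)       # 9.17 h     (MS Project: two-decimal Work field)

      bias = (exact_effort - stored_effort) * rate
      print(bias)   # 15.0 -- a per-activity cost bias that accumulates over the whole project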

  18. New inverse synthetic aperture radar algorithm for translational motion compensation

    NASA Astrophysics Data System (ADS)

    Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.

    1991-10-01

    Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
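
    For context on the baseline the burst derivative measure is compared against: the traditional entropy measure treats the normalized image power as a probability distribution and takes a sharply focused image to be one of low entropy. A minimal sketch of that entropy computation (the burst derivative measure itself is not reproduced here):

      import numpy as np

      def image_entropy(complex_image):
          # focus metric used in traditional autofocus: lower entropy = better focus
          power = np.abs(complex_image) ** 2
          p = power / power.sum()
          return -np.sum(p * np.log(p + 1e-30))   # small epsilon guards log(0)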

  19. Detrended cross-correlations between returns, volatility, trading activity, and volume traded for the stock market companies

    NASA Astrophysics Data System (ADS)

    Rak, Rafał; Drożdż, Stanisław; Kwapień, Jarosław; Oświȩcimka, Paweł

    2015-11-01

    We consider a few quantities that characterize trading on a stock market in a fixed time interval: logarithmic returns, volatility, trading activity (i.e., the number of transactions), and volume traded. We search for the power-law cross-correlations among these quantities aggregated over different time units from 1 min to 10 min. Our study is based on empirical data from the American stock market consisting of tick-by-tick recordings of 31 stocks listed in the Dow Jones Industrial Average during the years 2008-2011. Since all the considered quantities except the returns show strong daily patterns related to the variable trading activity in different parts of a day, which are most evident in the autocorrelation function, we remove these patterns by detrending before we proceed further with our study. We apply the multifractal detrended cross-correlation analysis with sign preserving (MFCCA) and show that the strongest power-law cross-correlations exist between trading activity and volume traded, while the weakest ones exist (or even do not exist) between the returns and the remaining quantities. We also show that the strongest cross-correlations are carried by those parts of the signals that are characterized by large and medium variance. Our observation that the most convincing power-law cross-correlations occur between trading activity and volume traded reveals the existence of strong fractal-like coupling between these quantities.
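
    MFCCA generalizes detrended cross-correlation analysis to q-th-order moments while preserving signs; the underlying second-order machinery can be sketched compactly. A minimal DCCA fluctuation function, with no multifractal q-moments and no sign preservation:

      import numpy as np

      def dcca_fluctuation(x, y, s, order=1):
          # detrended cross-covariance F2(s) for box size s (plain DCCA, not full MFCCA)
          x, y = np.asarray(x, float), np.asarray(y, float)
          X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
          t = np.arange(s)
          cov = []
          for i in range(len(X) // s):
              xs, ys = X[i*s:(i+1)*s], Y[i*s:(i+1)*s]
              xd = xs - np.polyval(np.polyfit(t, xs, order), t)   # local detrending
              yd = ys - np.polyval(np.polyfit(t, ys, order), t)
              cov.append(np.mean(xd * yd))
          return np.mean(cov)   # power-law growth with s signals power-law cross-correlation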

  20. Growing Large Quantities of Containerized Seedlings

    Treesearch

    Tim Pittman

    2002-01-01

    The sowing of large quantities of longleaf pine (Pinus palustris Mill.) seed into trays depends on the quality of the seed and the timing of seed sowing. This can be accomplished with mechanization. Seed quality is ensured by using a gravity table. Tray filling can be accomplished by using a ribbon-type soil mixer and an automated tray-filling...

  1. Quantity is nothing without quality: automated QA/QC for streaming sensor networks

    Treesearch

    John L. Campbell; Lindsey E. Rustad; John H. Porter; Jeffrey R. Taylor; Ethan W. Dereszynski; James B. Shanley; Corinna Gries; Donald L. Henshaw; Mary E. Martin; Wade. M. Sheldon; Emery R. Boose

    2013-01-01

    Sensor networks are revolutionizing environmental monitoring by producing massive quantities of data that are being made publicly available in near real time. These data streams pose a challenge for ecologists because traditional approaches to quality assurance and quality control are no longer practical when confronted with the size of these data sets and the...

  2. Signal Clarity: An Account of the Variability in Infant Quantity Discrimination Tasks

    ERIC Educational Resources Information Center

    Cantrell, Lisa; Boyer, Ty W.; Cordes, Sara; Smith, Linda B.

    2015-01-01

    Infants have shown variable success in quantity comparison tasks, with infants of a given age sometimes successfully discriminating numerical differences at a 2:3 ratio but requiring 1:2 and even 1:4 ratios of change at other times. The current explanations for these variable results include the two-systems proposal--a theoretical framework that…

  3. Bias correction of satellite-based rainfall data

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Biswa; Solomatine, Dimitri

    2015-04-01

    Limitation in hydro-meteorological data availability in many catchments limits the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological-model rainfall products provides new opportunities. Often, the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first find a suitable time scale at which the different data products are better comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is useful for comparing probabilities of the different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. of the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is adopted. The presented approach considers the space-time variation of rainfall and as a result the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
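
    The quantile-to-quantile correction mentioned above is straightforward to sketch: each new satellite value is pushed through the satellite sample's empirical CDF and pulled back through the gauge sample's CDF. A minimal version (the clustering extension would simply maintain one such mapping per cluster):

      import numpy as np

      def quantile_map(sat_train, gauge_train, new_sat_values):
          # empirical quantile-quantile bias correction
          sat_sorted = np.sort(sat_train)
          gauge_sorted = np.sort(gauge_train)
          p_sat = (np.arange(sat_sorted.size) + 0.5) / sat_sorted.size
          p_gauge = (np.arange(gauge_sorted.size) + 0.5) / gauge_sorted.size
          prob = np.interp(new_sat_values, sat_sorted, p_sat)   # value -> probability
          return np.interp(prob, p_gauge, gauge_sorted)         # probability -> gauge value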

  4. Detecting discontinuities in GNSS coordinate time series with STARS: case study, the Bologna and Medicina GPS sites

    NASA Astrophysics Data System (ADS)

    Bruni, S.; Zerbini, Susanna; Raicich, F.; Errico, M.; Santi, E.

    2014-12-01

    Global navigation satellite systems (GNSS) data are a fundamental source of information for achieving a better understanding of geophysical and climate-related phenomena. However, discontinuities in the coordinate time series might be a severe limiting factor for the reliable estimation of long-term trends. A methodological approach has been adapted from Rodionov (Geophys Res Lett 31:L09204, 2004; Geophys Res Lett 31:L12707, 2006) and from Rodionov and Overland (J Marine Sci 62:328-332, 2005) to identify both the epoch of occurrence and the magnitude of jumps corrupting GNSS data sets without any a priori information on these quantities. The procedure is based on the Sequential t test Analysis of Regime Shifts (STARS) (Rodionov in Geophys Res Lett 31:L09204, 2004). The method has been tested against a synthetic data set characterized by typical features exhibited by real GNSS time series, such as linear trend, seasonal cycle, jumps, missing epochs and a combination of white and flicker noise. The results show that the offsets identified by the algorithm are split into 48% true-positive, 28% false-positive and 24% false-negative events. The procedure has then been applied to GPS coordinate time series of stations located in the southeastern Po Plain, in Italy. The series span more than 15 years and are affected by offsets of different nature. The methodology proves to be effective, as confirmed by the comparison between the corrected GPS time series and those obtained by other observation techniques.
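
    The essence of a STARS-style scan can be sketched as a moving pair of windows and a Student t-test on their means; the real algorithm adds sequential testing and regime bookkeeping, so treat this as an illustration only:

      import numpy as np
      from scipy import stats

      def detect_jumps(series, window=30, alpha=0.01):
          # simplified two-window scan for level shifts in a coordinate time series
          x = np.asarray(series, float)
          jumps = []
          for i in range(window, len(x) - window):
              before, after = x[i - window:i], x[i:i + window]
              t, p = stats.ttest_ind(before, after, equal_var=False)
              if p < alpha:
                  jumps.append((i, after.mean() - before.mean()))  # epoch and magnitude
          return jumps   # consecutive flags around one jump should be merged in practice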

  5. Platinum/Tin Oxide/Silica Gel Catalyst Oxidizes CO

    NASA Technical Reports Server (NTRS)

    Upchurch, Billy T.; Davis, Patricia P.; Schryer, David R.; Miller, Irvin M.; Brown, David; Van Norman, John D.; Brown, Kenneth G.

    1991-01-01

    Heterogeneous catalyst of platinum, tin oxide, and silica gel combines small concentrations of laser dissociation products, CO and O2, to form CO2 during long times at ambient temperature. Developed as means to prevent accumulation of these products in sealed CO2 lasers. Effective at ambient operating temperatures and installs directly in laser envelope. Formulated to have very high surface area and to chemisorb controlled quantities of moisture: chemisorbed water, contained within and upon its structure, makes it highly active and very long-lived, so only a small quantity is needed for long times.

  6. 75 FR 53687 - Pacific Gas and Electric Company, California; Notice Correcting Times for Public Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-01

    ... Electric Company, California; Notice Correcting Times for Public Draft Environmental Impact Statement... time for the morning meeting as 9 a.m.-11 p.m. This notice corrects that error to indicate the meeting is from 9 a.m.-11 a.m. The time and location of the meetings are as follows: Morning Meeting: Date...

  7. High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system

    NASA Astrophysics Data System (ADS)

    Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng

    2017-09-01

    Several different integration times are always set for a wide-dynamic-range, linear, continuously variable integration time infrared radiometry system; traditional calibration-based non-uniformity corrections (NUC) are therefore usually conducted one integration time at a time, and several calibration sources are required, which makes the calibration and NUC process time-consuming. In this paper, the difference in NUC coefficients between different integration times is discussed, and a novel NUC method called high-efficiency NUC, which builds on traditional calibration-based non-uniformity correction, is proposed. It obtains the correction coefficients for all integration times over the whole linear dynamic range by recording only three different images of a standard blackbody. First, the mathematical procedure of the proposed non-uniformity correction method is validated, and its performance is then demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean value of the Normalized Root Mean Square (NRMS) is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C prove that this method has a higher accuracy compared with traditional calibration-based NUC, while at other integration times and temperatures there is still a good correction effect. Moreover, it greatly reduces the number of correction-time and temperature sampling points, offers good real-time performance, and is suitable for field measurement.
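
    For reference, the calibration-based correction that the method builds on is the classic two-point NUC: per-pixel gain and offset solved from two blackbody images at known radiance levels. A minimal sketch (the paper's contribution, extending one set of coefficients across all integration times from three images, is not reproduced here):

      import numpy as np

      def two_point_nuc(raw_cold, raw_hot, target_cold, target_hot):
          # per-pixel gain/offset from blackbody frames at two temperatures
          gain = (target_hot - target_cold) / (raw_hot - raw_cold)
          offset = target_cold - gain * raw_cold
          return gain, offset

      def apply_nuc(raw_frame, gain, offset):
          return gain * raw_frame + offset   # flattens the fixed-pattern non-uniformity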

  8. The acute response of pericytes to muscle-damaging eccentric contraction and protein supplementation in human skeletal muscle.

    PubMed

    De Lisio, Michael; Farup, Jean; Sukiennik, Richard A; Clevenger, Nicole; Nallabelli, Julian; Nelson, Brett; Ryan, Kelly; Rahbek, Stine K; de Paoli, Frank; Vissing, Kristian; Boppart, Marni D

    2015-10-15

    Skeletal muscle pericytes increase in quantity following eccentric exercise (ECC) and contribute to myofiber repair and adaptation in mice. The purpose of the present investigation was to examine pericyte quantity in response to muscle-damaging ECC and protein supplementation in human skeletal muscle. Male subjects were divided into protein supplement (WHY; n = 12) or isocaloric placebo (CHO; n = 12) groups and completed ECC using an isokinetic dynamometer. Supplements were consumed 3 times/day throughout the experimental time course. Biopsies were collected prior to (PRE) and 3, 24, 48, and 168 h following ECC. Reflective of the damaging protocol, integrin subunits, including α7, β1A, and β1D, increased (3.8-fold, 3.6-fold and 3.9-fold, respectively, P < 0.01) 24 h post-ECC with no difference between supplements. Pericyte quantity did not change post-ECC. WHY resulted in a small, but significant, decrease in ALP(+) pericytes when expressed as a percentage of myonuclei (CHO 6.8 ± 0.3% vs. WHY 5.8 ± 0.3%, P < 0.05) or per myofiber (CHO 0.119 ± 0.01 vs. WHY 0.098 ± 0.01, P < 0.05). The quantity of myonuclei expressing serum response factor and the number of pericytes expressing serum response factor, did not differ as a function of time post-ECC or supplement. These data demonstrate that acute muscle-damaging ECC increases α7β1 integrin content in human muscle, yet pericyte quantity is largely unaltered. Future studies should focus on the capacity for ECC to influence pericyte function, specifically paracrine factor release as a mechanism toward pericyte contribution to repair and adaptation postexercise. Copyright © 2015 the American Physiological Society.

  9. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vollick, Dan N.

    In recent papers [D. N. Vollick, Phys. Rev. D 68, 063510 (2003)][D. N. Vollick, Classical Quantum Gravity 21, 3813 (2004).] I have argued that the observed cosmological acceleration can be accounted for by the inclusion of a 1/R term in the gravitational action in the Palatini formalism. Subsequently, Flanagan [Phys. Rev. Lett. 92, 071101 (2004)][Class. Quant. Grav. 21, 3817 (2004)] argued that this theory is equivalent to a scalar-tensor theory which produces corrections to the standard model that are ruled out experimentally. In this article I examine the Dirac field coupled to 1/R gravity. The Dirac action contains the connection, which was taken to be the Christoffel symbol, not an independent quantity, in the papers by Flanagan. Since the metric and connection are taken to be independent in the Palatini approach, it is natural to allow the connection that appears in the Dirac action to be an independent quantity. This is the approach that is taken in this paper. The resulting theory is very different and much more complicated than the one discussed in Flanagan's papers.

  11. A Nutrient Combination that Can Affect Synapse Formation

    PubMed Central

    Wurtman, Richard J.

    2014-01-01

    Brain neurons form synapses throughout the life span. This process is initiated by neuronal depolarization; however, the numbers of synapses thus formed depend on brain levels of three key nutrients: uridine, the omega-3 fatty acid DHA, and choline. Given together, these nutrients accelerate the formation of synaptic membrane, the major component of synapses. In infants, when synaptogenesis is maximal, relatively large amounts of all three nutrients are provided in bioavailable forms (e.g., uridine in the UMP of mothers' milk and infant formulas). However, in adults the uridine in foods, mostly present as RNA, is not bioavailable, and no food has ever been compellingly demonstrated to elevate plasma uridine levels. Moreover, the quantities of DHA and choline in regular foods can be insufficient for raising their blood levels enough to promote optimal synaptogenesis. In Alzheimer's disease (AD) the need for extra quantities of the three nutrients is enhanced, both because their basal plasma levels may be subnormal (reflecting impaired hepatic synthesis), and because especially high brain levels are needed for correcting the disease-related deficiencies in synaptic membrane and synapses. PMID:24763080

  12. STR melting curve analysis as a genetic screening tool for crime scene samples.

    PubMed

    Nguyen, Quang; McKinney, Jason; Johnson, Donald J; Roberts, Katherine A; Hardy, Winters R

    2012-07-01

    In this proof-of-concept study, high-resolution melt curve (HRMC) analysis was investigated as a postquantification screening tool to discriminate human CSF1PO and THO1 genotypes amplified with mini-STR primers in the presence of SYBR Green or LCGreen Plus dyes. A total of 12 CSF1PO and 11 HUMTHO1 genotypes were analyzed on the LightScanner HR96 and LS-32 systems and were correctly differentiated based upon their respective melt profiles. Short STR amplicon melt curves were affected by repeat number, and single-source and mixed DNA samples were additionally differentiated by the formation of heteroduplexes. Melting curves were shown to be unique and reproducible from DNA quantities ranging from 20 to 0.4 ng and distinguished identical from nonidentical genotypes from DNA derived from different biological fluids and compromised samples. Thus, a method is described which can assess both the quantity and the possible probative value of samples without full genotyping. 2012 American Academy of Forensic Sciences. Published 2012. This article is a U.S. Government work and is in the public domain in the U.S.A.

  13. Dead-time Corrected Disdrometer Data

    DOE Data Explorer

    Bartholomew, Mary Jane

    2008-03-05

    Original and dead-time corrected disdrometer results for observations made at SGP and TWP. The correction is based on the technique discussed in Sheppard and Joe, 1994. In addition, these files contain calculated radar reflectivity factor, mean Doppler velocity and attenuation for every measurement for both the original and dead-time corrected data at the following wavelengths: 0.316, 0.856, 3.2, 5, and 10cm (W,K,X,C,S bands). Pavlos Kollias provided the code to do these calculations.
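
    The record does not reproduce the correction formula; for orientation, the generic non-paralyzable dead-time model often applied to counting instruments is sketched below. Sheppard and Joe's 1994 technique for disdrometers is more involved (it works per drop-size channel), so this is only the standard first-order idea:

      def dead_time_correct(measured_rate, dead_time):
          # non-paralyzable detector model: true = measured / (1 - measured * dead_time)
          loss = 1.0 - measured_rate * dead_time
          if loss <= 0.0:
              raise ValueError("measured rate saturates this detector model")
          return measured_rate / loss

      # e.g. 800 counts/s with a 250-microsecond dead time -> ~1000 counts/s true rate
      print(dead_time_correct(800.0, 250e-6))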

  14. A study of the use of abstract types for the representation of engineering units in integration and test applications

    NASA Technical Reports Server (NTRS)

    Johnson, Charles S.

    1986-01-01

    Physical quantities using various units of measurement can be well represented in Ada by the use of abstract types. Computation involving these quantities (electric potential, mass, volume) can also automatically invoke the computation and checking of some of the implicitly associable attributes of measurements. Quantities can be held internally in SI units, transparently to the user, with automatic conversion. Through dimensional analysis, the type of the derived quantity resulting from a computation is known, thereby allowing dynamic checks of the equations used. The impact of the possible implementation of these techniques in integration and test applications is discussed. The overhead of computing and transporting measurement attributes is weighed against the advantages gained by their use. The construction of a run time interpreter using physical quantities in equations can be aided by the dynamic equation checks provided by dimensional analysis. The effects of high levels of abstraction on the generation and maintenance of software used in integration and test applications are also discussed.
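
    The same idea transfers directly to any language with operator overloading: hold the value internally in SI units and carry the dimension exponents alongside it, so that addition checks dimensions and multiplication derives them. A minimal Python sketch of the concept described for Ada (not the study's actual package):

      class Quantity:
          # value stored internally in SI units; dims = exponents of (m, kg, s, A)
          def __init__(self, value, dims):
              self.value, self.dims = float(value), tuple(dims)

          def __add__(self, other):
              if self.dims != other.dims:   # dynamic dimensional check
                  raise TypeError("dimension mismatch: %s vs %s" % (self.dims, other.dims))
              return Quantity(self.value + other.value, self.dims)

          def __mul__(self, other):
              # dimensional analysis: exponents add under multiplication
              return Quantity(self.value * other.value,
                              tuple(a + b for a, b in zip(self.dims, other.dims)))

      volt = Quantity(5.0, (2, 1, -3, -1))   # V = m^2 kg s^-3 A^-1
      amp = Quantity(2.0, (0, 0, 0, 1))
      power = volt * amp                     # derived type: watts, dims (2, 1, -3, 0)
      # volt + amp raises TypeError -- the equation error is caught at run time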

  15. Decay of homogeneous turbulence from a specified state

    NASA Technical Reports Server (NTRS)

    Deissler, R. G.

    1972-01-01

    The homogeneous turbulence problem is formulated by first specifying the multipoint velocity correlations or their spectral equivalents at an initial time. Those quantities, together with the correlation or spectral equations, are then used to calculate initial time derivatives of the correlations or spectra. The derivatives in turn are used in time series to calculate the evolution of turbulence quantities with time. When the problem is treated in this way, the correlation equations are closed by the initial specification of the turbulence and no closure assumption is necessary. An exponential series, which is an iterative solution of the Navier-Stokes equations, gave much better results than a Taylor power series when used with the limited available initial data. In general, the agreement between theory and experiment was good.

  16. Optimization of preservation and storage time of sponge tissues to obtain quality mRNA for next-generation sequencing.

    PubMed

    Riesgo, Ana; Pérez-Porro, Alicia R; Carmona, Susana; Leys, Sally P; Giribet, Gonzalo

    2012-03-01

    Transcriptome sequencing with next-generation sequencing technologies has the potential for addressing many long-standing questions about the biology of sponges. Transcriptome sequence quality depends on good cDNA libraries, which requires high-quality mRNA. Standard protocols for preserving and isolating mRNA often require optimization for unusual tissue types. Our aim was to assess the efficiency of two preservation modes, (i) flash freezing in liquid nitrogen (LN₂) and (ii) immersion in RNAlater, for the recovery of high-quality mRNA from sponge tissues. We also tested whether the long-term storage of samples at -80 °C affects the quantity and quality of mRNA. We extracted mRNA from nine sponge species and analysed the quantity and quality (A260/230 and A260/280 ratios) of mRNA according to preservation method, storage time, and taxonomy. The quantity and quality of mRNA depended significantly on the preservation method used (LN₂ outperforming RNAlater), the sponge species, and the interaction between them. When preservation was analysed in combination with either storage time or species, the quantity and A260/230 ratio were both significantly higher for LN₂-preserved samples. Interestingly, individual comparisons for each preservation method over time indicated that both methods performed equally efficiently during the first month, but RNAlater lost efficiency at storage times longer than 2 months compared with flash-frozen samples. In summary, we find that for long-term preservation of samples, flash freezing is the preferred method. If LN₂ is not available, RNAlater can be used, but mRNA extraction during the first month of storage is advised. © 2011 Blackwell Publishing Ltd.

  17. How the reference values for serum parathyroid hormone concentration are (or should be) established?

    PubMed

    Souberbielle, J-C; Brazier, F; Piketty, M-L; Cormier, C; Minisola, S; Cavalier, E

    2017-03-01

    Well-validated reference values are necessary for correct interpretation of a serum PTH concentration. Establishing PTH reference values requires recruiting a large reference population. Exclusion criteria for this population can be defined as any situation possibly inducing an increase or a decrease in PTH concentration. As recommended in the recent guidelines on the diagnosis and management of asymptomatic primary hyperparathyroidism, PTH reference values should be established in vitamin D-replete subjects with normal renal function, with possible stratification according to various factors such as age, gender, menopausal status, body mass index, and race. A consensus about the analytical/pre-analytical aspects of PTH measurement is also needed, with special emphasis on the nature of the sample (plasma or serum) and the timing and fasting/non-fasting status of the blood sample. Our opinion is that the blood sample for PTH measurement should be obtained in the morning after an overnight fast. Furthermore, despite the longer stability of the PTH molecule in EDTA plasma, we prefer serum, as it allows calcium, a prerequisite for correct interpretation of a PTH concentration, to be measured on the same sample. Once a consensus is reached, we believe an important international multicentre study should be performed to recruit a very extensive reference population of apparently healthy, vitamin D-replete subjects with normal renal function in order to establish PTH normative data. Due to the huge inter-method variability in PTH measurement, a sufficient quantity of blood should be obtained to allow measurement with as many PTH kits as possible.

  18. Forecasting quantities of disused household CRT appliances--a regional case study approach and its application to Baden-Württemberg.

    PubMed

    Walk, Wolfgang

    2009-02-01

    Due to special requirements regarding logistics and recycling, disused cathode ray tube (CRT) appliances are handled in some countries as a separate waste fraction. This article presents a forecast of future household waste CRT quantities based on the past and present equipment of households with television sets and computer monitors. Additional aspects taken into consideration are the product lifetime distribution and the ongoing change in display technology. Although CRT technology is fading out, the findings of this forecast show that quantities of waste CRT appliances will not decrease before 2012 in Baden-Württemberg, Germany. The results of this regional case study are not quantitatively transferable without further analysis. The method provided allows analysts to consider how the time shift between production and discard could impact recycling options, and the method could be valuable for future similar analyses elsewhere.
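
    The forecasting logic described (past sales pushed through a product lifetime distribution) amounts to a discrete convolution. A minimal sketch with hypothetical sales figures and an illustrative lifetime distribution, neither taken from the study:

        # Units discarded in year y = sum over age a of sales[y - a] * P(lifetime = a)
        sales = {2000: 100, 2001: 120, 2002: 90, 2003: 40, 2004: 10}   # hypothetical
        lifetime_pmf = {6: 0.1, 8: 0.2, 10: 0.4, 12: 0.2, 14: 0.1}     # illustrative

        def discards(year):
            return sum(sales.get(year - age, 0) * p
                       for age, p in lifetime_pmf.items())

        for y in range(2008, 2016):
            print(y, round(discards(y), 1))
        # Even though sales collapse after 2002, discards stay high for years,
        # reflecting the time shift between production and discard noted above.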

  19. Space-Time Data fusion for Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, H.; Cressie, N.

    2011-01-01

    NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values, or some function of them, and to provide uncertainty quantification for those inferences. We use a spatio-temporal statistical model that relates the unobserved quantities of interest at point level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.
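
    The simplest building block of such fusion is combining two co-located, differently noisy observations by inverse-variance weighting, which also propagates the input uncertainties into the fused estimate. A scalar sketch of that step (the paper's spatio-temporal model is far richer):

        def fuse(y1, var1, y2, var2):
            """Combine two noisy observations of the same quantity.

            Returns the minimum-variance unbiased estimate and its variance;
            the output variance is always smaller than either input variance.
            """
            w1, w2 = 1.0 / var1, 1.0 / var2
            estimate = (w1 * y1 + w2 * y2) / (w1 + w2)
            variance = 1.0 / (w1 + w2)
            return estimate, variance

        # Two instruments observe the same CO2 field value with different noise.
        est, var = fuse(392.1, 0.5**2, 393.0, 1.0**2)
        print(f"fused = {est:.2f} ppm, sd = {var**0.5:.2f} ppm")
        # fused ~ 392.28 ppm with sd ~ 0.45 ppm, tighter than either input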

  20. Estimation of the year-on-year volatility and the unpredictability of the United States energy system

    NASA Astrophysics Data System (ADS)

    Sherwin, Evan D.; Henrion, Max; Azevedo, Inês M. L.

    2018-04-01

    Long-term projections of energy consumption, supply and prices heavily influence decisions regarding long-lived energy infrastructure. Predicting the evolution of these quantities over multiple years to decades is a difficult task. Here, we estimate year-on-year volatility and unpredictability over multi-decade time frames for many quantities in the US energy system using historical projections. We determine the distribution over time of the most extreme projection errors (unpredictability) from 1985 to 2014, and the largest year-over-year changes (volatility) in the quantities themselves from 1949 to 2014. Our results show that both volatility and unpredictability have increased in the past decade, compared to the three and two decades before it. These findings may be useful for energy decision-makers to consider as they invest in and regulate long-lived energy infrastructure in a deeply uncertain world.
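
    Under the assumption that volatility is the largest relative year-over-year change in a series and unpredictability the largest relative projection error (the paper's exact definitions may differ in detail), both quantities reduce to a few lines:

        def volatility(series):
            """Largest relative year-over-year change in a historical series."""
            return max(abs(series[i] - series[i - 1]) / abs(series[i - 1])
                       for i in range(1, len(series)))

        def unpredictability(actual, projected):
            """Largest relative error of archived projections vs. outcomes."""
            return max(abs(p - a) / abs(a) for a, p in zip(actual, projected))

        gas_price  = [2.1, 2.3, 2.2, 4.0, 3.1]   # hypothetical annual values
        projection = [2.0, 2.4, 2.5, 2.6, 2.7]   # hypothetical archived forecast
        print(volatility(gas_price))                   # 0.818 (the 2.2 -> 4.0 jump)
        print(unpredictability(gas_price, projection)) # 0.35  (4.0 projected as 2.6)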

  1. Renormalization of the Higgs sector in the triplet model

    NASA Astrophysics Data System (ADS)

    Aoki, Mayumi; Kanemura, Shinya; Kikuchi, Mariko; Yagyu, Kei

    2012-08-01

    We study radiative corrections to the mass spectrum and the triple Higgs boson coupling in the model with an additional Y = 1 triplet field. In this model, the vacuum expectation value for the triplet field is strongly constrained by the electroweak precision data, under which a characteristic mass spectrum appears at the tree level; i.e., m(H±±)² − m(H±)² ≃ m(H±)² − m(A)² and m(A)² ≃ m(H)², where the CP-even (H), CP-odd (A), doubly-charged (H±±), and singly-charged (H±) Higgs bosons are triplet-like. We evaluate how the tree-level formulae are modified at the one-loop level. The hhh coupling for the standard model-like Higgs boson (h) is also calculated at the one-loop level. One-loop corrections to these quantities can be large enough to allow identification of the model by future precision data at the LHC or the International Linear Collider.

  2. Delineation of The Sumatra Fault in The Central Part of West Sumatra based on Gravity Method

    NASA Astrophysics Data System (ADS)

    Saragih, R. D.; Brotopuspito, K. S.

    2018-04-01

    The Sumatra Fault System extends across Sumatra Island, Indonesia, including the central part of West Sumatra. The Sumatra Fault and the subsurface structure in the central part of West Sumatra were analyzed using the gravity method. Bouguer anomaly data were obtained from GRDC (Geological Research and Development Centre) maps, Bandung, Indonesia (i.e., without terrain correction); in this study, terrain corrections were applied to these Bouguer data. The Bouguer anomaly was projected onto a horizontal plane at a height of 3000 m, with an equivalent point-mass depth of 7000 m, using Dampney's method. Residual and regional anomalies were separated by upward continuation to a height of 8000 m. The second vertical derivative (SVD) of the residual anomaly shows two negative anomalies trending northwest-southeast. The zero mGal/m² contour coincides remarkably well with the fault traces that form part of the Sumatra Fault System. The two negative anomalies are located around the Sianok and Sumani segments.
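
    Both wavenumber-domain operations used here, upward continuation (multiplying the spectrum by e^(−|k|h)) and the second vertical derivative (multiplying by |k|²), are standard potential-field filters. A minimal sketch on a synthetic gridded anomaly; the grid spacing, source parameters, and amplitudes are illustrative, not the study's data:

        import numpy as np

        def wavenumber_grid(n, dx):
            kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            ky = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            return np.hypot(*np.meshgrid(kx, ky, indexing="ij"))   # |k| per cell

        def upward_continue(grid, dx, height):
            """Continue a gridded anomaly upward by `height` (units of dx)."""
            k = wavenumber_grid(grid.shape[0], dx)
            return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * height)))

        def second_vertical_derivative(grid, dx):
            """SVD filter: multiply the spectrum by |k|^2."""
            k = wavenumber_grid(grid.shape[0], dx)
            return np.real(np.fft.ifft2(np.fft.fft2(grid) * k**2))

        # Synthetic residual anomaly: one shallow (sharp) plus one deep (broad) source
        n, dx = 128, 500.0                       # 128 x 128 grid, 500 m spacing
        x = np.arange(n) * dx
        X, Y = np.meshgrid(x, x, indexing="ij")
        g = (5.0 * np.exp(-((X - 2e4)**2 + (Y - 2e4)**2) / (2 * 1500.0**2))
             + 8.0 * np.exp(-((X - 4e4)**2 + (Y - 4e4)**2) / (2 * 6000.0**2)))

        regional = upward_continue(g, dx, 8000.0)          # regional at 8000 m, as in the study
        svd = second_vertical_derivative(g - regional, dx) # sharpens shallow edges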

  3. Charged hadrons in local finite-volume QED+QCD with C⋆ boundary conditions

    NASA Astrophysics Data System (ADS)

    Lucini, B.; Patella, A.; Ramos, A.; Tantalo, N.

    2016-02-01

    In order to calculate QED corrections to hadronic physical quantities by means of lattice simulations, a coherent description of electrically-charged states in finite volume is needed. In the usual periodic setup, Gauss's law and large gauge transformations forbid the propagation of electrically-charged states. A possible solution to this problem, which does not violate the axioms of local quantum field theory, has been proposed by Wiese and Polley, and is based on the use of C⋆ boundary conditions. We present a thorough analysis of the properties and symmetries of QED in isolation and QED coupled to QCD, with C⋆ boundary conditions. In particular we learn that a certain class of electrically-charged states can be constructed in a fully consistent fashion without relying on gauge fixing and without peculiar complications. This class includes single particle states of most stable hadrons. We also calculate finite-volume corrections to the mass of stable charged particles and show that these are much smaller than in non-local formulations of QED.

  4. Precision experiments on mirror transitions at Notre Dame

    NASA Astrophysics Data System (ADS)

    Brodeur, Maxime; TwinSol Collaboration

    2016-09-01

    Thanks to extensive experimental efforts that led to a precise determination of important experimental quantities of superallowed pure Fermi transitions, we now have a very precise value for Vud that leads to a stringent test of CKM matrix unitarity. Despite this achievement, measurements in other systems remain relevant, as conflicting results could uncover unknown systematic effects or even new physics. One such system is the superallowed mixed transition, which can help refine the theoretical corrections used for pure Fermi transitions and improve the accuracy of Vud. However, as a corrected Ft-value determination for these systems requires the more challenging determination of the Fermi-to-Gamow-Teller mixing ratio, only five transitions, spanning from 19Ne to 37Ar, are currently fully characterized. To rectify the situation, an experimental program of precision experiments on mirror transitions, which includes precision half-life measurements and, in the future, the determination of the Fermi-to-Gamow-Teller mixing ratio, has started at the University of Notre Dame. This work is supported in part by the National Science Foundation.

  5. Time-dependent phase error correction using digital waveform synthesis

    DOEpatents

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, the amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion applied to the waveform to facilitate negation of the error during subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
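
    The pre-distortion described amounts to subtracting a tabulated phase-versus-time error from the nominal phase sequence before waveform synthesis, so that the downstream droop restores the nominal phase. A minimal sketch for a synthesized linear-FM chirp, with an assumed quadratic error shape standing in for a measured look-up table:

        import numpy as np

        fs = 1.0e6                          # sample rate (Hz), illustrative
        t = np.arange(4096) / fs

        # Nominal linear-FM chirp phase: phi(t) = 2*pi*(f0*t + 0.5*k*t^2)
        f0, k = 50e3, 2.0e9
        phi_nominal = 2 * np.pi * (f0 * t + 0.5 * k * t**2)

        # Time-dependent phase error (e.g. from amplifier droop), quantified
        # once and stored as a look-up table, one entry per sample.
        phase_error_lut = 0.3 * (t / t[-1])**2       # radians, illustrative shape

        # Pre-distort: subtract the tabulated error so the downstream hardware
        # adds it back, leaving the transmitted waveform with the nominal phase.
        phi_predistorted = phi_nominal - phase_error_lut
        waveform = np.exp(1j * phi_predistorted)

        # After the droop re-applies the error, the residual phase error is ~0:
        assert np.allclose(waveform * np.exp(1j * phase_error_lut),
                           np.exp(1j * phi_nominal))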

  6. Optimal control in adaptive optics modeling of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Herrmann, J.

    The problem of using an adaptive optics system to correct for nonlinear effects like thermal blooming is addressed using a model containing nonlinear lenses through which Gaussian beams are propagated. The best correction of this nonlinear system can be formulated as a deterministic open-loop optimal control problem. This treatment gives a limit for the best possible correction; aspects of adaptive control and servo systems are not included at this stage. An attempt is made to determine the control in the transmitter plane that minimizes the time-averaged beam area or maximizes the fluence in the target plane. The standard minimization procedure leads to a two-point boundary-value problem, which is ill-conditioned in this case. The optimal control problem was solved using an iterative gradient technique. An instantaneous correction is introduced and compared with the optimal correction. The results of the calculations show that for short times or weak nonlinearities the instantaneous correction is close to the optimal correction, but that for long times and strong nonlinearities a large difference develops between the two types of correction. In these cases the steady-state correction becomes better than the instantaneous correction and approaches the optimal correction.
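
    The iterative gradient technique can be caricatured on a toy objective: pick a control vector, estimate the gradient of the target-plane cost numerically, and descend. A generic sketch, with a stand-in quadratic-plus-nonlinear cost in place of the propagation model, which the abstract does not specify:

        import numpy as np

        def target_cost(u):
            """Stand-in for a time-averaged spot-area cost: quadratic misfit
            plus a nonlinear-lens-like coupling term. Purely illustrative."""
            return np.sum((u - 1.0)**2) + 0.1 * np.sum(u**2)**2

        def gradient_descent(cost, u0, step=1e-2, iters=500, eps=1e-6):
            """Open-loop optimal control by finite-difference gradient descent."""
            u = u0.copy()
            for _ in range(iters):
                grad = np.array([
                    (cost(u + eps * e) - cost(u - eps * e)) / (2 * eps)
                    for e in np.eye(len(u))
                ])
                u -= step * grad
            return u

        u_opt = gradient_descent(target_cost, np.zeros(4))
        print(u_opt, target_cost(u_opt))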

  7. Time-of-day Corrections to Aircraft Noise Metrics

    NASA Technical Reports Server (NTRS)

    Clevenson, S. (Editor); Shepherd, W. T. (Editor)

    1980-01-01

    The historical and background aspects of time-of-day corrections, as well as the evidence supporting these corrections, are discussed. Health, welfare, and economic impacts; needs and criteria; and government policy and regulation are also reported.
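
    The most familiar such correction is the 10 dB penalty applied to nighttime levels when energy-averaging a day-night sound level; the sketch below uses the common 22:00-07:00 night window and 10 dB penalty as assumptions (the report itself predates some current definitions):

        import math

        def day_night_level(hourly_leq):
            """Day-night average sound level from 24 hourly Leq values (dB).

            Hours 22:00-07:00 get a 10 dB penalty before energy-averaging,
            the usual time-of-day correction for increased nighttime annoyance.
            """
            total = 0.0
            for hour, leq in enumerate(hourly_leq):
                penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
                total += 10 ** ((leq + penalty) / 10.0)
            return 10.0 * math.log10(total / 24.0)

        # A constant 60 dB all day yields ~66.4 dB because of the night penalty.
        print(round(day_night_level([60.0] * 24), 1))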

  8. Quantification of the biocontrol agent Trichoderma harzianum with real-time TaqMan PCR and its potential extrapolation to the hyphal biomass.

    PubMed

    López-Mondéjar, Rubén; Antón, Anabel; Raidl, Stefan; Ros, Margarita; Pascual, José Antonio

    2010-04-01

    The species of the genus Trichoderma are used successfully as biocontrol agents against a wide range of phytopathogenic fungi. Among them, Trichoderma harzianum is especially effective. However, to develop more effective fungal biocontrol strategies in organic substrates and soil, tools for monitoring the control agents are required. Real-time PCR is potentially an effective tool for the quantification of fungi in environmental samples. The aim of this study was the development and application of a real-time PCR-based method for the quantification of T. harzianum, and the extrapolation of these data to fungal biomass values. A set of primers and a TaqMan probe for the ITS region of the fungal genome were designed and tested, and amplification was correlated with biomass measurements of colony hyphal length obtained by optical microscopy and image analysis. A correlation of 0.76 between ITS copies and biomass was obtained. Extrapolating the quantity of ITS copies, calculated from real-time PCR data, into quantities of fungal biomass potentially provides a more accurate value for the quantity of soil fungi. Copyright 2009 Elsevier Ltd. All rights reserved.
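
    Quantification in such assays proceeds by inverting a log-linear standard curve from quantification cycle (Cq) to copy number, then scaling copies to biomass through a fitted conversion. A minimal sketch in which the slope, intercept, and conversion factor are illustrative stand-ins, not values from the study:

        # Standard curve fitted from serial dilutions: Cq = slope*log10(copies) + b
        SLOPE, INTERCEPT = -3.32, 38.0   # illustrative; -3.32 ~ 100% efficiency

        def its_copies(cq):
            """Invert the standard curve: copies = 10**((cq - b) / slope)."""
            return 10 ** ((cq - INTERCEPT) / SLOPE)

        def biomass_ug(cq, ug_per_copy=2.5e-6):
            """Extrapolate ITS copies to hyphal biomass via a conversion factor
            (hypothetical value standing in for the paper's regression)."""
            return its_copies(cq) * ug_per_copy

        cq_sample = 24.7
        print(f"{its_copies(cq_sample):.3g} ITS copies, "
              f"~{biomass_ug(cq_sample):.3g} ug biomass")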

  9. The effects of mating status and time since mating on female sex pheromone levels in the rice leaf bug, Trigonotylus caelestialium

    NASA Astrophysics Data System (ADS)

    Yamane, Takashi; Yasuda, Tetsuya

    2014-02-01

    Although mating status affects future mating opportunities, the biochemical changes that occur in response to mating are not well understood. This study investigated the effects of mating status on the quantities of sex pheromone components found in whole-body extracts and volatile emissions of females of the rice leaf bug, Trigonotylus caelestialium. When sampled at one of four time points within a 4-day postmating period, females that had copulated with a male had greater whole-body quantities of sex pheromone components than those of virgin females sampled at the same times. The quantities of sex pheromone components emitted by virgin females over a 24-h period were initially high but then steadily decreased, whereas 24-h emissions were persistently low among mated females when measured at three time points within the 4 days after mating. As a result, soon after mating, the mated females emitted less sex pheromones than virgin females, but there were no significant differences between mated and virgin females at the end of the experiment. Thus, postmating reduction in the rate of emission of sex pheromones could explain previously observed changes in female attractiveness to male T. caelestialium.

  10. Transformative Relation of Kinematical Descriptive Quantities Defined by Different Spatial Referential Frame, Its Property and Application

    NASA Astrophysics Data System (ADS)

    Luo, Ji

    2012-08-01

    Quantitative transformations between corresponding kinematic quantities defined by any two spatial referential frames, whose relative kinematic relations (purely rotational and translational movement) are known, are presented, based on descriptive definitions of the fundamental concepts (instant, time, spatial referential frame as distinguished from a mathematical coordinate system, physical point) clarified by direct empirical observation. Inductive investigation of the transformation reveals that all physical quantities such as charge, temperature, time, volume, and length, temporal rates of these quantities, and relations such as the temporal relation between signal source and observer are independent of spatial frame transformation, apart from the kinematical quantity transformations above, kinematics-related dynamics such as Newton's second law existing only in inertial frames, and the exchange of kinetic energy of mass being valid only in a selected inertial frame. From this basis, we demonstrate a series of inferences and applications, such as the phase velocity of light being defined directly with respect to the medium (including vacuum) rather than to the frame, and the use of spatial referential frames to describe any measurable field (electric, magnetic, gravitational) and its variation; tables are given to contrast and evaluate hypotheses related to spacetime, such as distorted spacetime around massive stars, four-dimensional spacetime, gravitational time dilation, and non-Euclidean geometry, against the new framework. The demonstration strongly suggests that all these hypotheses are invalid in terms of testable concepts' meanings and relations. The conventional work on frame transformation and its properties, hypothesized by Voigt, Heaviside, Lorentz, Poincaré and Einstein a century ago with mathematical speculation lacking rigorous definitions of fundamental concepts such as instant, time, spatial reference, straight line, and plane area, merely builds a patchwork of self-preferred explanation by making up derivative concepts or accumulating new hypotheses, and has hindered the description of physical nature on a sound basis of concepts and relations with testable methods; it is time for it to be replaced by an empirically effective alternative.

  11. Hoping for More: The Influence of Outcome Desirability on Information Seeking and Predictions about Relative Quantities

    ERIC Educational Resources Information Center

    Scherer, Aaron M.; Windschitl, Paul D.; O'Rourke, Jillian; Smith, Andrew R.

    2012-01-01

    People must often engage in sequential sampling in order to make predictions about the relative quantities of two options. We investigated how directional motives influence sampling selections and resulting predictions in such cases. We used a paradigm in which participants had limited time to sample items and make predictions about which side of…

  12. The Four-Day Workweek: An Assessment of Its Effects on Leisure Participation.

    ERIC Educational Resources Information Center

    Conner, Karen A.; Bultena, Gordon L.

    A research study examined change in the quantity of leisure participation attributable to conversion from a five to a four-day workweek. Change in quantity was defined as (1) frequency change or change in the amount of time devoted to leisure, (2) activity change or change in the number of different leisure activities pursued, and (3) perceptual…

  13. Organic debris in small streams, Prince of Wales Island, Southeast Alaska.

    Treesearch

    Frederick J. Swanson; Mason D. Bryant; George W. Lienkaemper; James R. Sedell

    1984-01-01

    Quantities of coarse and fine organic debris in streams flowing through areas clearcut before 1975 are 3 and 6 times greater than quantities in streams sampled in old-growth stands in Tongass National Forest, central Prince of Wales Island, southeast Alaska. The concentration of debris in streams of clearcut Sitka spruce-western hemlock forests in southeast Alaska,...

  14. Quantity of Parental Involvement: The Influence of the Level of Educational Attainment of Elementary Private School Parents

    ERIC Educational Resources Information Center

    Secord, Deborah K.

    2009-01-01

    The purpose of this research was to determine the influence of the custodial parents' level of educational attainment on the quantity of parental involvement in the areas of assistance with homework, time spent in home activities with the child, communication with teachers, participation in school events, educational discussions with the child,…

  15. Effects of plant density on recombinant hemagglutinin yields in an Agrobacterium-mediated transient gene expression system using Nicotiana benthamiana plants.

    PubMed

    Fujiuchi, Naomichi; Matsuda, Ryo; Matoba, Nobuyuki; Fujiwara, Kazuhiro

    2017-08-01

    Agrobacterium-mediated transient expression systems enable plants to rapidly produce a wide range of recombinant proteins. To achieve economically feasible upstream production and downstream processing, it is beneficial to obtain high levels of two yield-related quantities of upstream production: recombinant protein content per fresh mass of harvested biomass (g gFM⁻¹) and recombinant protein productivity per unit area-time (g m⁻² month⁻¹). Here, we report that the density of Nicotiana benthamiana plants during upstream production had significant impacts on the yield-related quantities of recombinant hemagglutinin (HA). The two quantities were smaller at a high plant density of 400 plants m⁻² than at a low plant density of 100 plants m⁻². The smaller quantities at the high plant density were attributed to: (i) a lower HA content in young leaves, which usually have high HA accumulation potential; (ii) a lower biomass allocation to the young leaves; and (iii) a high area-time requirement per plant. Thus, plant density is a key factor for improving upstream production in Agrobacterium-mediated transient expression systems. Biotechnol. Bioeng. 2017;114:1762-1770. © 2017 Wiley Periodicals, Inc.
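
    The two yield-related quantities are simple ratios of the same harvest data. A minimal sketch of the arithmetic with hypothetical numbers, not the paper's data:

        def yield_metrics(ha_mg, fresh_mass_g, area_m2, cycle_days):
            """Return (content in g gFM^-1, productivity in g m^-2 month^-1)."""
            content = (ha_mg / 1000.0) / fresh_mass_g
            productivity = (ha_mg / 1000.0) / area_m2 / (cycle_days / 30.0)
            return content, productivity

        # Hypothetical harvest: 30 mg HA and 60 g fresh mass per square metre,
        # with a 40-day cycle from transplant to harvest.
        content, prod = yield_metrics(ha_mg=30.0, fresh_mass_g=60.0,
                                      area_m2=1.0, cycle_days=40.0)
        print(f"{content:.2e} g/gFM, {prod:.4f} g/m^2/month")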

  16. Environmental Factor(tm) system: RCRA hazardous waste handler information (on cd-rom). Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-04-01

    Environmental Factor(tm) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management and minimization by companies that are large quantity generators; and (3) Data on the waste management practices of treatment, storage and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status and more; (2) View compliance information - dates of evaluation, violation, enforcement and corrective action; (3) Look up facilities by waste processing categories of marketing, transporting, processing and energy recovery; (4) Use owner/operator information and names, titles and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and a User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving and exporting. Hotline support is also available at no additional charge.

  17. Environmental Factor(tm) system: RCRA hazardous waste handler information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1999-03-01

    Environmental Factor(tm) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management and minimization by companies that are large quantity generators; and (3) Data on the waste management practices of treatment, storage and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status and more; (2) View compliance information -- dates of evaluation, violation, enforcement and corrective action; (3) Look up facilities by waste processing categories of marketing, transporting, processing and energy recovery; (4) Use owner/operator information and names, titles and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and a User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving and exporting. Hotline support is also available at no additional charge.

  18. Improved methodology to obtain large quantities of correctly folded recombinant N-terminal extracellular domain of the human muscle acetylcholine receptor for inducing experimental autoimmune myasthenia gravis in rats

    PubMed Central

    Sun, Chenjing; Zhang, Hongliang; Xu, Jiang; Gao, Jie

    2013-01-01

    Introduction Human myasthenia gravis (MG) is an autoimmune disorder of the neuromuscular system. Experimental autoimmune myasthenia gravis (EAMG) is a well-established animal model for MG that can be induced by active immunization with the Torpedo californica-derived acetylcholine receptor (AChR). Due to the high cost of purifying AChR from Torpedo californica, the development of an easier and more economical way of inducing EAMG remains critically needed. Material and methods Full-length cDNA of the human skeletal muscle AChR α1 subunit was obtained from TE671 cells. The DNA fragment encoding the extracellular domain (ECD) was then amplified by polymerase chain reaction (PCR) and inserted into pET-16b. The reconstructed plasmid was transformed into the host strain BL21(DE3)pLysS, which was derived from Escherichia coli. Isopropyl-β-D-thiogalactopyranoside (IPTG) was used to induce expression of the N-terminal ECD. The produced protein was purified with immobilized Ni2+ affinity chromatography and refolded by dialysis. Results The recombinant protein was efficiently refolded to soluble active protein, as verified by ELISA. After immunization with the recombinant ECD, all rats developed clinical signs of EAMG. The titer of AChR antibodies in the serum was significantly higher in the EAMG group than in the control group, indicating successful induction of EAMG. Conclusions We describe an improved procedure for refolding the recombinant ECD of human muscle AChR. This improvement allows for the generation of large quantities of correctly folded recombinant ECD of human muscle AChR, which provides an easier and more economical way of inducing the animal model of MG. PMID:24904677

  19. Development of k-300 concrete mix for earthquake-resistant Housing infrastructure in indonesia

    NASA Astrophysics Data System (ADS)

    Zulkarnain, Fahrizal

    2018-03-01

    In determining the strength of a K-300 concrete mix suitable for earthquake-resistant housing infrastructure, the materials to be used must be examined for proper quality and quantity so that the mixture can be applied directly to residents' housing in the quake zone. In the first stage, sieve analyses of the fine aggregate (sand) and the coarse aggregate (gravel) are carried out on a provided sample weighing approximately 40 kilograms, followed by determination of the specific gravity and absorption of the aggregates, examination of the sludge content of aggregates passing sieve no. 200, and finally examination of the unit weight of the aggregates. In the second stage, the concrete mix is planned by means of the K-300 mix design suitable for use in Indonesia, with the following implementation steps: planning of the cement water factor (CWF); planning of the free water content of the concrete (liters/m³); planning of the cement quantity; planning of the minimum cement content; planning of the adjusted cement water factor; planning of the estimated aggregate composition; planning of the estimated unit weight of the concrete; calculation of the composition of the concrete mixture; and calculation of mix corrections for various water contents. These tests also estimate the moisture-content correction and the material requirements of the mixture in kilograms for the K-300 mix, so that the planned slump of 8-12 cm will be achieved. In the final stage, a compressive strength test of the experimental K-300 mixture is carried out, and the composition of the K-300 concrete mixture suitable for one 50 kg sack of cement is obtained for the foundation of proper dwellings. The composition consists of cement, sand, gravel, and water.
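
    The mix-correction step for various water contents follows standard mix-design arithmetic: free water carried by an aggregate beyond its absorption is added to the aggregate batch mass and deducted from the mixing water. A minimal sketch with illustrative per-m³ design quantities, not the study's values:

        def moisture_correction(batch, moisture, absorption):
            """Adjust a per-m^3 concrete batch for aggregate moisture.

            batch:      dict of design masses (kg), saturated-surface-dry basis
            moisture:   actual moisture content of each aggregate (fraction)
            absorption: absorption capacity of each aggregate (fraction)

            Free water on an aggregate = mass * (moisture - absorption); it is
            added to the aggregate batch mass and deducted from mixing water.
            """
            corrected = dict(batch)
            for agg in ("sand", "gravel"):
                free_water = batch[agg] * (moisture[agg] - absorption[agg])
                corrected[agg] = batch[agg] * (1.0 + moisture[agg] - absorption[agg])
                corrected["water"] -= free_water
            return corrected

        design = {"cement": 350.0, "water": 185.0, "sand": 700.0, "gravel": 1100.0}
        print(moisture_correction(design,
                                  moisture={"sand": 0.05, "gravel": 0.01},
                                  absorption={"sand": 0.02, "gravel": 0.015}))
        # Wet sand contributes water (deducted); dry gravel absorbs some (added back).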

  20. Geometry of the scalar sector

    DOE PAGES

    Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.

    2016-08-17

    The S-matrix of a quantum field theory is unchanged by field redefinitions, and so it only depends on geometric quantities such as the curvature of field space. Whether the Higgs multiplet transforms linearly or non-linearly under electroweak symmetry is a subtle question, since one can make a coordinate change to convert a field that transforms linearly into one that transforms non-linearly. Renormalizability of the Standard Model (SM) does not depend on the choice of scalar fields or whether the scalar fields transform linearly or non-linearly under the gauge group, but only on the geometric requirement that the scalar field manifold M is flat. Standard Model Effective Field Theory (SMEFT) and Higgs Effective Field Theory (HEFT) have curved M, since they parametrize deviations from the flat SM case. We show that the HEFT Lagrangian can be written in SMEFT form if and only if M has a SU(2)_L × U(1)_Y invariant fixed point. Experimental observables in HEFT depend on local geometric invariants of M such as sectional curvatures, which are of order 1/Λ², where Λ is the EFT scale. We give explicit expressions for these quantities in terms of the structure constants for a general G → H symmetry breaking pattern. The one-loop radiative correction in HEFT is determined using a covariant expansion which preserves manifest invariance of M under coordinate redefinitions. The formula for the radiative correction is simple when written in terms of the curvature of M and the gauge curvature field strengths. We also extend the CCWZ formalism to non-compact groups, and generalize the HEFT curvature computation to the case of multiple singlet scalar fields.
