Sample records for the query "derive accurate values"

  1. Determination of pKa values of benzoxa-, benzothia- and benzoselena-zolinone derivatives by capillary electrophoresis. Comparison with potentiometric titration and spectrometric data.

    PubMed

    Foulon, C; Duhal, N; Lacroix-Callens, B; Vaccher, C; Bonte, J P; Goossens, J F

    2007-07-01

    Acidity constants of benzoxa-, benzothia- and benzoselena-zolinone derivatives were determined by capillary electrophoresis, potentiometry and spectrophotometry. These three analytical techniques gave pKa results in good agreement. A convenient, accurate and precise method for pKa determination was developed to measure the changes in acidity constants induced by the heteroatom or by 6-benzoyl substitution. pKa values were determined simultaneously for two compounds characterized by different electrophoretic mobilities (μe) and pKa values, in the presence of an analogous neutral marker.
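The CE determination rests on the sigmoidal dependence of a weak acid's effective mobility on pH. A minimal sketch of that idea (all numbers are invented, and a simple grid search stands in for whatever regression the authors actually used):

```python
def eff_mobility(pH, pKa, mu_anion):
    # Effective mobility of a monoprotic weak acid: only the deprotonated
    # form migrates, weighted by its mole fraction at the given pH.
    return mu_anion / (1.0 + 10.0 ** (pKa - pH))

# Synthetic "measurements" for an acid with pKa = 8.4 (hypothetical values).
true_pKa, mu_A = 8.4, -30.0e-9          # mobility in m^2 V^-1 s^-1
data = [(pH, eff_mobility(pH, true_pKa, mu_A))
        for pH in (6.5, 7.5, 8.0, 8.5, 9.5, 10.5)]

# Grid-search fit of pKa, with mu_A taken from the high-pH plateau.
best = min((sum((mu - eff_mobility(pH, pKa, mu_A)) ** 2 for pH, mu in data), pKa)
           for pKa in (x / 100.0 for x in range(600, 1100)))
print(round(best[1], 2))   # 8.4
```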

  2. The use of artificial intelligence technology to predict lymph node spread in men with clinically localized prostate carcinoma.

    PubMed

    Crawford, E D; Batuello, J T; Snow, P; Gamito, E J; McLeod, D G; Partin, A W; Stone, N; Montie, J; Stock, R; Lynch, J; Brandt, J

    2000-05-01

    The current study assesses artificial intelligence methods to identify prostate carcinoma patients at low risk for lymph node spread. If patients can be assigned accurately to a low risk group, unnecessary lymph node dissections can be avoided, thereby reducing morbidity and costs. A rule-derivation technology for simple decision-tree analysis was trained and validated using patient data from a large database (4,133 patients) to derive low risk cutoff values for Gleason sum and prostate specific antigen (PSA) level. An empiric analysis was used to derive a low risk cutoff value for clinical TNM stage. These cutoff values then were applied to 2 additional, smaller databases (227 and 330 patients, respectively) from separate institutions. The decision-tree protocol derived cutoff values of ≤6 for Gleason sum and ≤10.6 ng/mL for PSA. The empiric analysis yielded a clinical TNM stage low risk cutoff value of ≤T2a. When these cutoff values were applied to the larger database, 44% of patients were classified as being at low risk for lymph node metastases (0.8% false-negative rate). When the same cutoff values were applied to the smaller databases, between 11% and 43% of patients were classified as low risk with a false-negative rate of between 0.0% and 0.7%. The results of the current study indicate that a population of prostate carcinoma patients at low risk for lymph node metastases can be identified accurately using a simple decision algorithm that considers preoperative PSA, Gleason sum, and clinical TNM stage. The risk of lymph node metastases in these patients is ≤1%; therefore, pelvic lymph node dissection may be avoided safely. The implications of these findings in surgical and nonsurgical treatment are significant.
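The derived low-risk rule is simple enough to state as code. A sketch using the cutoffs quoted in the abstract; the T-stage ordering list is an assumption made here for illustration:

```python
def low_risk(gleason_sum, psa_ng_ml, clinical_stage):
    """Low-risk rule from the abstract: Gleason sum <= 6, PSA <= 10.6 ng/mL,
    and clinical stage <= T2a. The stage ordering below is an assumption,
    not something the abstract specifies."""
    stage_order = ["T1a", "T1b", "T1c", "T2a", "T2b", "T2c", "T3a", "T3b"]
    return (gleason_sum <= 6
            and psa_ng_ml <= 10.6
            and stage_order.index(clinical_stage) <= stage_order.index("T2a"))

print(low_risk(6, 9.8, "T1c"))   # True: all three criteria met
print(low_risk(7, 9.8, "T1c"))   # False: Gleason sum above the cutoff
```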

  3. Optimization of magnetization transfer measurements: statistical analysis by stochastic simulation. Application to creatine kinase kinetics.

    PubMed

    Rydzy, M; Deslauriers, R; Smith, I C; Saunders, J K

    1990-08-01

    A systematic study was performed to optimize the accuracy of kinetic parameters derived from magnetization transfer measurements. Three techniques were investigated: time-dependent saturation transfer (TDST), saturation recovery (SRS), and inversion recovery (IRS). In the last two methods, one of the resonances undergoing exchange is saturated throughout the experiment. The three techniques were compared with respect to the accuracy of the kinetic parameters derived from experiments performed in a given, fixed amount of time. Stochastic simulation of magnetization transfer experiments was performed to optimize experimental design. General formulas for the relative accuracies of the unidirectional rate constant (k) were derived for each of the three experimental methods. It was calculated that for k values between 0.1 and 1.0 s-1, T1 values between 1 and 10 s, and relaxation delays appropriate for the creatine kinase reaction, the SRS method yields more accurate values of k than does the IRS method. The TDST method is more accurate than the SRS method for reactions where T1 is long and k is large, within the range of k and T1 values examined. Experimental verification of the method was carried out on a solution in which the forward (PCr → ATP) rate constant (kf) of the creatine kinase reaction was measured.

  4. Proton dissociation properties of arylphosphonates: Determination of accurate Hammett equation parameters.

    PubMed

    Dargó, Gergő; Bölcskei, Adrienn; Grün, Alajos; Béni, Szabolcs; Szántó, Zoltán; Lopata, Antal; Keglevich, György; Balogh, György T

    2017-09-05

    Determination of the proton dissociation constants of several arylphosphonic acid derivatives was carried out to investigate the accuracy of the Hammett equations available for this family of compounds. For the measurement of the pKa values, modern, accurate methods such as differential potentiometric titration and NMR-pH titration were used. We found our results significantly different from the pKa values reported before (pKa1: MAE = 0.16; pKa2: MAE = 0.59). Based on our recently measured pKa values, refined Hammett equations were determined that might be used for predicting highly accurate ionization constants of newly synthesized compounds (pKa1 = 1.70 - 0.894σ, pKa2 = 6.92 - 0.934σ).
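The refined Hammett equations can be applied directly. A small sketch using the two equations quoted in the abstract (the example σ value is illustrative, not from the paper):

```python
def pKa1(sigma):
    # First dissociation: refined Hammett equation from the abstract.
    return 1.70 - 0.894 * sigma

def pKa2(sigma):
    # Second dissociation.
    return 6.92 - 0.934 * sigma

# Unsubstituted case (sigma = 0) recovers the intercepts.
print(pKa1(0.0), pKa2(0.0))      # 1.7 6.92
# An electron-withdrawing substituent (sigma > 0) lowers both pKa values;
# sigma = 0.23 is just an illustrative para-substituent value.
print(round(pKa1(0.23), 2))      # 1.49
```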

  5. Development of an Automatic Differentiation Version of the FPX Rotor Code

    NASA Technical Reports Server (NTRS)

    Hu, Hong

    1996-01-01

    The ADIFOR2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. The automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives via automatic differentiation are presented and compared with divided-difference generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
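The contrast between automatic differentiation and divided differences can be illustrated with a toy forward-mode ("dual number") differentiator. This is a generic sketch of the principle, not ADIFOR2.0's source-transformation machinery:

```python
class Dual:
    """Minimal forward-mode AD value: carries (value, derivative)."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        return Dual(self.v + o.v, self.d + o.d)
    def __mul__(self, o):
        # Product rule propagates derivatives exactly.
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)

def f(x):
    return x * x * x + x * x    # same code path for floats and Dual numbers

ad = f(Dual(2.0, 1.0)).d                    # exact: f'(x) = 3x^2 + 2x -> 16 at x = 2
h = 1e-5
dd = (f(2.0 + h) - f(2.0 - h)) / (2 * h)    # divided (central) difference
print(ad, round(dd, 6))                     # 16.0 16.0
```

The AD result is exact to machine precision, while the divided difference carries a truncation error that depends on the step size h.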

  6. Edge detection of magnetic anomalies using analytic signal of tilt angle (ASTA)

    NASA Astrophysics Data System (ADS)

    Alamdar, K.; Ansari, A. H.; Ghorbani, A.

    2009-04-01

    The magnetic method is a commonly used geophysical technique to identify and image potential subsurface targets. Interpretation of magnetic anomalies is a complex process due to the superposition of multiple magnetic sources, the presence of geologic and cultural noise, and acquisition and positioning errors. Both the vertical and horizontal derivatives of potential field data are useful: horizontal derivatives enhance edges, whereas vertical derivatives narrow the width of an anomaly and so locate source bodies more accurately. Vertical and horizontal derivatives of the magnetic field can be combined into the analytic signal, which is independent of the body's magnetization direction and whose maximum lies directly over the body's edges. The tilt angle filter is a phase-based filter, defined as the angle between the vertical derivative and the total horizontal derivative. The tilt angle ranges from +90° to -90°, and its zero value lies over the body edge. One disadvantage of this filter is that for deep sources the detected edge is blurred. To overcome this problem, several authors introduced new filters, such as the total horizontal derivative or the vertical derivative of the tilt angle, but because these filters use higher-order derivatives their results may be too noisy. Combining the analytic signal and the tilt angle produces a new filter, termed ASTA, whose maximum lies directly over the body edge and which delineates edges more easily than the tilt angle alone. In this work the new filter is demonstrated on magnetic data from the Sar-Cheshme region of Iran. This area, located at 55° longitude and 32° latitude, is a copper potential region. The main formations in the area are andesite and trachyandesite, and magnetic surveying was employed to separate their boundaries from the adjacent area. A variety of filters, including the analytic signal, the tilt angle and the ASTA filter, were applied; the new ASTA filter delineated the andesite boundaries more accurately than the other filters. Keywords: Horizontal derivative, Vertical derivative, Tilt angle, Analytic signal, ASTA, Sar-Cheshme.
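The tilt-angle construction can be sketched numerically. The 1-D profile and the vertical-derivative stand-in below are illustrative assumptions only; in practice the vertical derivative of a potential field is obtained by Fourier or Hilbert transform methods:

```python
import numpy as np

# 1-D toy profile standing in for a magnetic anomaly over a body edge.
x = np.linspace(-5.0, 5.0, 201)
T = np.exp(-x**2)

dT_dx = np.gradient(T, x)        # horizontal derivative (enhances edges)
# Crude stand-in for the vertical derivative, used here only so the
# example is self-contained; it is NOT the proper spectral computation.
dT_dz = -np.gradient(dT_dx, x)

tilt = np.arctan2(dT_dz, np.abs(dT_dx))   # tilt angle, bounded in [-90, +90] deg
asig = np.hypot(dT_dx, dT_dz)             # analytic-signal amplitude

print(bool(tilt.min() >= -np.pi / 2) and bool(tilt.max() <= np.pi / 2))  # True
```

The bounded range of the tilt angle is its practical advantage: unlike raw derivatives, its amplitude does not depend on the source strength.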

  7. Accuracy of genomic breeding values in multibreed beef cattle populations derived from deregressed breeding values and phenotypes.

    PubMed

    Weber, K L; Thallman, R M; Keele, J W; Snelling, W M; Bennett, G L; Smith, T P L; McDaneld, T G; Allan, M F; Van Eenennaam, A L; Kuehn, L A

    2012-12-01

    Genomic selection involves the assessment of genetic merit through prediction equations that allocate genetic variation with dense marker genotypes. It has the potential to provide accurate breeding values for selection candidates at an early age and facilitate selection for expensive or difficult-to-measure traits. Accurate across-breed prediction would allow genomic selection to be applied on a larger scale in the beef industry, but the limited availability of large populations for the development of prediction equations has kept researchers from providing genomic predictions that are accurate across multiple beef breeds. In this study, genomic predictions for 6 growth and carcass traits were derived and their accuracy evaluated using 2 multibreed beef cattle populations: 3,358 crossbred cattle of the U.S. Meat Animal Research Center Germplasm Evaluation Program (USMARC_GPE) and 1,834 high accuracy bull sires of the 2,000 Bull Project (2000_BULL) representing influential breeds in the U.S. beef cattle industry. The 2000_BULL EPD were deregressed, scaled, and weighted to adjust for between- and within-breed heterogeneous variance before use in training and validation. Molecular breeding values (MBV) trained in each multibreed population and in Angus and Hereford purebred sires of 2000_BULL were derived using the GenSel BayesCπ function (Fernando and Garrick, 2009) and cross-validated. Less than 10% of large-effect loci were shared between prediction equations trained on USMARC_GPE relative to 2000_BULL, although locus effects were moderately to highly correlated for most traits and the traits themselves were highly correlated between populations. MBV prediction accuracy was low and variable between populations. For growth traits, MBV accounted for up to 18% of genetic variation in a pooled, multibreed analysis and up to 28% in single breeds. 
For carcass traits, MBV explained up to 8% of genetic variation in a pooled, multibreed analysis and up to 42% in single breeds. Prediction equations trained in multibreed populations were more accurate for Angus and Hereford subpopulations because those were the breeds most highly represented in the training populations. Accuracies were less for prediction equations trained in a single breed due to the smaller number of records derived from a single breed in the training populations.

  8. Hard-spin mean-field theory: A systematic derivation and exact correlations in one dimension

    PubMed

    Kabakcioglu

    2000-04-01

    Hard-spin mean-field theory is an improved mean-field approach which has proven to give accurate results, especially for frustrated spin systems, with relatively little computational effort. In this work, the previous phenomenological derivation is supplanted by a systematic and generic derivation that opens the possibility for systematic improvements, especially for the calculation of long-range correlation functions. A first level of improvement suffices to recover the exact long-range values of the correlation functions in one dimension.

  9. Structure determination from XAFS using high-accuracy measurements of x-ray mass attenuation coefficients of silver, 11 keV-28 keV, and development of an all-energies approach to local dynamical analysis of bond length, revealing variation of effective thermal contributions across the XAFS spectrum.

    PubMed

    Tantau, L J; Chantler, C T; Bourke, J D; Islam, M T; Payne, A T; Rae, N A; Tran, C Q

    2015-07-08

    We use the x-ray extended range technique (XERT) to experimentally determine the mass attenuation coefficient of silver in the x-ray energy range 11 keV-28 keV, including the silver K absorption edge. The results are accurate to better than 0.1%, permitting critical tests of atomic and solid state theory. This is one of the most accurate demonstrations of cross-platform accuracy in synchrotron studies thus far. We derive the mass absorption coefficients and the imaginary component of the form factor over this range. We apply conventional XAFS analytic techniques, extended to include error propagation and uncertainty, yielding bond lengths accurate to approximately 0.24% and thermal Debye-Waller parameters accurate to 30%. We then introduce the FDMX technique for accurate analysis of such data across the full XAFS spectrum, built on full-potential theory, yielding a bond length accuracy of order 0.1% and the demonstration that a single Debye parameter is inadequate and inconsistent across the XAFS range. Two effective Debye-Waller parameters are determined: a high-energy value based on the highly-correlated motion of bonded atoms (σ(DW) = 0.1413(21) Å), and an uncorrelated bulk value (σ(DW) = 0.1766(9) Å) in good agreement with that derived from (room-temperature) crystallography.

  10. A Fast and Effective Pyridine-Free Method for the Determination of Hydroxyl Value of Hydroxyl-Terminated Polybutadiene and Other Hydroxy Compounds

    NASA Astrophysics Data System (ADS)

    Alex, Ancy Smitha; Kumar, Vijendra; Sekkar, V.; Bandyopadhyay, G. G.

    2017-07-01

    Hydroxyl-terminated polybutadiene (HTPB) is the workhorse propellant binder for launch vehicle and missile applications. Accurate determination of the hydroxyl value (OHV) of HTPB is crucial for tailoring the ultimate mechanical and ballistic properties of the derived propellant. This article describes a fast and effective pyridine-free methodology based on acetic anhydride, N-methyl imidazole, and toluene for the determination of the OHV of nonpolar polymers like HTPB and other hydroxyl compounds. This method gives accurate and reproducible results comparable to standard methods and is superior to existing methods in terms of user friendliness, efficiency, and time requirement.
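The underlying back-titration arithmetic is standard. A sketch of the usual hydroxyl-value formula (the input volumes and weights are invented, not the paper's data):

```python
def hydroxyl_value(blank_ml, sample_ml, koh_normality, sample_g):
    """Hydroxyl value (mg KOH per g sample) from an acetylation back-titration.
    56.1 is the molar mass of KOH; this is the standard formula, with all
    inputs here purely illustrative."""
    return 56.1 * (blank_ml - sample_ml) * koh_normality / sample_g

# Invented numbers: 2.0 g HTPB, 0.5 N KOH, blank 25.0 mL, sample 21.8 mL.
print(round(hydroxyl_value(25.0, 21.8, 0.5, 2.0), 2))   # 44.88
```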

  11. Elemental abundance analyses with coadded DAO spectrograms. IV - Revision of previous analyses. V - The mercury-manganese stars Phi Herculis, 28 Herculis and HR 7664

    NASA Technical Reports Server (NTRS)

    Adelman, Saul J.

    1988-01-01

    Changes in chromium, manganese, and nickel abundances derived from singly ionized lines are incorporated into the elemental abundance analyses of Adelman and Hill (1987), using more accurate gf values and damping constants for several atomic species. An improved agreement with the values from neutral lines of the same element is found. In the second part, the method is applied to an elemental abundance analysis of three mercury-manganese stars, and correlations are found between the derived abundances and the effective temperature.

  12. The Effect of Starspots on Accurate Radius Determination of the Low-Mass Double-Lined Eclipsing Binary Gu Boo

    NASA Astrophysics Data System (ADS)

    Windmiller, G.; Orosz, J. A.; Etzel, P. B.

    2010-04-01

    GU Boo is one of only a relatively small number of well-studied double-lined eclipsing binaries that contain low-mass stars. López-Morales & Ribas present a comprehensive analysis of multi-color light and radial velocity curves for this system. The GU Boo light curves presented by López-Morales & Ribas had substantial asymmetries, which were attributed to large spots. In spite of the asymmetry, López-Morales & Ribas derived masses and radii accurate to ≈2%. We obtained additional photometry of GU Boo using both a CCD and a single-channel photometer and modeled the light curves with the ELC software to determine if the large spots in the light curves give rise to systematic errors at the few percent level. We also modeled the original light curves from the work of López-Morales & Ribas using models with and without spots. We derived a radius of the primary of 0.6329 ± 0.0026 R⊙, 0.6413 ± 0.0049 R⊙, and 0.6373 ± 0.0029 R⊙ from the CCD, photoelectric, and López-Morales & Ribas data, respectively. Each of these measurements agrees with the value reported by López-Morales & Ribas (R1 = 0.623 ± 0.016 R⊙) at the level of ≈2%. In addition, the spread in these values is ≈1%-2% from the mean. For the secondary, we derive radii of 0.6074 ± 0.0035 R⊙, 0.5944 ± 0.0069 R⊙, and 0.5976 ± 0.0059 R⊙ from the three respective data sets. The López-Morales & Ribas value is R2 = 0.620 ± 0.020 R⊙, which is ≈2%-3% larger than each of the three values we found. The spread in these values is ≈2% from the mean. The systematic difference between our three determinations of the secondary radius and that of López-Morales & Ribas might be attributed to differences in the modeling process and codes used. Our own fits suggest that, for GU Boo at least, using accurate spot modeling of a single set of multi-color light curves results in radii determinations accurate at the ≈2% level.
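The quoted ≈1%-2% spread of the three primary-radius determinations can be checked directly from the numbers in the abstract:

```python
# Primary-star radii (in solar radii) from the CCD, photoelectric, and
# literature data sets, as quoted in the abstract.
radii = [0.6329, 0.6413, 0.6373]
mean = sum(radii) / len(radii)
spread_pct = 100.0 * (max(radii) - min(radii)) / mean
print(round(spread_pct, 2))   # 1.32 -- consistent with the quoted ~1%-2%
```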

  13. THE EFFECT OF STARSPOTS ON ACCURATE RADIUS DETERMINATION OF THE LOW-MASS DOUBLE-LINED ECLIPSING BINARY GU Boo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Windmiller, G.; Orosz, J. A.; Etzel, P. B., E-mail: windmill@rohan.sdsu.ed, E-mail: orosz@sciences.sdsu.ed, E-mail: etzel@sciences.sdsu.ed

    2010-04-01

    GU Boo is one of only a relatively small number of well-studied double-lined eclipsing binaries that contain low-mass stars. Lopez-Morales and Ribas present a comprehensive analysis of multi-color light and radial velocity curves for this system. The GU Boo light curves presented by Lopez-Morales and Ribas had substantial asymmetries, which were attributed to large spots. In spite of the asymmetry, Lopez-Morales and Ribas derived masses and radii accurate to ≈2%. We obtained additional photometry of GU Boo using both a CCD and a single-channel photometer and modeled the light curves with the ELC software to determine if the large spots in the light curves give rise to systematic errors at the few percent level. We also modeled the original light curves from the work of Lopez-Morales and Ribas using models with and without spots. We derived a radius of the primary of 0.6329 ± 0.0026 R⊙, 0.6413 ± 0.0049 R⊙, and 0.6373 ± 0.0029 R⊙ from the CCD, photoelectric, and Lopez-Morales and Ribas data, respectively. Each of these measurements agrees with the value reported by Lopez-Morales and Ribas (R1 = 0.623 ± 0.016 R⊙) at the level of ≈2%. In addition, the spread in these values is ≈1%-2% from the mean. For the secondary, we derive radii of 0.6074 ± 0.0035 R⊙, 0.5944 ± 0.0069 R⊙, and 0.5976 ± 0.0059 R⊙ from the three respective data sets. The Lopez-Morales and Ribas value is R2 = 0.620 ± 0.020 R⊙, which is ≈2%-3% larger than each of the three values we found. The spread in these values is ≈2% from the mean. The systematic difference between our three determinations of the secondary radius and that of Lopez-Morales and Ribas might be attributed to differences in the modeling process and codes used. Our own fits suggest that, for GU Boo at least, using accurate spot modeling of a single set of multi-color light curves results in radii determinations accurate at the ≈2% level.

  14. Accurate physical laws can permit new standard units: The two laws F→=ma→ and the proportionality of weight to mass

    NASA Astrophysics Data System (ADS)

    Saslow, Wayne M.

    2014-04-01

    Three common approaches to F→=ma→ are: (1) as an exactly true definition of force F→ in terms of measured inertial mass m and measured acceleration a→; (2) as an exactly true axiom relating measured values of a→, F→ and m; and (3) as an imperfect but accurately true physical law relating measured a→ to measured F→, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a→ and F→, where a→ is normally specified using distance and time as standard units, and F→ from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate)—that balance-scale weight W is proportional to m—and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force—the newton—a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time—the second—a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.
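The unit-derivation argument is essentially dimensional bookkeeping. A minimal sketch with exponent tuples for (mass, length, time), showing both directions discussed in the abstract:

```python
# Dimensional bookkeeping: a unit is a tuple of exponents (mass, length, time).
def mul(u, v):
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    return tuple(a - b for a, b in zip(u, v))

KG, M, S = (1, 0, 0), (0, 1, 0), (0, 0, 1)
ACCEL = div(M, mul(S, S))      # m / s^2 -> (0, 1, -2)
NEWTON = mul(KG, ACCEL)        # F = m*a -> (1, 1, -2): newton as the derived unit

# Going the other way: with newton, kilogram, and meter as standards,
# time is the derived unit, since s^2 = kg * m / N.
S2 = div(mul(KG, M), NEWTON)   # -> (0, 0, 2)
print(NEWTON, S2)              # (1, 1, -2) (0, 0, 2)
```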

  15. Atmospheric densities derived from CHAMP/STAR accelerometer observations

    NASA Astrophysics Data System (ADS)

    Bruinsma, S.; Tamagnan, D.; Biancale, R.

    2004-03-01

    The satellite CHAMP carries the accelerometer STAR in its payload and thanks to the GPS and SLR tracking systems accurate orbit positions can be computed. Total atmospheric density values can be retrieved from the STAR measurements, with an absolute uncertainty of 10-15%, under the condition that an accurate radiative force model, satellite macro-model, and STAR instrumental calibration parameters are applied, and that the upper-atmosphere winds are less than 150 m/s. The STAR calibration parameters (i.e. a bias and a scale factor) of the tangential acceleration were accurately determined using an iterative method, which required the estimation of the gravity field coefficients in several iterations, the first result of which was the EIGEN-1S (Geophys. Res. Lett. 29 (14) (2002) 10.1029) gravity field solution. The procedure to derive atmospheric density values is as follows: (1) a reduced-dynamic CHAMP orbit is computed, the positions of which are used as pseudo-observations, for reference purposes; (2) a dynamic CHAMP orbit is fitted to the pseudo-observations using calibrated STAR measurements, which are saved in a data file containing all necessary information to derive density values; (3) the data file is used to compute density values at each orbit integration step, for which accurate terrestrial coordinates are available. This procedure was applied to 415 days of data over a total period of 21 months, yielding 1.2 million useful observations. The model predictions of DTM-2000 (EGS XXV General Assembly, Nice, France), DTM-94 (J. Geod. 72 (1998) 161) and MSIS-86 (J. Geophys. Res. 92 (1987) 4649) were evaluated by analysing the density ratios (i.e. "observed" to "computed" ratio) globally, and as functions of solar activity, geographical position and season. The global mean of the density ratios showed that the models underestimate density by 10-20%, with an rms of 16-20%. 
The binning as a function of local time revealed that the diurnal and semi-diurnal components are too strong in the DTM models, while all three models model the latitudinal gradient inaccurately. Using DTM-2000 as a priori, certain model coefficients were re-estimated using the STAR-derived densities, yielding the DTM-STAR test model. The mean and rms of the global density ratios of this preliminary model are 1.00 and 15%, respectively, while the tidal and latitudinal modelling errors become small. This test model is only representative of high solar activity conditions, while the seasonal effect is probably not estimated accurately due to correlation with the solar activity effect. At least one more year of data is required to separate the seasonal effect from the solar activity effect, and data taken under low solar activity conditions must also be assimilated to construct a model representative under all circumstances.
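The model evaluation statistic is the mean and rms scatter of the observed-to-computed density ratios. A sketch with invented numbers (a mean above 1 indicates the model underestimates density):

```python
import math

def ratio_stats(observed, computed):
    """Mean and rms scatter of density ratios rho_obs / rho_model."""
    r = [o / c for o, c in zip(observed, computed)]
    mean = sum(r) / len(r)
    rms = math.sqrt(sum((x - mean) ** 2 for x in r) / len(r))
    return mean, rms

# Invented example densities, in units of 1e-12 kg m^-3.
obs = [1.2, 1.1, 1.3, 1.15]
mod = [1.0, 1.0, 1.1, 1.0]
mean, rms = ratio_stats(obs, mod)
print(round(mean, 3), round(rms, 3))   # 1.158 0.038
```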

  16. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
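The two snapshot strategies can be sketched with an SVD-based POD on a toy low-rank trajectory. The example is illustrative only and does not reproduce the paper's error bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)

# Toy trajectory in R^5 with known 2-D structure: y(t) = modes @ [sin 3t, cos 5t].
modes = rng.standard_normal((5, 2))
Y = modes @ np.vstack([np.sin(3 * t), np.cos(5 * t)])            # solution snapshots
dY = modes @ np.vstack([3 * np.cos(3 * t), -5 * np.sin(5 * t)])  # derivative snapshots

def pod_basis(snapshots, k):
    # POD basis = leading k left singular vectors of the snapshot matrix.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]

U1 = pod_basis(Y, 2)                    # method M1: solution snapshots only
U2 = pod_basis(np.hstack([Y, dY]), 2)   # method M2: augmented with derivatives

# Both bases capture the 2-D trajectory here, so projection errors are tiny.
err1 = np.linalg.norm(Y - U1 @ (U1.T @ Y))
err2 = np.linalg.norm(Y - U2 @ (U2.T @ Y))
print(bool(err1 < 1e-10 and err2 < 1e-10))   # True
```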

  17. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  18. Anatomical evaluation and stress distribution of intact canine femur.

    PubMed

    Verim, Ozgur; Tasgetiren, Suleyman; Er, Mehmet S; Ozdemir, Vural; Yuran, Ahmet F

    2013-03-01

    In the biomedical field, three-dimensional (3D) modeling and analysis of bones and tissues has steadily gained in importance. The aim of this study was to produce more accurate 3D models of the canine femur derived from computed tomography (CT) data by using several modeling software programs and two different methods. The accuracy of the analysis depends on the modeling process and the right boundary conditions. Solidworks, Rapidform, Inventor, and 3DsMax software programs were used to create 3D models. Data derived from CT were converted into 3D models using two different methods: in the first, 3D models were generated using boundary lines, while in the second, 3D models were generated using point clouds. Stress analyses in the models were made by ANSYS v12, also considering any muscle forces acting on the canine femur. When stress values and statistical values were taken into consideration, more accurate models were obtained with the point cloud method. It was found that the maximum von Mises stress on the canine femur shaft was 34.8 MPa. Stress and accuracy values were obtained from the model formed using the Rapidform software. The values obtained were similar to those in other studies in the literature.
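The reported stress is a von Mises equivalent stress, which is computed from the six Cauchy stress components. The uniaxial check below uses the 34.8 MPa figure from the abstract only as a convenient number:

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components (MPa)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Uniaxial sanity check: a pure axial stress returns itself.
print(round(von_mises(34.8, 0, 0, 0, 0, 0), 2))   # 34.8
```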

  19. Stochastic inversion of ocean color data using the cross-entropy method.

    PubMed

    Salama, Mhd Suhyb; Shen, Fang

    2010-01-18

    Improving the inversion of ocean color data is an ever continuing effort to increase the accuracy of derived inherent optical properties. In this paper we present a stochastic inversion algorithm to derive inherent optical properties from ocean color, ship and space borne data. The inversion algorithm is based on the cross-entropy method, where sets of inherent optical properties are generated and converged to the optimal set using an iterative process. The algorithm is validated against four data sets: simulated, noisy simulated, in-situ measured, and satellite match-up data sets. Statistical analysis of validation results is based on model-II regression using five goodness-of-fit indicators; only R2 and root mean square of error (RMSE) are mentioned hereafter. Accurate values of the total absorption coefficient are derived with R2 > 0.91 and RMSE, of log transformed data, less than 0.55. Reliable values of the total backscattering coefficient are also obtained with R2 > 0.7 (after removing outliers) and RMSE < 0.37. The developed algorithm has the ability to derive reliable results from noisy data, with R2 above 0.96 for the total absorption coefficient and above 0.84 for the backscattering coefficient. The algorithm is self-contained and easy to implement and modify to derive the variability of chlorophyll-a absorption that may correspond to different phytoplankton species. It gives consistently accurate results and is therefore worth considering for ocean color global products.
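The core of the cross-entropy method is a loop of sampling, elite selection, and distribution refitting. A 1-D Gaussian sketch on a toy misfit function (not the paper's multi-parameter optical model):

```python
import random
import statistics

def cross_entropy_min(f, mu=0.0, sigma=5.0, n=200, elite=20, iters=30):
    """Cross-entropy method sketch: sample candidates from a Gaussian,
    keep the elite (lowest-misfit) fraction, refit the Gaussian to the
    elite, and repeat until the distribution collapses on the optimum."""
    for _ in range(iters):
        xs = sorted((random.gauss(mu, sigma) for _ in range(n)), key=f)
        top = xs[:elite]
        mu, sigma = statistics.mean(top), statistics.stdev(top) + 1e-12
    return mu

# Toy "inversion": recover the parameter value minimizing a model-data misfit.
random.seed(1)
target = 0.37
est = cross_entropy_min(lambda a: (a - target) ** 2)
print(abs(est - target) < 1e-2)   # True
```

The same sample-select-refit loop generalizes to vectors of inherent optical properties, with a multivariate sampling distribution.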

  20. Lift and moment coefficients expanded to the seventh power of frequency for oscillating rectangular wings in supersonic flow and applied to a specific flutter problem

    NASA Technical Reports Server (NTRS)

    Nelson, Herbert C; Rainey, Ruby A; Watkins, Charles E

    1954-01-01

    Linearized theory for compressible unsteady flow is used to derive the velocity potential and the lift and moment coefficients for a rectangular wing oscillating while moving at a constant supersonic speed. Closed expressions for the velocity potential and the lift and moment coefficients associated with pitching and translation are given to the seventh power of the frequency. These expressions extend the range of usefulness of NACA Report 1028, in which similar expressions were derived to the third power of the frequency of oscillation. For example, at a Mach number of 10/9 the expansion of the potential to the third power is an accurate representation of the potential only for values of the reduced frequency up to about 0.08, whereas the expansion to the seventh power is accurate for values of the reduced frequency up to about 0.2. The section and total lift and moment coefficients are discussed with the aid of several figures. In addition, flutter speeds obtained in the Mach number range from 10/9 to 10/6 for a rectangular wing of aspect ratio 4.53, using section coefficients derived on the basis of three-dimensional flow, are compared with flutter speeds for this wing obtained using coefficients derived on the basis of two-dimensional flow.

  1. Laboratory oscillator strengths of Sc i in the near-infrared region for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Pehlivan, A.; Nilsson, H.; Hartman, H.

    2015-10-01

    Context. Atomic data are crucial for astrophysical investigations. To understand the formation and evolution of stars, we need to analyse their observed spectra. Analysing the spectrum of a star requires information about the properties of atomic lines, such as wavelengths and oscillator strengths. However, atomic data for some elements are scarce, particularly in the infrared region, and this paper is part of an effort to improve the availability of near-IR atomic data. Aims: This paper investigates the spectrum of neutral scandium, Sc I, from laboratory measurements and improves the atomic data of Sc I lines in the infrared region, covering lines in the R, I, J, and K bands. In particular, we focus on measuring oscillator strengths for Sc I lines connecting levels of the 4p and 4s configurations. Methods: We combined experimental branching fractions with radiative lifetimes from the literature to derive oscillator strengths (f-values). Intensity-calibrated spectra with high spectral resolution were recorded with a Fourier transform spectrometer from a hollow cathode discharge lamp. The spectra were used to derive accurate oscillator strengths and wavelengths for Sc I lines, with emphasis on the infrared region. Results: This project provides the first set of experimental Sc I line data in the near-infrared region for accurate spectral analysis of astronomical objects. We derived 63 log(gf) values for lines between 5300 Å and 24 300 Å. The uncertainties in the f-values vary from 5% to 20%. The small uncertainties in our values allow for increased accuracy in astrophysical abundance determinations.

  2. Correcting the spectroscopic surface gravity using transits and asteroseismology. No significant effect on temperatures or metallicities with ARES and MOOG in local thermodynamic equilibrium

    NASA Astrophysics Data System (ADS)

    Mortier, A.; Sousa, S. G.; Adibekyan, V. Zh.; Brandão, I. M.; Santos, N. C.

    2014-12-01

    Context. Precise stellar parameters (effective temperature, surface gravity, metallicity, stellar mass, and radius) are crucial for several reasons, amongst which are the precise characterization of orbiting exoplanets and the correct determination of galactic chemical evolution. The atmospheric parameters are extremely important because all the other stellar parameters depend on them. Using our standard equivalent-width method on high-resolution spectroscopy, good precision can be obtained for the derived effective temperature and metallicity. The surface gravity, however, is usually not well constrained with spectroscopy. Aims: We use two different samples of FGK dwarfs to study the effect of the stellar surface gravity on the precise spectroscopic determination of the other atmospheric parameters. Furthermore, we present a straightforward formula for correcting the spectroscopic surface gravities derived by our method and with our linelists. Methods: Our spectroscopic analysis is based on Kurucz models in local thermodynamic equilibrium, performed with the MOOG code to derive the atmospheric parameters. The surface gravity was either left free or fixed to a predetermined value. The latter was either obtained from a photometric transit light curve or derived using asteroseismology. Results: We find, first, that despite some minor trends, the effective temperatures and metallicities for FGK dwarfs derived with the described method and linelists are, in most cases, affected only within the error bars by using different values for the surface gravity, even for very large differences in surface gravity, so they can be trusted. The temperatures derived with a fixed surface gravity remain compatible within 1 sigma with the accurate results of the infrared flux method (IRFM), as is the case for the unconstrained temperatures. 
Secondly, we find that the spectroscopic surface gravity can easily be corrected to a more accurate value using a linear function with the effective temperature. Tables 1 and 2 are available in electronic form at http://www.aanda.org
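    The correction described is linear in effective temperature; schematically, with a and b as placeholder coefficients to be fitted from the transit and asteroseismic samples (not the published values):

```latex
\log g_{\mathrm{corrected}} = \log g_{\mathrm{spec}} + a\,\left(T_{\mathrm{eff}} - b\right)
```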

  3. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

    Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein-level statistics remain challenging. We have constructed a protein ID method that combines the peptide evidence of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides an accurate protein-level E-value, eliminating the need for empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with those of the other methods tested. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  4. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. 
A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
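    The second-derivative criterion that performed best above can be sketched in a few lines. The Gaussian-blurred edge profile below is synthetic, and the finite-difference scheme is one simple choice, not the authors' exact implementation:

```python
# A minimal sketch of the second-derivative threshold criterion: for a 1-D
# intensity profile across a fluorescent object's edge, place the threshold
# at the intensity where the profile's second derivative is most negative
# (the shoulder at the top of the edge ramp).
import math

def second_derivative_threshold(profile):
    # Central differences for the first and second derivatives
    d1 = [profile[i + 1] - profile[i - 1] for i in range(1, len(profile) - 1)]
    d2 = [d1[i + 1] - d1[i - 1] for i in range(1, len(d1) - 1)]
    # Index of the minimum of the second derivative, offset back to profile
    i_min = min(range(len(d2)), key=d2.__getitem__) + 2
    return profile[i_min]

# Synthetic edge: dark background (10) rising to a bright cell interior (200)
profile = [10 + 190 * 0.5 * (1 + math.erf((x - 30) / 4.0)) for x in range(60)]
threshold = second_derivative_threshold(profile)
```

    For this blurred step, the criterion places the threshold on the bright shoulder of the edge rather than at the mid-intensity point a visual operator would tend to choose.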

  5. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments.

    PubMed

    Heskes, Tom; Eisinga, Rob; Breitling, Rainer

    2014-11-21

    The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Exact calculation as well as permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value, along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis of transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order-of-magnitude increase in throughput compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
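    For context, the gamma approximation that the paper's bounds improve upon can be stated compactly: with k replicates, ranks scaled to (0, 1] are roughly uniform under the null, so minus the log of the scaled rank product is approximately Gamma(k, 1). A pure-Python sketch with illustrative ranks, not data from the paper:

```python
# Hedged sketch of the standard gamma approximation to the rank product
# p-value (not the exact bounds the paper derives).
import math

def rank_product_p_gamma(ranks, n):
    k = len(ranks)
    x = -sum(math.log(r / n) for r in ranks)   # -log of the scaled rank product
    # Survival function of a Gamma(k, 1) (Erlang) variable evaluated at x
    return math.exp(-x) * sum(x ** i / math.factorial(i) for i in range(k))

# A molecule ranked 2nd, 5th, and 1st out of 1000 in three replicates
p = rank_product_p_gamma([2, 5, 1], n=1000)
```

    As the paper notes, this approximation degrades in the extreme tail, which is exactly where the derived bounds matter.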

  6. Fractional Derivative Models for Ultrasonic Characterization of Polymer and Breast Tissue Viscoelasticity

    PubMed Central

    Coussot, Cecile; Kalyanam, Sureshkumar; Yapp, Rebecca; Insana, Michael F.

    2009-01-01

    The viscoelastic response of hydropolymers, which include glandular breast tissues, may be accurately characterized for some applications with as few as 3 rheological parameters by applying the Kelvin-Voigt fractional derivative (KVFD) modeling approach. We describe a technique for ultrasonic imaging of KVFD parameters in media undergoing unconfined, quasi-static, uniaxial compression. We analyze the KVFD parameter values in simulated and experimental echo data acquired from phantoms and show that the KVFD parameters may concisely characterize the viscoelastic properties of hydropolymers. We then interpret the KVFD parameter values for normal and cancerous breast tissues and hypothesize that this modeling approach may ultimately be applied to tumor differentiation. PMID:19406700
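    A common statement of the KVFD constitutive law, with the three rheological parameters mentioned above (the notation here is generic and may differ from the authors'):

```latex
\sigma(t) = E_0\,\varepsilon(t) + \eta\,\frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}},
\qquad 0 < \alpha < 1
```

    The three parameters are the elastic modulus \(E_0\), the viscous coefficient \(\eta\), and the fractional order \(\alpha\) of the dashpot, which interpolates between purely elastic (\(\alpha \to 0\)) and purely viscous (\(\alpha \to 1\)) behavior.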

  7. Comparison of methods for accurate end-point detection of potentiometric titrations

    NASA Astrophysics Data System (ADS)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using a computer program. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.

  8. Numerical Solutions of the Mean-Value Theorem: New Methods for Downward Continuation of Potential Fields

    NASA Astrophysics Data System (ADS)

    Zhang, Chong; Lü, Qingtian; Yan, Jiayong; Qi, Guang

    2018-04-01

    Downward continuation can enhance small-scale sources and improve resolution. Nevertheless, the common methods have disadvantages in obtaining optimal results because of divergence and instability. We derive the mean-value theorem for potential fields, which can serve as the theoretical basis of some data processing and interpretation. Based on numerical solutions of the mean-value theorem, we present convergent and stable downward continuation methods that use the first-order vertical derivatives and their upward continuation. By applying one of our methods to both synthetic and real cases, we show that it is stable, convergent, and accurate. Meanwhile, compared with the fast Fourier transform Taylor series method and the integrated second vertical derivative Taylor series method, our method has very little boundary effect and remains stable in the presence of noise. We find that the character of the fading anomalies emerges properly in our downward continuation with respect to the original fields at the lower heights.

  9. Validation of one-mile walk equations for the estimation of aerobic fitness in British military personnel under the age of 40 years.

    PubMed

    Lunt, Heather; Roiz De Sa, Daniel; Roiz De Sa, Julia; Allsopp, Adrian

    2013-07-01

    To provide an accurate estimate of peak oxygen uptake (VO2 peak) for British Royal Navy personnel aged between 18 and 39 years, comparing a gold-standard treadmill-based maximal exercise test with a submaximal one-mile walk test. Two hundred military personnel consented to perform a treadmill-based VO2 peak test and two one-mile walk tests around an athletics track. The estimated VO2 peak values from three different one-mile walk equations were compared with directly measured VO2 peak values from the treadmill-based test. One hundred participants formed a validation group from which a new equation was derived, and the other 100 participants formed the cross-validation group. Existing equations underestimated the VO2 peak values of the fittest personnel and overestimated the VO2 peak of the least aerobically fit by between 2% and 18%. The new equation derived from the validation group has less bias, the highest correlation with the measured values (r = 0.83), and classified the most people correctly according to the Royal Navy's Fitness Test standards, producing the fewest false positives and false negatives combined (9%). The new equation will provide a more accurate estimate of VO2 peak for a British military population aged 18 to 39 years. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  10. Enthalpies of Formation of Hydrazine and Its Derivatives.

    PubMed

    Dorofeeva, Olga V; Ryzhova, Oxana N; Suchkova, Taisiya A

    2017-07-20

    Enthalpies of formation, ΔfH°298, in both the gas and condensed phase, and enthalpies of sublimation or vaporization have been estimated for hydrazine, NH2NH2, and 36 of its derivatives using quantum chemical calculations. The composite G4 method has been used along with isodesmic reaction schemes to derive a set of self-consistent, high-accuracy gas-phase enthalpies of formation. To estimate the enthalpies of sublimation and vaporization with reasonable accuracy (5-20 kJ/mol), the method of molecular electrostatic potential (MEP) has been used. The value of ΔfH°298(NH2NH2,g) = 97.0 ± 3.0 kJ/mol was determined from 75 isogyric reactions involving about 50 reference species; for most of these species, accurate ΔfH°298(g) values are available in the Active Thermochemical Tables (ATcT). The calculated value is in excellent agreement with the reported results of the most accurate models based on coupled cluster theory (97.3 kJ/mol, the average of six calculations). Thus, the difference between the values predicted by high-level theoretical calculations and the experimental value of ΔfH°298(NH2NH2,g) = 95.55 ± 0.19 kJ/mol recommended in the ATcT and other comprehensive reference sources is large enough to require further investigation. Various hydrazine derivatives have also been considered in this work. For some of them, both the enthalpy of formation in the condensed phase and the enthalpy of sublimation or vaporization are available; for other compounds, experimental data for only one of these properties exist. For the first group of compounds, the accuracy of the experimental data was supported by agreement with the theoretical ΔfH°298(g) values. The unknown property for the second group of compounds was predicted using the MEP model. This paper presents a systematic comparison of experimentally determined enthalpies of formation and enthalpies of sublimation or vaporization with the results of calculations. Because of the relatively large uncertainty in the estimated enthalpies of sublimation, it was not always possible to evaluate the accuracy of the experimental values; however, the model allowed us to detect large errors in the experimental data, as in the case of 5,5'-hydrazinebistetrazole. The enthalpies of formation and enthalpies of sublimation or vaporization have been predicted for the first time for ten hydrazine derivatives with no experimental data. A recommended set of self-consistent experimental and calculated gas-phase enthalpies of formation of hydrazine derivatives can be used as reference ΔfH°298(g) values to predict the enthalpies of formation of various hydrazines by means of isodesmic reactions.

  11. Order of accuracy of QUICK and related convection-diffusion schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as it is in single-point.
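    The 1/8-factor QUICK interpolation discussed above is compact enough to state directly. The sketch below verifies its defining property that the nodal-value face formula is exact for a quadratic field; the node and face locations form an illustrative uniform grid, not a case from the report:

```python
# Minimal sketch of the QUICK face interpolation: the face value is the
# linear average of the two straddling nodes minus 1/8 of the
# upstream-weighted curvature, which is exact for a quadratic field and
# consistent with third-order truncation error.
def quick_face(phi_UU, phi_U, phi_D):
    """Face value between nodes U and D, with flow running from UU toward D."""
    return 0.5 * (phi_U + phi_D) - 0.125 * (phi_UU - 2.0 * phi_U + phi_D)

# Nodes at x = -1, 0, 1 on a uniform grid; face at x = 0.5; phi = x^2
phi = lambda x: x * x
face = quick_face(phi(-1.0), phi(0.0), phi(1.0))   # should equal 0.5**2
```

    Replacing the 1/8 factor with 1/6 gives the SPUDS single-point variant the report contrasts with QUICK.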

  12. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using the three members of the Archimedean copulas, based on which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to the case study in Ertan reservoir, China. The results show that the transition probability matrix can be more easily and accurately obtained by the proposed copula function based method than conventional methods based on the observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
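    The copula-based transition probability described above reduces to a partial derivative of the copula with respect to the conditioning margin. A sketch for one Archimedean member, the Clayton copula, with a hypothetical dependence parameter rather than one fitted to the Ertan inflow series:

```python
# P(V <= v | U = u) = dC(u, v)/du for the Clayton copula
# C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0.
# u and v are the marginal CDF values of inflows in adjacent periods.
def clayton_conditional(u, v, theta):
    return u ** (-theta - 1.0) * (u ** -theta + v ** -theta - 1.0) ** (-(1.0 + 1.0 / theta))

# Illustrative: with positive dependence (theta = 2), a wet month is much
# more likely to be followed by a wet month than by a dry one.
p_wet = clayton_conditional(0.9, 0.9, theta=2.0)            # wet after wet
p_dry_after_wet = clayton_conditional(0.9, 0.1, theta=2.0)  # dry after wet
```

    Discretizing u and v into flow classes and evaluating this conditional CDF at the class boundaries yields the transition probability matrix used by the SDP recursion.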

  13. Suitability of parametric models to describe the hydraulic properties of an unsaturated coarse sand and gravel

    USGS Publications Warehouse

    Mace, Andy; Rudolph, David L.; Kachanoski, R. Gary

    1998-01-01

    The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous-cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem criterion (m = 1 - 1/n) is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with the residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ∼ 0, rather than with model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
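    The restricted van Genuchten retention curve and the Mualem-based K(θ) prediction evaluated above can be sketched directly. The parameter values below are generic illustrative numbers for a coarse soil, not the fitted values from the outwash cores:

```python
# van Genuchten retention curve with the Mualem restriction m = 1 - 1/n,
# and the resulting van Genuchten-Mualem conductivity prediction.
def theta_vg(h, theta_r, theta_s, alpha, n):
    """Water content at pressure head h (h > 0 taken as suction)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** -m            # effective saturation
    return theta_r + (theta_s - theta_r) * se

def k_mualem(theta, theta_r, theta_s, n, k_s):
    """Unsaturated conductivity from the van Genuchten-Mualem model."""
    m = 1.0 - 1.0 / n
    se = (theta - theta_r) / (theta_s - theta_r)
    return k_s * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Illustrative coarse-soil parameters (alpha in 1/cm, k_s in cm/s)
theta_r, theta_s, alpha, n, k_s = 0.05, 0.35, 0.10, 3.0, 1.0e-3
theta_mid = theta_vg(20.0, theta_r, theta_s, alpha, n)   # at h = 20 cm suction
k_mid = k_mualem(theta_mid, theta_r, theta_s, n, k_s)
```

    Scaling with a measured unsaturated K, as the study recommends, amounts to replacing k_s with a value chosen so the curve passes through the measured (θ, K) point.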

  14. HPTLC Determination of Artemisinin and Its Derivatives in Bulk and Pharmaceutical Dosage

    NASA Astrophysics Data System (ADS)

    Agarwal, Suraj P.; Ahuja, Shipra

    A simple, selective, accurate, and precise high-performance thin-layer chromatographic (HPTLC) method has been established and validated for the analysis of artemisinin and its derivatives (artesunate, artemether, and arteether) in bulk drugs and formulations. Artemisinin, artesunate, artemether, and arteether were separated on aluminum-backed silica gel 60 F254 plates with toluene:ethyl acetate (10:1), toluene:ethyl acetate:acetic acid (2:8:0.2), toluene:butanol (10:1), and toluene:dichloromethane (0.5:10) mobile phases, respectively. The detector response was linear for concentrations between 100 and 600 ng/spot, with r values of 0.9967, 0.9989, 0.9981, and 0.9989 for artemisinin, artesunate, artemether, and arteether, respectively. Statistical analysis showed that the method is precise, accurate, and reproducible, and it can therefore be employed for routine analysis.

  15. Real-ear-to-coupler difference predictions as a function of age for two coupling procedures.

    PubMed

    Bagatto, Marlene P; Scollie, Susan D; Seewald, Richard C; Moodie, K Shane; Hoover, Brenda M

    2002-09-01

    The predicted real-ear-to-coupler difference (RECD) values currently used in pediatric hearing instrument prescription methods are based on 12-month age range categories and were derived from measures using standard acoustic immittance probe tips. Consequently, the purpose of this study was to develop normative RECD predicted values for foam/acoustic immittance tips and custom earmolds across the age continuum. To this end, RECD data were collected on 392 infants and children (141 with acoustic immittance tips, 251 with earmolds) to develop normative regression equations for use in deriving continuous age predictions of RECDs for foam/acoustic immittance tips and earmolds. Owing to the substantial between-subject variability observed in the data, the predictive equations of RECDs by age (in months) resulted in only gross estimates of RECD values (i.e., within +/- 4.4 dB for 95% of acoustic immittance tip measures; within +/- 5.4 dB in 95% of measures with custom earmolds) across frequency. Thus, it is concluded that the estimates derived from this study should not be used to replace the more precise individual RECD measurements. Relative to previously available normative RECD values for infants and young children, however, the estimates derived through this study provide somewhat more accurate predicted values for use under those circumstances for which individual RECD measurements cannot be made.

  16. Measuring Gas-Phase Basicities of Amino Acids Using an Ion Trap Mass Spectrometer: A Physical Chemistry Laboratory Experiment

    ERIC Educational Resources Information Center

    Sunderlin, Lee S.; Ryzhov, Victor; Keller, Lanea M. M.; Gaillard, Elizabeth R.

    2005-01-01

    An experiment is performed to measure the relative gas-phase basicities of a series of five amino acids and to compare the results to literature values. The experiments use the kinetic method for deriving ion thermochemistry and allow students to perform accurate measurements of thermodynamics in a relatively short time.

  17. Hyperpolarized 15N-pyridine Derivatives as pH-Sensitive MRI Agents

    PubMed Central

    Jiang, Weina; Lumata, Lloyd; Chen, Wei; Zhang, Shanrong; Kovacs, Zoltan; Sherry, A. Dean; Khemtong, Chalermchai

    2015-01-01

    Highly sensitive MR imaging agents that can accurately and rapidly monitor changes in pH would have diagnostic and prognostic value for many diseases. Here, we report an investigation of hyperpolarized 15N-pyridine derivatives as ultrasensitive pH-sensitive imaging probes. These molecules are easily polarized to high levels using standard dynamic nuclear polarization (DNP) techniques and their 15N chemical shifts were found to be highly sensitive to pH. These probes displayed sharp 15N resonances and large differences in chemical shifts (Δδ >90 ppm) between their free base and protonated forms. These favorable features make these agents highly suitable candidates for the detection of small changes in tissue pH near physiological values. PMID:25774436
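    The pH readout follows from the fast-exchange population average of the protonated and free-base chemical shifts, inverted via the Henderson-Hasselbalch relation. The shift separation and pKa below are illustrative stand-ins, not measured values from this work:

```python
# Two-site fast exchange: the observed shift is the population-weighted
# average of the protonated (acid) and free-base shifts, so pH follows
# from a Henderson-Hasselbalch rearrangement.
import math

def ph_from_shift(delta_obs, delta_base, delta_acid, pka):
    """Invert the fast-exchange shift average for pH (shifts in ppm)."""
    frac_acid = (delta_base - delta_obs) / (delta_base - delta_acid)
    return pka + math.log10((1.0 - frac_acid) / frac_acid)

# Illustrative: ~90 ppm separation between forms, pKa near physiological pH
ph = ph_from_shift(delta_obs=260.0, delta_base=300.0, delta_acid=210.0, pka=7.0)
```

    The large shift separation is what makes the probe sensitive: a small pH change moves the observed resonance by several ppm near the pKa.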

  18. Accuracy of Self-Reported Physical Activity Levels in Obese Adolescents

    PubMed Central

    Elliott, Sarah A.; Baxter, Kimberley A.; Davies, Peter S. W.; Truby, Helen

    2014-01-01

    Introduction. Self-reported measures of habitual physical activity rely completely on the respondent's ability to provide accurate information on their own physical activity behaviours. Our aim was to investigate if obese adolescents could accurately report their physical activity levels (PAL) using self-reported diaries. Methods. Total energy expenditure (TEE) was measured using doubly labelled water (DLW) and resting energy expenditure (REE) was measured via indirect calorimetry. Activity energy expenditure (AEE) and PAL values were derived from measured TEE and REE. Self-reported, four-day activity diaries were used to calculate daily MET values and averaged to give an estimated PAL value (ePAL). Results. Twenty-two obese adolescents, mean age 13.2 ± 1.8 years, mean BMI 31.3 ± 4.6 kg/m2, completed the study. No significant differences between mean measured and estimated PAL values were observed (1.37 ± 0.13 versus 1.40 ± 0.34, P = 0.74). Bland Altman analysis illustrated a significant relationship (r = −0.76, P < 0.05) between the two methods; thus the bias was not consistent across a range of physical activity levels, with the more inactive overreporting their physical activity. Conclusion. At an individual level, obese adolescents are unlikely to be able to provide an accurate estimation of their own activity. PMID:25247095
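    The derived quantities reduce to simple combinations of the two measured energy expenditures. A minimal sketch, assuming the common 10% allowance for the thermic effect of food when computing AEE (stated here as an assumption, not the authors' exact protocol):

```python
# PAL and AEE from doubly labelled water TEE and indirect-calorimetry REE
# (units kcal/day; TEF fraction is an assumed convention).
def pal(tee, ree):
    """Physical activity level: total over resting energy expenditure."""
    return tee / ree

def aee(tee, ree, tef_fraction=0.10):
    """Activity energy expenditure after removing the thermic effect of food."""
    return tee * (1.0 - tef_fraction) - ree

subject_pal = pal(tee=2740.0, ree=2000.0)   # illustrative subject, PAL = 1.37
subject_aee = aee(tee=2740.0, ree=2000.0)
```

    The self-reported diaries estimate the same PAL quantity from averaged daily MET values, which is what the Bland-Altman comparison above evaluates.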

  19. An accurate density functional theory based estimation of pK(a) values of polar residues combined with experimental data: from amino acids to minimal proteins.

    PubMed

    Matsui, Toru; Baba, Takeshi; Kamiya, Katsumasa; Shigeta, Yasuteru

    2012-03-28

    We report a scheme for estimating the acid dissociation constant (pK(a)) based on quantum-chemical calculations combined with a polarizable continuum model, where a parameter is determined for small reference molecules. We calculated the pK(a) values of variously sized molecules, ranging from an amino acid to a protein consisting of 300 atoms. This scheme enabled us to derive semiquantitative pK(a) values of specific chemical groups and to discuss the influence of the surroundings on the pK(a) values. As applications, we derived the pK(a) value of the side chain of an amino acid and almost reproduced the experimental value. Using our computational scheme, we showed the influence of hydrogen bonds on the pK(a) values in the case of tripeptides, which decrease the pK(a) value of serine by 3.0 units in comparison with that of the corresponding monopeptide. Finally, with some assumptions, we derived the pK(a) values of tyrosines and serines in chignolin and a tryptophan cage. We obtained quite different pK(a) values for adjacent serines in the tryptophan cage: the pK(a) value of the OH group of Ser13, which is exposed to bulk water, is 14.69, whereas that of Ser14, which is not exposed to bulk water, is 20.80 because of internal hydrogen bonds.
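    The parameterized scheme amounts to anchoring computed deprotonation free energies to a reference molecule of known pKa. A sketch with hypothetical free-energy values (the actual quantum-chemical and continuum-solvation terms are not reproduced here):

```python
# Reference-anchored pKa estimate: pKa(target) = pKa(ref) +
# (dG_target - dG_ref) / (RT ln 10), with dG the solution-phase
# deprotonation free energies in kJ/mol. All dG values below are
# hypothetical placeholders, not computed results from the paper.
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)
T = 298.15           # temperature, K

def pka_from_reference(dg_target, dg_ref, pka_ref):
    return pka_ref + (dg_target - dg_ref) / (R * T * math.log(10))

# Hypothetical: target deprotonation is 17 kJ/mol less favorable than
# that of a reference alcohol with pKa 13.0
pka = pka_from_reference(dg_target=1250.0, dg_ref=1233.0, pka_ref=13.0)
```

    At 298 K, RT ln 10 is about 5.7 kJ/mol, so each 5.7 kJ/mol difference in deprotonation free energy shifts the estimated pKa by one unit.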

  20. The mortality risk score and the ADG score: two points-based scoring systems for the Johns Hopkins aggregated diagnosis groups to predict mortality in a general adult population cohort in Ontario, Canada.

    PubMed

    Austin, Peter C; Walraven, Carl van

    2011-10-01

    Logistic regression models that incorporated age, sex, and indicator variables for the Johns Hopkins' Aggregated Diagnosis Groups (ADGs) categories have been shown to accurately predict all-cause mortality in adults. The objective was to develop 2 different point-scoring systems using the ADGs. The Mortality Risk Score (MRS) collapses age, sex, and the ADGs to a single summary score that predicts the annual risk of all-cause death in adults. The ADG Score derives weights for the individual ADG diagnosis groups. A retrospective cohort was constructed using population-based administrative data. All 10,498,413 residents of Ontario, Canada, between the ages of 20 and 100 years who were alive on their birthday in 2007, participated in this study. Participants were randomly divided into derivation and validation samples. The outcome was death within 1 year. In the derivation cohort, the MRS ranged from -21 to 139 (median value 29, IQR 17 to 44). In the validation group, a logistic regression model with the MRS as the sole predictor significantly predicted the risk of 1-year mortality with a c-statistic of 0.917. A regression model with age, sex, and the ADG Score had similar performance. Both methods accurately predicted the risk of 1-year mortality across the 20 vigintiles of risk. The MRS combined values for a person's age, sex, and the Johns Hopkins ADGs to accurately predict 1-year mortality in adults. The ADG Score is a weighted score representing the presence or absence of the 32 ADG diagnosis groups. These scores will facilitate health services researchers conducting risk adjustment using administrative health care databases.
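The c-statistic reported for the MRS model is the probability that a randomly chosen patient who died receives a higher predicted risk than a randomly chosen survivor (with ties counting one half). A minimal sketch of that computation, with toy scores rather than the study's data:

```python
def c_statistic(scores_events, scores_nonevents):
    """Concordance probability: P(score of event case > score of non-event case)."""
    wins = ties = 0
    for e in scores_events:
        for n in scores_nonevents:
            if e > n:
                wins += 1
            elif e == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_events) * len(scores_nonevents))

# Toy example: perfectly separated scores give a c-statistic of 1.0
c = c_statistic([0.9, 0.8], [0.1, 0.2])
```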

  1. Estimation of height and body mass index from demi-span in elderly individuals.

    PubMed

    Weinbrenner, Tanja; Vioque, Jesús; Barber, Xavier; Asensio, Laura

    2006-01-01

    Obtaining accurate height and, consequently, body mass index (BMI) measurements in elderly subjects can be difficult due to changes in posture and loss of height during ageing. Measurements of other body segments can be used as an alternative to estimate standing height, but population- and age-specific equations are necessary. Our objectives were to validate existing equations, to develop new simple equations to predict height in an elderly Spanish population and to assess the accuracy of the BMI calculated by estimated height from the new equations. We measured height and demi-span in a representative sample of 592 individuals, 271 men and 321 women, 65 years and older (mean ± SD, 73.8 ± 6.3 years). We suggested equations to predict height from demi-span by multiple regression analyses and performed an agreement analysis between measured and estimated indices. Height estimated from demi-span correlated significantly (p < 0.001) with measured height (men: r = 0.708, women: r = 0.625). The best prediction equations were as follows: men, height (in cm) = 77.821 + (1.132 x demi-span in cm) + (-0.215 x 5-year age category); women: height (in cm) = 88.854 + (0.899 x demi-span in cm) + (-0.692 x 5-year age category). No significant differences between the mean values of estimated and measured heights were found for men (-0.03 ± 4.6 cm) or women (-0.02 ± 4.1 cm). The BMI derived from measured height did not differ significantly from the BMI derived from estimated height either. Predicted height values from equations based on demi-span and age may be acceptable surrogates to derive accurate nutritional indices such as the BMI, particularly in elderly populations, where height may be difficult to measure accurately.
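The published prediction equations can be applied directly; a small sketch follows. Note that the coding of the 5-year age category (e.g. whether 65-69 is coded 1) is not given in the abstract, so that argument's interpretation is an assumption here:

```python
def predicted_height_cm(demi_span_cm, age_category_5yr, sex):
    """Height from demi-span using the equations reported above.
    age_category_5yr is the study's 5-year age category; its exact
    coding is not stated in the abstract and is assumed here."""
    if sex == "male":
        return 77.821 + 1.132 * demi_span_cm - 0.215 * age_category_5yr
    if sex == "female":
        return 88.854 + 0.899 * demi_span_cm - 0.692 * age_category_5yr
    raise ValueError("sex must be 'male' or 'female'")

def bmi(weight_kg, height_cm):
    """Body mass index in kg/m^2."""
    return weight_kg / (height_cm / 100.0) ** 2
```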

  2. Validation of the Five-Phase Method for Simulating Complex Fenestration Systems with Radiance against Field Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.

    2016-08-29

    The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.

  3. Radiative transfer in the surfaces of atmosphereless bodies. III - Interpretation of lunar photometry

    NASA Technical Reports Server (NTRS)

    Lumme, K.; Irvine, W. M.

    1982-01-01

    Narrowband and UBV photoelectric phase curves of the entire lunar disk and surface photometry of some craters have been interpreted using a newly developed generalized radiative transfer theory for planetary regoliths. The data are well fitted by the theory, yielding information on both macroscopic and microscopic lunar properties. Derived values for the integrated disk geometric albedo are considerably higher than quoted previously, because of the present inclusion of an accurately determined opposition effect. The mean surface roughness, defined as the ratio of the height to the radius of a typical irregularity, is found to be 0.9 ± 0.1, or somewhat less than the mean value of 1.2 obtained for the asteroids. From the phase curves, wavelength-dependent values of the single scattering albedo and the Henyey-Greenstein asymmetry factor for the average surface particle are derived.

  4. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    PubMed

    Ou, Linjun; Cao, Jian

    2014-09-01

    In chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to identify peaks. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving-average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was achieved by using multiple iterations of the moving average of the signal and square-wave curves to determine the optimal values of the normalized peak-identification parameters, combined with absolute peak retention times and a peak window. The experimental results show that this algorithm identifies peaks accurately and is not sensitive to noise, chromatographic peak width, or changes in peak shape. It is adaptable enough to meet the on-site requirements of online devices that monitor dissolved gases in transformer oil.
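The general idea, iterative moving-average smoothing followed by normalized thresholding of local maxima, can be sketched as below. This is an illustration of the approach, not the authors' implementation, and the window and threshold values are arbitrary:

```python
import numpy as np

def moving_average(x, window=5, iterations=3):
    """Iteratively smooth a signal with a simple moving average."""
    kernel = np.ones(window) / window
    x = np.asarray(x, dtype=float)
    for _ in range(iterations):
        x = np.convolve(x, kernel, mode="same")
    return x

def find_peaks(signal, threshold=0.2):
    """Flag local maxima of the smoothed, min-max normalized signal."""
    s = moving_average(signal)
    s = (s - s.min()) / (s.max() - s.min())   # normalized analysis
    return [i for i in range(1, len(s) - 1)
            if s[i] > threshold and s[i] >= s[i - 1] and s[i] > s[i + 1]]
```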

  5. Experimental and theoretical oscillator strengths of Mg I for accurate abundance analysis

    NASA Astrophysics Data System (ADS)

    Pehlivan Rhodin, A.; Hartman, H.; Nilsson, H.; Jönsson, P.

    2017-02-01

    Context. With the aid of stellar abundance analysis, it is possible to study Galactic formation and evolution. Magnesium is an important element to trace the α-element evolution in our Galaxy. For chemical abundance analysis, such as magnesium abundance, accurate and complete atomic data are essential. Inaccurate atomic data lead to uncertain abundances and prevent discrimination between different evolution models. Aims: We study the spectrum of neutral magnesium from laboratory measurements and theoretical calculations. Our aim is to improve the oscillator strengths (f-values) of Mg I lines and to create a complete set of accurate atomic data, particularly for the near-IR region. Methods: We derived oscillator strengths by combining the experimental branching fractions with radiative lifetimes reported in the literature and computed in this work. A hollow cathode discharge lamp was used to produce free atoms in the plasma and a Fourier transform spectrometer recorded the intensity-calibrated high-resolution spectra. In addition, we performed theoretical calculations using the multiconfiguration Hartree-Fock program ATSP2K. Results: This project provides a set of experimental and theoretical oscillator strengths. We derived 34 experimental oscillator strengths. Except for the Mg I optical triplet lines (3p 3P°0,1,2-4s 3S1), these oscillator strengths are measured for the first time. The theoretical oscillator strengths are in very good agreement with the experimental data and complement the missing transitions of the experimental data up to n = 7 from even and odd parity terms. We present an evaluated set of oscillator strengths, gf, with uncertainties as small as 5%. The new oscillator strength values of the Mg I optical triplet lines (3p 3P°0,1,2-4s 3S1) are 0.08 dex larger than the previous measurements.

  6. Assigning a Socio-Economic Status Value to Student Records: A Useful Tool for Planning, Reporting and Institutional Research

    ERIC Educational Resources Information Center

    Delaney, Julie; Tangtulyangkul, Ploy; McCormack, Robert

    2013-01-01

    In an educational context, the accurate determination of each student's socioeconomic status (SES) is important for planning, reporting and general institutional research. This article describes a project undertaken to develop the means to derive a proxy measure of students' SES, based on home address location and Australian Bureau of Statistics…

  7. Myocardial T1 mapping at 3.0 tesla using an inversion recovery spoiled gradient echo readout and bloch equation simulation with slice profile correction (BLESSPC) T1 estimation algorithm.

    PubMed

    Shao, Jiaxin; Rapacchi, Stanislas; Nguyen, Kim-Lien; Hu, Peng

    2016-02-01

    To develop an accurate and precise myocardial T1 mapping technique using an inversion recovery spoiled gradient echo readout at 3.0 Tesla (T). The modified Look-Locker inversion-recovery (MOLLI) sequence was adapted to use a fast low angle shot (FLASH) readout, incorporating a BLESSPC (Bloch Equation Simulation with Slice Profile Correction) T1 estimation algorithm, for accurate myocardial T1 mapping. The FLASH-MOLLI with BLESSPC fitting was compared with different approaches and sequences with regard to T1 estimation accuracy, precision and image artifact based on simulation, phantom studies, and in vivo studies of 10 healthy volunteers and three patients at 3.0 Tesla. The FLASH-MOLLI with BLESSPC fitting yields accurate T1 estimation (average error = -5.4 ± 15.1 ms, percentage error = -0.5% ± 1.2%) for T1 from 236-1852 ms and heart rate from 40-100 bpm in phantom studies. The FLASH-MOLLI sequence prevented off-resonance artifacts in all 10 healthy volunteers at 3.0T. In vivo, there was no significant difference between FLASH-MOLLI-derived myocardial T1 values and "ShMOLLI+IE" derived values (1458.9 ± 20.9 ms versus 1464.1 ± 6.8 ms, P = 0.50); however, the average precision by FLASH-MOLLI was significantly better than that generated by "ShMOLLI+IE" (1.84 ± 0.36% variance versus 3.57 ± 0.94%, P < 0.001). The FLASH-MOLLI with BLESSPC fitting yields accurate and precise T1 estimation, and eliminates banding artifacts associated with bSSFP at 3.0T. © 2015 Wiley Periodicals, Inc.
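For context, conventional MOLLI-type fitting uses the three-parameter Look-Locker model with the standard apparent-T1 correction; the BLESSPC algorithm above replaces this with Bloch-equation simulation. The sketch below shows only that conventional baseline, not BLESSPC itself:

```python
import math

def looklocker_signal(TI_ms, A, B, T1_star_ms):
    """Apparent inversion-recovery signal: S(TI) = A - B * exp(-TI / T1*)."""
    return A - B * math.exp(-TI_ms / T1_star_ms)

def looklocker_T1(A, B, T1_star_ms):
    """Standard Look-Locker correction: T1 = T1* * (B/A - 1)."""
    return T1_star_ms * (B / A - 1.0)
```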

  8. Ion Mobility-Derived Collision Cross Section As an Additional Measure for Lipid Fingerprinting and Identification

    PubMed Central

    2014-01-01

    Despite recent advances in analytical and computational chemistry, lipid identification remains a significant challenge in lipidomics. Ion-mobility spectrometry provides an accurate measure of the molecules’ rotationally averaged collision cross-section (CCS) in the gas phase and is thus related to ionic shape. Here, we investigate the use of CCS as a highly specific molecular descriptor for identifying lipids in biological samples. Using traveling wave ion mobility mass spectrometry (MS), we measured the CCS values of over 200 lipids within multiple chemical classes. CCS values derived from ion mobility were not affected by instrument settings or chromatographic conditions, and they were highly reproducible on instruments located in independent laboratories (interlaboratory RSD < 3% for 98% of molecules). CCS values were used as additional molecular descriptors to identify brain lipids using a variety of traditional lipidomic approaches. The addition of CCS improved the reproducibility of analysis in a liquid chromatography-MS workflow and maximized the separation of isobaric species and the signal-to-noise ratio in direct-MS analyses (e.g., “shotgun” lipidomics and MS imaging). These results indicate that adding CCS to databases and lipidomics workflows increases the specificity and selectivity of analysis, thus improving the confidence in lipid identification compared to traditional analytical approaches. The CCS/accurate-mass database described here is made publicly available. PMID:25495617

  9. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
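For the discrete case, a compact way to perform least-squares fitting in a Legendre basis with standard tooling (numpy's `legfit`, shown here as a stand-in for the paper's scheme, which the authors compare against SVD-based fitting) is:

```python
import numpy as np
from numpy.polynomial import legendre

# Sample a function that is exactly c0*P0 + c1*P1 + c2*P2 on [-1, 1]
x = np.linspace(-1.0, 1.0, 201)
true_coeffs = np.array([1.0, 2.0, 0.5])
y = legendre.legval(x, true_coeffs)

# Least-squares fit in the Legendre basis recovers the coefficients
fit_coeffs = legendre.legfit(x, y, deg=2)
```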

  10. Land product validation of MODIS derived FPAR product over the tropical dry-forest of Santa Rosa National Park, Guanacaste, Costa Rica.

    NASA Astrophysics Data System (ADS)

    Sharp, Iain; Sanchez, Arturo

    2017-04-01

    In remote sensing, being able to ensure the accuracy of the satellite data being produced remains an issue; this is especially true for phenological variables such as the Fraction of Photosynthetically Active Radiation (FPAR). FPAR, which is considered an essential climate variable by the Global Terrestrial Observation System (GTOS), utilizes the 400-700 nm wavelength range to quantify the total amount of solar radiation available for photosynthetic use. It is a variable that is strongly influenced by the seasonal, diurnal, and optical properties of vegetation, making it an accurate representation of vegetation health. Measurements of ground-level FPAR can be completed using flux towers along with a limited number of wireless ground sensors, but due to the finite number and location of these towers, many research initiatives instead use the Moderate Resolution Imaging Spectroradiometer (MODIS) FPAR product, which converts Leaf Area Index (LAI) to an FPAR value using Beer's law. This is done despite there being little consensus on whether this is the best method to use for all ecosystems and vegetation types. One particular ecosystem that has had limited study to determine the accuracy of the MODIS derived FPAR products is the Tropical Dry Forests (TDFs) of Latin America. This ecosystem undergoes drastic seasonal changes from leaf-off during the dry season to green-up during the wet seasons. This study aims to test the congruency between the MODIS derived FPAR values and ground-based FPAR values in relation to growing season length, growing season start and end dates, the peak and mean of FPAR values, and overall growth/phenological trends at the Santa Rosa National Park Environmental Monitoring Super Site (SR-EMSS) in Costa Rica. 
We derive our FPAR from a Wireless Sensor Network (WSN) consisting of more than 50 nodes measuring transmitted PAR, temperature, relative humidity, and soil moisture over custom time intervals ranging from 2-Hz to 15 min since 2013. Our fundamental goal is to demonstrate how accurate and reflective the MODIS derived FPAR product is of TDF phenology. This will be the first step taken in identifying potential problems with the MODIS derived FPAR products over TDFs in the Americas.
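The Beer's-law conversion mentioned above can be written in one line; the extinction coefficient k below is an illustrative value, not the one used by the MODIS LAI/FPAR algorithm:

```python
import math

def fpar_from_lai(lai, k=0.5):
    """FPAR from leaf area index under Beer's-law canopy attenuation:
    FPAR = 1 - exp(-k * LAI). k = 0.5 is illustrative only."""
    return 1.0 - math.exp(-k * lai)
```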

  11. Reference Values for Spirometry Derived Using Lambda, Mu, Sigma (LMS) Method in Korean Adults: in Comparison with Previous References.

    PubMed

    Jo, Bum Seak; Myong, Jun Pyo; Rhee, Chin Kook; Yoon, Hyoung Kyu; Koo, Jung Wan; Kim, Hyoung Ryoul

    2018-01-15

    The present study aimed to update the prediction equations for spirometry and their lower limits of normal (LLN) by using the lambda, mu, sigma (LMS) method and to compare the outcomes with the values of previous spirometric reference equations. Spirometric data of 10,249 healthy non-smokers (8,776 females) were extracted from the fourth and fifth versions of the Korea National Health and Nutrition Examination Survey (KNHANES IV, 2007-2009; V, 2010-2012). Reference equations were derived using the LMS method, which allows modeling of skewness (lambda [L]), mean (mu [M]), and coefficient of variation (sigma [S]). The outcome equations were compared with previous reference values. Prediction equations were presented in the following form: predicted value = exp(a + b × ln(height) + c × ln(age) + M-spline). The new predicted values for spirometry and their LLN derived using the LMS method were shown to more accurately reflect transitions in pulmonary function in young adults than previous prediction equations derived using conventional regression analysis in 2013. There were partial discrepancies between the new reference values and the reference values from the Global Lung Function Initiative in 2012. The results should be interpreted with caution for young adults and elderly males, particularly in terms of the LLN for forced expiratory volume in one second/forced vital capacity in elderly males. Serial spirometry follow-up, together with correlations with other clinical findings, should be emphasized in evaluating the pulmonary function of individuals. Future studies are needed to improve the accuracy of reference data and to develop continuous reference values for spirometry across all ages. © 2018 The Korean Academy of Medical Sciences.
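The quoted equation form, together with the standard LMS back-transform for a centile (the LLN is conventionally the 5th centile, z = -1.645), can be sketched as follows. The coefficients in the test values are placeholders, not the published Korean reference values:

```python
import math

def lms_predicted(height_cm, age_yr, a, b, c, m_spline=0.0):
    """Predicted value = exp(a + b*ln(height) + c*ln(age) + M-spline term)."""
    return math.exp(a + b * math.log(height_cm) + c * math.log(age_yr) + m_spline)

def lms_centile(L, M, S, z):
    """Standard LMS back-transform (L != 0): M * (1 + L*S*z) ** (1/L)."""
    return M * (1.0 + L * S * z) ** (1.0 / L)

def lln(L, M, S):
    """Lower limit of normal = 5th centile (z = -1.645)."""
    return lms_centile(L, M, S, -1.645)
```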

  12. Relative importance of first and second derivatives of nuclear magnetic resonance chemical shifts and spin-spin coupling constants for vibrational averaging.

    PubMed

    Dracínský, Martin; Kaminský, Jakub; Bour, Petr

    2009-03-07

    Relative importance of anharmonic corrections to molecular vibrational energies, nuclear magnetic resonance (NMR) chemical shifts, and J-coupling constants was assessed for a model set of methane derivatives, differently charged alanine forms, and sugar models. Molecular quartic force fields and NMR parameter derivatives were obtained quantum mechanically by a numerical differentiation. In most cases the harmonic vibrational function combined with the property second derivatives provided the largest correction of the equilibrium values, while anharmonic corrections (third and fourth energy derivatives) were found less important. The most computationally expensive off-diagonal quartic energy derivatives involving four different coordinates provided a negligible contribution. The vibrational corrections of NMR shifts were small and yielded a convincing improvement only for very accurate wave function calculations. For the indirect spin-spin coupling constants the averaging significantly improved already the equilibrium values obtained at the density functional theory level. Both first and complete second shielding derivatives were found important for the shift corrections, while for the J-coupling constants the vibrational parts were dominated by the diagonal second derivatives. The vibrational corrections were also applied to some isotopic effects, where the corrected values reasonably well reproduced the experiment, but only if a full second-order expansion of the NMR parameters was included. Contributions of individual vibrational modes for the averaging are discussed. Similar behavior was found for the methane derivatives, and for the larger and polar molecules. The vibrational averaging thus facilitates interpretation of previous experimental results and suggests that it can make future molecular structural studies more reliable. 
Because of the lengthy numerical differentiation required to compute the NMR parameter derivatives, their analytical implementation in future quantum chemistry packages is desirable.

  13. Hydrophilic interaction liquid chromatography of anthranilic acid-labelled oligosaccharides with a 4-aminobenzoic acid ethyl ester-labelled dextran hydrolysate internal standard.

    PubMed

    Neville, David C A; Alonzi, Dominic S; Butters, Terry D

    2012-04-13

    Hydrophilic interaction liquid chromatography (HILIC) of fluorescently labelled oligosaccharides is used in many laboratories to analyse complex oligosaccharide mixtures. Separations are routinely performed using a TSK gel-Amide 80 HPLC column, and retention times of different oligosaccharide species are converted to glucose unit (GU) values that are determined with reference to an external standard. However, if retention times were to be compared with an internal standard, consistent and more accurate GU values would be obtained. We present a method to perform internal standard-calibrated HILIC of fluorescently labelled oligosaccharides. The method relies on co-injection of 4-aminobenzoic acid ethyl ester (4-ABEE)-labelled internal standard and detection by UV absorption, with 2-AA (2-aminobenzoic acid)-labelled oligosaccharides. 4-ABEE is a UV chromophore and a fluorophore, but there is no overlap of the fluorescent spectrum of 4-ABEE with the commonly used fluorescent reagents. The dual nature of 4-ABEE allows for accurate calculation of the delay between UV and fluorescent signals when determining the GU values of individual oligosaccharides. The GU values obtained are inherently more accurate as slight differences in gradients that can influence retention are negated by use of an internal standard. Therefore, this paper provides the first method for determination of HPLC-derived GU values of fluorescently labelled oligosaccharides using an internal calibrant. Copyright © 2012 Elsevier B.V. All rights reserved.
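At its core, converting retention times to GU values against a dextran ladder is piecewise-linear interpolation; a minimal sketch follows (the retention times are invented for illustration, and the UV-to-fluorescence delay correction the authors describe is omitted):

```python
import numpy as np

def gu_values(rt_analytes, rt_ladder):
    """Glucose-unit values by interpolating analyte retention times
    against a dextran-ladder calibration (ladder peak i has GU = i + 1)."""
    gu_ladder = np.arange(1, len(rt_ladder) + 1, dtype=float)
    return np.interp(rt_analytes, rt_ladder, gu_ladder)

# Invented ladder retention times for GU 1..5
ladder_rt = [5.0, 8.0, 10.5, 12.5, 14.0]
gus = gu_values([9.25, 13.25], ladder_rt)
```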

  14. The Science and Application of Satellite Based Fire Radiative Energy

    NASA Technical Reports Server (NTRS)

    Ellicott, Evan; Vermote, Eric (Editor)

    2012-01-01

    The accurate measurement of ecosystem biomass is of great importance in the scientific, resource management, and energy sectors. In particular, biomass is a direct measurement of carbon storage within an ecosystem and of great importance for carbon cycle science and carbon emission mitigation. Remote sensing is the most accurate tool for global biomass measurements because of the ability to measure large areas. Current biomass estimates are derived primarily from ground-based samples, as compiled and reported in inventories and ecosystem samples. By using remote sensing technologies, we are able to scale up the sample values and supply wall-to-wall mapping of biomass.

  15. Accuracy of a Wrist-Worn Wearable Device for Monitoring Heart Rates in Hospital Inpatients: A Prospective Observational Study.

    PubMed

    Kroll, Ryan R; Boyd, J Gordon; Maslove, David M

    2016-09-20

    As the sensing capabilities of wearable devices improve, there is increasing interest in their application in medical settings. Capabilities such as heart rate monitoring may be useful in hospitalized patients as a means of enhancing routine monitoring or as part of an early warning system to detect clinical deterioration. To evaluate the accuracy of heart rate monitoring by a personal fitness tracker (PFT) among hospital inpatients. We conducted a prospective observational study of 50 stable patients in the intensive care unit who each completed 24 hours of heart rate monitoring using a wrist-worn PFT. Accuracy of heart rate recordings was compared with gold standard measurements derived from continuous electrocardiographic (cECG) monitoring. The accuracy of heart rates measured by pulse oximetry (Spo2.R) was also measured as a positive control. On a per-patient basis, PFT-derived heart rate values were slightly lower than those derived from cECG monitoring (average bias of -1.14 beats per minute [bpm], with limits of agreement of 24 bpm). By comparison, Spo2.R recordings produced more accurate values (average bias of +0.15 bpm, limits of agreement of 13 bpm, P<.001 as compared with PFT). Personal fitness tracker device performance was significantly better in patients in sinus rhythm than in those who were not (average bias -0.99 bpm vs -5.02 bpm, P=.02). Personal fitness tracker-derived heart rates were slightly lower than those derived from cECG monitoring in real-world testing and not as accurate as Spo2.R-derived heart rates. Performance was worse among patients who were not in sinus rhythm. Further clinical evaluation is indicated to see if PFTs can augment early warning systems in hospitals. ClinicalTrials.gov NCT02527408; https://clinicaltrials.gov/ct2/show/NCT02527408 (Archived by WebCite at  http://www.webcitation.org/6kOFez3on).

  16. Substituent and ring effects on enthalpies of formation: 2-methyl- and 2-ethylbenzimidazoles versus benzene- and imidazole-derivatives

    NASA Astrophysics Data System (ADS)

    Jiménez, Pilar; Roux, María Victoria; Dávalos, Juan Z.; Temprado, Manuel; Ribeiro da Silva, Manuel A. V.; Ribeiro da Silva, Maria Das Dores M. C.; Amaral, Luísa M. P. F.; Cabildo, Pilar; Claramunt, Rosa M.; Mó, Otilia; Yáñez, Manuel; Elguero, José

    The enthalpies of combustion, heat capacities, enthalpies of sublimation and enthalpies of formation of 2-methylbenzimidazole (2MeBIM) and 2-ethylbenzimidazole (2EtBIM) are reported and the results compared with those of benzimidazole itself (BIM). Theoretical estimates of the enthalpies of formation were obtained through the use of atom equivalent schemes. The necessary energies were obtained in single-point calculations at the B3LYP/6-311+G(d,p) on B3LYP/6-31G* optimized geometries. The comparison of experimental and calculated values of benzenes, imidazoles and benzimidazoles bearing H (unsubstituted), methyl and ethyl groups shows remarkable homogeneity. The energetic group contribution transferability is not followed, but either using it or adding an empirical interaction term, it is possible to generate an enormous collection of reasonably accurate data for different substituted heterocycles (pyrazole-derivatives, pyridine-derivatives, etc.) from the large amount of values available for substituted benzenes and those of the parent (pyrazole, pyridine) heterocycles.

  17. Diffusion and binding constants for acetylcholine derived from the falling phase of miniature endplate currents.

    PubMed Central

    Land, B R; Harris, W V; Salpeter, E E; Salpeter, M M

    1984-01-01

    In previous papers we studied the rising phase of a miniature endplate current (MEPC) to derive diffusion and forward rate constants controlling acetylcholine (AcCho) in the intact neuromuscular junction. The present study derives similar values (but with smaller error ranges) for these constants by including experimental results from the falling phase of the MEPC. We find diffusion to be 4 X 10(-6) cm2 s-1, slightly slower than free diffusion, forward binding to be 3.3 X 10(7) M-1 s-1, and the distance from an average release site to the nearest exit from the cleft to be 1.6 micron. We also estimate the back reaction rates. From our values we can accurately describe the shape of MEPCs under different conditions of receptor and esterase concentration. Since we suggest that unbinding is slower than isomerization, we further predict that there should be several short "closing flickers" during the total open time for an AcCho-ligated receptor channel. PMID:6584895

  18. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    PubMed

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.

  19. Accurate electronic and chemical properties of 3d transition metal oxides using a calculated linear response U and a DFT + U(V) method.

    PubMed

    Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R

    2015-04-14

    We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reactions energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean average error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.

  20. Heart rate time series characteristics for early detection of infections in critically ill patients.

    PubMed

    Tambuyzer, T; Guiza, F; Boonen, E; Meersseman, P; Vervenne, H; Hansen, T K; Bjerre, M; Van den Berghe, G; Berckmans, D; Aerts, J M; Meyfroidt, G

    2017-04-01

    It is difficult to make a distinction between inflammation and infection. Therefore, new strategies are required to allow accurate detection of infection. Here, we hypothesize that we can distinguish infected from non-infected ICU patients based on dynamic features of serum cytokine concentrations and heart rate time series. Serum cytokine profiles and heart rate time series of 39 patients were available for this study. The serum concentrations of ten cytokines were measured using blood sampled every 10 min between 2100 and 0600 hours. Heart rate was recorded every minute. Ten metrics were used to extract features from these time series to obtain an accurate classification of infected patients. The predictive power of the metrics derived from the heart rate time series was investigated using decision tree analysis. Finally, logistic regression methods were used to examine whether classification performance improved with inclusion of features derived from the cytokine time series. The AUC of a decision tree based on two heart rate features was 0.88. The model had good calibration (Hosmer-Lemeshow p value of 0.09). There was no significant additional value of adding static cytokine levels or cytokine time series information to the generated decision tree model. The results suggest that heart rate is a better marker for infection than information captured by cytokine time series when the exact stage of infection is not known. The predictive value of (expensive) biomarkers should always be weighed against the routinely monitored data, and such biomarkers have to demonstrate added value.

  1. An attempt to obtain a detailed declination chart from the United States magnetic anomaly map

    USGS Publications Warehouse

    Alldredge, L.R.

    1989-01-01

    Modern declination charts of the United States show almost no details. It was hoped that declination details could be derived from the information contained in the existing magnetic anomaly map of the United States. This could be realized only if all of the survey data were corrected to a common epoch, at which time a main-field vector model was known, before the anomaly values were computed. Because this was not done, accurate declination values cannot be determined. In spite of this conclusion, declination values were computed using a common main-field model for the entire United States to see how well they compared with observed values. The computed detailed declination values were found to compare less favourably with observed values of declination than declination values computed from the IGRF 1985 model itself.
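    For reference, declination is derived from the horizontal components of a field model. A minimal sketch with hypothetical component values (not taken from the anomaly map or any IGRF model):

```python
# Magnetic declination D is the angle between geographic north and the
# horizontal field vector, computed from the north (X) and east (Y)
# components of a field model.
import math

def declination_deg(x_nt, y_nt):
    """Declination in degrees east of north from X (north) and Y (east), in nT."""
    return math.degrees(math.atan2(y_nt, x_nt))

# Hypothetical component values, roughly typical of mid-latitudes:
print(f"D = {declination_deg(20000.0, 2000.0):.2f} deg east")
```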

  2. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. 
The most prominent approximations made in the analysis were addressed and fully investigated for their accuracy by using the three-dimensional electromagnetic simulation code MAFIA (Solution of Maxwell's Equations by the Finite Integration Algorithm) (refs. 3 and 4). We found that several approximations introduced significant error (ref. 5).

  3. The metallicity of M4: Accurate spectroscopic fundamental parameters for four giants

    NASA Technical Reports Server (NTRS)

    Drake, J. J.; Smith, V. V.; Suntzeff, N. B.

    1994-01-01

    High-quality spectra, covering the wavelength range 5480 to 7080 Å, have been obtained for four giant stars in the intermediate-metallicity CN-bimodal globular cluster M4 (NGC 6121). We have employed a model atmosphere analysis that is entirely independent from cluster parameters, such as distance, age, and reddening, in order to derive accurate values for the stellar parameters effective temperature, surface gravity, and microturbulence, and for the abundance of iron relative to the Sun, [Fe/H], and of calcium, [Ca/H], for each of the four stars. Detailed radiative transfer and statistical equilibrium calculations carried out for iron and calcium suggest that departures from local thermodynamic equilibrium are not significant for the purposes of our analysis. The spectroscopically derived effective temperatures for our program stars are hotter by about 200 K than existing photometric calibrations suggest. We conclude that this is due partly to the uncertain reddening of M4 and to the existing photometric temperature calibration for red giants being too cool by about 100 K. Comparison of our spectroscopic and existing photometric temperatures supports the prognosis of a significant east-west gradient in the reddening across M4. Our derived iron abundances are slightly higher than previous high-resolution studies suggested; the differences are most probably due to the different temperature scale and choice of microturbulent velocities adopted by earlier workers. The resulting value for the metallicity of M4 is [Fe/H]_M4 = -1.05 ± 0.15. Based on this result, we suggest that metallicities derived in previous high-dispersion globular cluster abundance analyses could be too low by 0.2 to 0.3 dex. Our calcium abundances suggest an enhancement of calcium, an alpha element, over iron, relative to the Sun, in M4 of [Ca/Fe] = +0.23.
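    The bracket notation used above is a logarithmic abundance ratio relative to the Sun. A short sketch with made-up number densities:

```python
# [Fe/H] = log10(N_Fe/N_H)_star - log10(N_Fe/N_H)_sun, so [Fe/H] = -1.05
# means the star's iron-to-hydrogen ratio is about 9% of the solar value.
import math

def bracket_abundance(n_fe_star, n_h_star, n_fe_sun, n_h_sun):
    """[Fe/H] from stellar and solar Fe and H number densities."""
    return math.log10(n_fe_star / n_h_star) - math.log10(n_fe_sun / n_h_sun)

# A star with one-tenth the solar Fe/H ratio (made-up numbers):
print(bracket_abundance(1.0, 1e5, 1.0, 1e4))  # ~ -1.0
```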

  4. Cerebral Aneurysm Clipping Surgery Simulation Using Patient-Specific 3D Printing and Silicone Casting.

    PubMed

    Ryan, Justin R; Almefty, Kaith K; Nakaji, Peter; Frakes, David H

    2016-04-01

    Neurosurgery simulator development is growing as practitioners recognize the need for improved instructional and rehearsal platforms to improve procedural skills and patient care. In addition, changes in practice patterns have decreased the volume of specific cases, such as aneurysm clippings, which reduces the opportunity for operating room experience. The authors developed a hands-on, dimensionally accurate model for aneurysm clipping using patient-derived anatomic data and three-dimensional (3D) printing. Design of the model focused on reproducibility as well as adaptability to new patient geometry. A modular, reproducible, and patient-derived medical simulacrum was developed for medical learners to practice aneurysmal clipping procedures. Various forms of 3D printing were used to develop a geometrically accurate cranium and vascular tree featuring 9 patient-derived aneurysms. 3D printing in conjunction with elastomeric casting was leveraged to achieve a patient-derived brain model with tactile properties not yet available from commercial 3D printing technology. An educational pilot study was performed to gauge simulation efficacy. Through the novel manufacturing process, a patient-derived simulacrum was developed for neurovascular surgical simulation. A follow-up qualitative study suggests potential to enhance current educational programs; assessments support the efficacy of the simulacrum. The proposed aneurysm clipping simulator has the potential to improve learning experiences in the surgical environment. 3D printing and elastomeric casting can produce patient-derived models for a dynamic learning environment that add value to surgical training and preparation. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. An assessment of the near-surface accuracy of the international geomagnetic reference field 1980 model of the main geomagnetic field

    USGS Publications Warehouse

    Peddie, N.W.; Zunde, A.K.

    1985-01-01

    The new International Geomagnetic Reference Field (IGRF) model of the main geomagnetic field for 1980 is based heavily on measurements from the MAGSAT satellite survey. Assessment of the accuracy of the new model, as a description of the main field near the Earth's surface, is important because the accuracy of models derived from satellite data can be adversely affected by the magnetic field of electric currents in the ionosphere and the auroral zones. Until now, statements about its accuracy have been based on the 6 published assessments of the 2 proposed models from which it was derived. However, those assessments were either regional in scope or were based mainly on preliminary or extrapolated data. Here we assess the near-surface accuracy of the new model by comparing it with values for 1980 derived from annual means from 69 magnetic observatories, and by comparing it with WC80, a model derived from near-surface data. The comparison with observatory-derived data shows that the new model describes the field at the 69 observatories about as accurately as would a model derived solely from near-surface data. The comparison with WC80 shows that the 2 models agree closely in their description of D and I near the surface. These comparisons support the proposition that the new IGRF 1980 main-field model is a generally accurate description of the main field near the Earth's surface in 1980. © 1985.

  6. Velocity selection in coupled-map lattices

    NASA Astrophysics Data System (ADS)

    Parekh, Nita; Puri, Sanjay

    1993-02-01

    We investigate the phenomenon of velocity selection for traveling wave fronts in a class of coupled-map lattices, derived by discretizations of the Fisher equation [Ann. Eugenics 7, 355 (1937)]. We find that the velocity selection can be understood in terms of a discrete analog of the marginal-stability hypothesis. A perturbative approach also enables us to estimate the selected velocity accurately for small values of the discretization mesh sizes.
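    The setup above can be sketched in a few lines: discretize the Fisher equation u_t = u_xx + u(1 - u) on a lattice and measure the speed of the front invading the unstable state u = 0, whose continuum marginal-stability value is 2. The mesh sizes and run length below are assumed illustrative choices, not the parameters of the paper.

```python
# Coupled-map-lattice sketch of the Fisher equation (explicit Euler in
# time, second-difference Laplacian in space, reflecting boundaries).
dt, dx, n, steps = 0.05, 0.5, 400, 1000
u = [1.0 if i < 10 else 0.0 for i in range(n)]

def front(u):
    """Rightmost lattice site still above the half-height of the front."""
    return max(i for i, v in enumerate(u) if v > 0.5)

x0 = front(u)
for _ in range(steps):
    lap = [(u[min(i + 1, n - 1)] - 2 * u[i] + u[max(i - 1, 0)]) / dx ** 2
           for i in range(n)]
    u = [ui + dt * (l + ui * (1.0 - ui)) for ui, l in zip(u, lap)]
x1 = front(u)
v = (x1 - x0) * dx / (steps * dt)
print(f"measured front speed ~ {v:.2f} (continuum marginal-stability value: 2)")
```

    For small dt and dx the measured speed approaches the continuum value 2; discretization shifts it slightly, which is the effect the perturbative approach in the abstract estimates.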

  7. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. 
The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.

  8. Suomi NPP VIIRS solar diffuser screen transmittance model and its applications.

    PubMed

    Lei, Ning; Xiong, Xiaoxiong; Mcintire, Jeff

    2017-11-01

    The visible infrared imaging radiometer suite on the Suomi National Polar-orbiting Partnership satellite calibrates its reflective solar bands through observations of a sunlit solar diffuser (SD) panel. Sunlight passes through a perforated plate, referred to as the SD screen, before reaching the SD. It is critical to know whether the SD screen transmittance measured prelaunch is accurate. Several factors such as misalignments of the SD panel and the measurement apparatus could lead to errors in the measured transmittance and thus adversely impact on-orbit calibration quality through the SD. We develop a mathematical model to describe the transmittance as a function of the angles that incident light makes with the SD screen, and apply the model to fit the prelaunch measured transmittance. The results reveal that the model does not reproduce the measured transmittance unless the size of the apertures in the SD screen is quite different from the design value. We attribute the difference to the orientation alignment errors for the SD panel and the measurement apparatus. We model the alignment errors and apply our transmittance model to fit the prelaunch transmittance to retrieve the "true" transmittance. To use this model correctly, we also examine the finite source size effect on the transmittance. Furthermore, we compare the product of the retrieved "true" transmittance and the prelaunch SD bidirectional reflectance distribution function (BRDF) value to the value derived from on-orbit data to determine whether the prelaunch SD BRDF value is relatively accurate. The model is significant in that it can evaluate whether the SD screen transmittance measured prelaunch is accurate, and it can help retrieve the true transmittance from measurements with errors, consequently improving the accuracy of the sensor data product by a corresponding amount.

  9. Quantitative analysis of hepatic fat fraction by single-breath-holding MR spectroscopy with T₂ correction: phantom and clinical study with histologic assessment.

    PubMed

    Hayashi, Norio; Miyati, Tosiaki; Minami, Takashi; Takeshita, Yumie; Ryu, Yasuji; Matsuda, Tsuyoshi; Ohno, Naoki; Hamaguchi, Takashi; Kato, Kenichiro; Takamura, Toshinari; Matsui, Osamu

    2013-01-01

    This study investigated the accuracy of the hepatic fat fraction obtained by single-breath-holding magnetic resonance spectroscopy (MRS) with T2 correction. Single-voxel proton MRS was performed with several TE values, and the fat fraction was determined with and without T2 correction. MRS was also performed with use of the point-resolved spectroscopy sequence in single breath holding. The T2 values of both water and fat were determined separately at the same time, and the effect of T2 on the fat fraction was corrected. In addition, MRS-based fat fractions were compared with the degree of hepatic steatosis (HS) by liver biopsy in human subjects. With T2 correction, the MRI-derived fat fractions were in good agreement with the fat fractions in all phantoms, but the fat fractions were overestimated without T2 correction. R2 values were in good agreement with the preset iron concentrations in the phantoms. The MRI-derived fat fraction was well correlated with the degree of HS. Iron deposited in the liver affects the signal strength when proton MRS is used for detection of the fat signal in the liver. However, the fat signal can be evaluated more accurately when the T2 correction is applied. Breath-holding MRS minimizes the respiratory motion, and it can be more accurate in the quantification of the hepatic fat fraction.
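    The T2 correction idea can be sketched with a mono-exponential decay S(TE) = S0 · exp(-TE/T2): fit water and fat separately from two echo times and form the fat fraction from the back-extrapolated TE = 0 amplitudes instead of the measured signals. The echo times and signal values below are hypothetical.

```python
# Two-point T2 fit and T2-corrected fat fraction (hypothetical numbers).
import math

def s0_and_t2(te1, s1, te2, s2):
    """Solve S0 and T2 from two points of S(TE) = S0 * exp(-TE/T2)."""
    t2 = (te2 - te1) / math.log(s1 / s2)
    return s1 * math.exp(te1 / t2), t2

# Hypothetical signals (a.u.) at TE = 20 and 40 ms:
s0_w, t2_w = s0_and_t2(20.0, 80.0, 40.0, 50.0)  # water decays faster here
s0_f, t2_f = s0_and_t2(20.0, 30.0, 40.0, 25.0)
ff_corr = s0_f / (s0_f + s0_w)   # T2-corrected fat fraction
ff_raw = 30.0 / (30.0 + 80.0)    # uncorrected, from TE = 20 ms signals
print(f"corrected {ff_corr:.3f} vs uncorrected {ff_raw:.3f}")
```

    With these numbers the uncorrected fraction is biased because water and fat have different T2, which is exactly the effect the correction removes.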

  10. Theoretical Electric Dipole Moments and Dissociation Energies for the Ground States of GaH-BrH

    NASA Technical Reports Server (NTRS)

    Pettersson, Lars G. M.; Langhoff, Stephen R.

    1986-01-01

    Reliable experimental dipole moments are available for the ground states of SeH and BrH, whereas no values have been reported for GaH and AsH. A recently reported experimental dipole moment for GeH of 1.24 ± 0.01 D has been seriously questioned, and a much lower value of 0.1 ± 0.05 D suggested. In this work, we report accurate theoretical dipole moments, dipole derivatives, dissociation energies, and spectroscopic constants (r_e, omega_e) for the ground states of GaH through BrH.

  11. Scale-model charge-transfer technique for measuring enhancement factors

    NASA Technical Reports Server (NTRS)

    Kositsky, J.; Nanevicz, J. E.

    1991-01-01

    Determination of aircraft electric field enhancement factors is crucial when using airborne field mill (ABFM) systems to accurately measure electric fields aloft. SRI used the scale model charge transfer technique to determine enhancement factors of several canonical shapes and a scale model Learjet 36A. The measured values for the canonical shapes agreed with known analytic solutions within about 6 percent. The laboratory determined enhancement factors for the aircraft were compared with those derived from in-flight data gathered by a Learjet 36A outfitted with eight field mills. The values agreed to within experimental error (approx. 15 percent).

  12. Accurate procedure for deriving UTI at a submilliarcsecond accuracy from Greenwich Sidereal Time or from the stellar angle

    NASA Astrophysics Data System (ADS)

    Capitaine, N.; Gontier, A.-M.

    1993-08-01

    Present observations using modern astrometric techniques are supposed to provide the Earth orientation parameters, and therefore UT1, with an accuracy better than ±1 mas. In practice, UT1 is determined through the intermediary of Greenwich Sidereal Time (GST), using both the conventional relationship between Greenwich Mean Sidereal Time (GMST) and UT1 (Aoki et al. 1982) and the so-called "equation of the equinoxes" limited to the first-order terms with respect to the nutation quantities. This highly complex relation between sidereal time and UT1 is not accurate at the milliarcsecond level, which gives rise to spurious terms of milliarcsecond amplitude in the derived UT1. A more complete relationship between GST and UT1 has been recommended by Aoki & Kinoshita (1983) and Aoki (1991), taking into account the second-order terms in the difference between GST and GMST, the largest one having an amplitude of 2.64 mas and an 18.6-yr period. This paper explains how this complete expansion of GST implicitly uses the concept of "nonrotating origin" (NRO) as proposed by Guinot in 1979 and would, therefore, provide a more accurate value of UT1 and consequently of the Earth's angular velocity. This paper shows, moreover, that such a procedure would be simplified and conceptually clarified by the explicit use of the NRO as previously proposed (Guinot 1979; Capitaine et al. 1986). The two corresponding options (implicit or explicit use of the NRO) are shown to be equivalent for defining the specific Earth's angle of rotation and then UT1. The consequences of using such an accurate procedure, which has been proposed in the new IERS standards (McCarthy 1992a) instead of the usual one, are estimated for the practical derivation of UT1.
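    The conventional GMST-UT1 relationship referenced above (Aoki et al. 1982) is, to the best of our understanding, the standard cubic polynomial in Julian centuries of UT1 from J2000.0; the sketch below reproduces that conventional expression only, not the paper's refined NRO-based procedure.

```python
# IAU 1982 (Aoki et al.) expression for Greenwich Mean Sidereal Time at
# 0h UT1, as seconds of sidereal time; t is Julian centuries of UT1
# elapsed since J2000.0.
def gmst_0h_ut1_seconds(t_centuries):
    """GMST at 0h UT1, in seconds of sidereal time (mod 86400)."""
    t = t_centuries
    gmst = (24110.54841
            + 8640184.812866 * t
            + 0.093104 * t * t
            - 6.2e-6 * t ** 3)
    return gmst % 86400.0

# At J2000.0 (t = 0) this is 24110.54841 s, i.e. about 6h 41m 50.5s.
print(gmst_0h_ut1_seconds(0.0))
```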

  13. Higher-Order Hamiltonian Model for Unidirectional Water Waves

    NASA Astrophysics Data System (ADS)

    Bona, J. L.; Carvajal, X.; Panthee, M.; Scialom, M.

    2018-04-01

    Formally second-order correct, mathematical descriptions of long-crested water waves propagating mainly in one direction are derived. These equations are analogous to the first-order approximations of KdV- or BBM-type. The advantage of these more complex equations is that their solutions corresponding to physically relevant initial perturbations of the rest state may be accurate on a much longer timescale. The initial value problem for the class of equations that emerges from our derivation is then considered. A local well-posedness theory is straightforwardly established by a contraction mapping argument. A subclass of these equations possess a special Hamiltonian structure that implies the local theory can be continued indefinitely.

  14. Survival curves of Listeria monocytogenes in chorizos modeled with artificial neural networks.

    PubMed

    Hajmeer, M; Basheer, I; Cliver, D O

    2006-09-01

    Using artificial neural networks (ANNs), a highly accurate model was developed to simulate survival curves of Listeria monocytogenes in chorizos as affected by the initial water activity (a(w0)) of the sausage formulation, and the temperature (T) and air inflow velocity (F) of the environment where the sausages are stored. The ANN-based survival model (R(2)=0.970) outperformed the regression-based cubic model (R(2)=0.851), and as such was used to derive other models (using regression) that allow prediction of the times needed to drop the count by 1, 2, 3, and 4 logs (i.e., nD-values, n=1, 2, 3, 4). The nD-value regression models almost perfectly predicted the various times derived from a number of simulated survival curves exhibiting a wide variety of the operating conditions (R(2)=0.990-0.995). The nD-values were found to decrease with decreasing a(w0), and increasing T and F. The influence of a(w0) on nD-values seems to become more significant at some critical value of a(w0), below which the variation is negligible (0.93 for 1D-value, 0.90 for 2D-value, and <0.85 for 3D- and 4D-values). There is greater influence of storage T and F on 3D- and 4D-values than on 1D- and 2D-values.
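    An nD-value is simply the time at which a survival curve has dropped n logs from its starting count. A minimal sketch with a made-up curve (not the study's data), reading the value off by linear interpolation:

```python
# Time for an n-log reduction, interpolated from a tabulated log-count curve.
def nd_value(times, log_counts, n):
    """Time at which the count has dropped by n logs from its initial value."""
    target = log_counts[0] - n
    for (t0, c0), (t1, c1) in zip(zip(times, log_counts),
                                  zip(times[1:], log_counts[1:])):
        if c1 <= target <= c0:
            return t0 + (c0 - target) * (t1 - t0) / (c0 - c1)
    raise ValueError("curve never drops by n logs")

# Hypothetical log CFU/g over storage days:
days = [0, 10, 20, 30, 40]
logs = [7.0, 6.2, 5.1, 3.9, 2.8]
print(f"2D-value ~ {nd_value(days, logs, 2):.1f} days")
```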

  15. Multicenter Comparison of Machine Learning Methods and Conventional Regression for Predicting Clinical Deterioration on the Wards.

    PubMed

    Churpek, Matthew M; Yuen, Trevor C; Winslow, Christopher; Meltzer, David O; Kattan, Michael W; Edelson, Dana P

    2016-02-01

    Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. This was an observational cohort study in five hospitals, from November 2008 until January 2013, of hospitalized ward patients, with no interventions. Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning Score (MEWS), a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC, 0.80 [95% CI, 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC, 0.77 vs 0.74; p < 0.01), and all models were more accurate than the MEWS (AUC, 0.70 [95% CI, 0.70-0.70]). In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.
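    The discrete-time survival framing can be sketched as follows: each patient's stay is expanded into one row per time window, labeled 1 only in the window where the combined outcome occurs. This is a generic illustration with hypothetical data, not the study's feature set or matching scheme.

```python
# Expand patient stays into person-period rows for discrete-time survival.
def person_period_rows(patients):
    """patients: list of (patient_id, n_windows, event_in_last_window)."""
    rows = []
    for pid, n, event in patients:
        for w in range(n):
            label = 1 if (event and w == n - 1) else 0
            rows.append((pid, w, label))
    return rows

rows = person_period_rows([("a", 3, True), ("b", 2, False)])
print(rows)
# [('a', 0, 0), ('a', 1, 0), ('a', 2, 1), ('b', 0, 0), ('b', 1, 0)]
```

    Any classifier (logistic regression, random forest, etc.) can then be fitted to these rows, which is what makes the framework model-agnostic.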

  16. Modeling marbled murrelet (Brachyramphus marmoratus) habitat using LiDAR-derived canopy data

    USGS Publications Warehouse

    Hagar, Joan C.; Eskelson, Bianca N.I.; Haggerty, Patricia K.; Nelson, S. Kim; Vesely, David G.

    2014-01-01

    LiDAR (Light Detection And Ranging) is an emerging remote-sensing tool that can provide fine-scale data describing vertical complexity of vegetation relevant to species that are responsive to forest structure. We used LiDAR data to estimate occupancy probability for the federally threatened marbled murrelet (Brachyramphus marmoratus) in the Oregon Coast Range of the United States. Our goal was to address the need identified in the Recovery Plan for a more accurate estimate of the availability of nesting habitat by developing occupancy maps based on refined measures of nest-strand structure. We used murrelet occupancy data collected by the Bureau of Land Management Coos Bay District, and canopy metrics calculated from discrete return airborne LiDAR data, to fit a logistic regression model predicting the probability of occupancy. Our final model for stand-level occupancy included distance to coast, and 5 LiDAR-derived variables describing canopy structure. With an area under the curve value (AUC) of 0.74, this model had acceptable discrimination and fair agreement (Cohen's κ = 0.24), especially considering that all sites in our sample were regarded by managers as potential habitat. The LiDAR model provided better discrimination between occupied and unoccupied sites than did a model using variables derived from Gradient Nearest Neighbor maps that were previously reported as important predictors of murrelet occupancy (AUC = 0.64, κ = 0.12). We also evaluated LiDAR metrics at 11 known murrelet nest sites. Two LiDAR-derived variables accurately discriminated nest sites from random sites (average AUC = 0.91). LiDAR provided a means of quantifying 3-dimensional canopy structure with variables that are ecologically relevant to murrelet nesting habitat, and have not been as accurately quantified by other mensuration methods.

  17. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069

  18. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
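    The trade-off the two records above formalize can be seen in a toy scalar example (not the paper's algorithm): with the update theta += mu * (y - theta) on noisy observations of a constant, a larger learning rate mu converges faster but settles at a larger steady-state error variance.

```python
# Toy illustration of the learning-rate trade-off: steady-state error
# grows with mu, while convergence time shrinks (roughly 1/mu steps).
import random

def steady_state_mse(mu, theta_true=5.0, n=4000, sigma=1.0, seed=1):
    rng = random.Random(seed)
    theta, tail = 0.0, []
    for k in range(n):
        y = theta_true + rng.gauss(0.0, sigma)  # noisy observation
        theta += mu * (y - theta)               # adaptive update
        if k >= n // 2:                         # converged half of the run
            tail.append((theta - theta_true) ** 2)
    return sum(tail) / len(tail)

fast, slow = steady_state_mse(0.5), steady_state_mse(0.01)
print(f"steady-state MSE: mu=0.5 -> {fast:.3f}, mu=0.01 -> {slow:.4f}")
```

    For this scalar filter the steady-state variance is approximately mu * sigma^2 / (2 - mu), so the large rate pays roughly two orders of magnitude more error for its speed; the calibration algorithm in the abstract chooses mu analytically instead of by such trial and error.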

  19. Use of the DISST Model to Estimate the HOMA and Matsuda Indexes Using Only a Basal Insulin Assay

    PubMed Central

    Docherty, Paul D.; Chase, J. Geoffrey

    2014-01-01

    Background: It is hypothesized that early detection of reduced insulin sensitivity (SI) could prompt intervention that may reduce the considerable financial strain type 2 diabetes mellitus (T2DM) places on global health care. Reduction of the cost of already inexpensive SI metrics such as the Matsuda and HOMA indexes would enable more widespread, economically feasible use of these metrics for screening. The goal of this research was to determine a means of reducing the number of insulin samples and therefore the cost required to provide an accurate Matsuda Index value. Method: The Dynamic Insulin Sensitivity and Secretion Test (DISST) model was used with the glucose and basal insulin measurements from an Oral Glucose Tolerance Test (OGTT) to predict patient insulin responses. The insulin response to the OGTT was determined via population based regression analysis that incorporated the 60-minute glucose and basal insulin values. Results: The proposed method derived accurate and precise Matsuda Indices as compared to the fully sampled Matsuda (R = .95) using only the basal assay insulin-level data and 4 glucose measurements. Using a model employing the basal insulin also allows for determination of the 1-day HOMA value. Conclusion: The DISST model was successfully modified to allow for the accurate prediction of an individual's insulin response to the OGTT. In turn, this enabled highly accurate and precise estimation of a Matsuda Index using only the glucose and basal insulin assays. As insulin assays account for the majority of the cost of the Matsuda Index, this model offers a significant reduction in assay cost. PMID:24876431
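    The two indexes named above have standard published forms (glucose in mg/dL, insulin in uU/mL); the DISST-based insulin-response estimation itself is not reproduced here, and the inputs below are made-up example values.

```python
# Standard forms of the two insulin-sensitivity indexes discussed above.
import math

def homa_ir(g0, i0):
    """HOMA-IR from fasting glucose (mg/dL) and fasting insulin (uU/mL)."""
    return g0 * i0 / 405.0

def matsuda(g0, i0, g_mean, i_mean):
    """Matsuda index from fasting and mean OGTT glucose/insulin values."""
    return 10000.0 / math.sqrt(g0 * i0 * g_mean * i_mean)

# Made-up example values:
print(f"HOMA-IR = {homa_ir(90, 10):.2f}, Matsuda = {matsuda(90, 10, 120, 60):.2f}")
```

    The cost argument in the abstract follows directly from these formulas: the Matsuda index normally needs several insulin assays to form the mean OGTT insulin, which is what the DISST-based prediction replaces.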

  20. Validity of Predicting Left Ventricular End Systolic Pressure Changes Following An Acute Bout of Exercise

    PubMed Central

    Kappus, Rebecca M.; Ranadive, Sushant M.; Yan, Huimin; Lane, Abbi D.; Cook, Marc D.; Hall, Grenita; Harvey, I. Shevon; Wilund, Kenneth R.; Woods, Jeffrey A.; Fernhall, Bo

    2012-01-01

    Objective: Left ventricular end systolic pressure (LV ESP) is important in assessing left ventricular performance. LV ESP is usually derived from prediction equations. It is unknown whether these equations are accurate at rest or following exercise in a young, healthy population. Design: We compared measured LV ESP versus LV ESP values from the prediction equations at rest, 15 minutes and 30 minutes following peak aerobic exercise in 60 participants. Methods: LV ESP was obtained by applanation tonometry at rest, 15 minutes post and 30 minutes post peak cycle exercise. Results: Measured LV ESP was significantly lower (p<0.05) at all time points in comparison to the two calculated values. Measured LV ESP decreased significantly from rest at both the post15 and post30 time points (p<0.05) and changed differently in comparison to the calculated values (significant interaction; p<0.05). The two LV ESP equations were also significantly different from each other (p<0.05) and changed differently over time (significant interaction; p<0.05). Conclusions: These data indicate that the two prediction equations commonly used did not accurately predict either resting or post exercise LV ESP in a young, healthy population. Thus, LV ESP needs to be individually determined in young healthy participants. Non-invasive measurement through applanation tonometry appears to allow for a more accurate determination of LV ESP. PMID:22721862

  1. Validity of predicting left ventricular end systolic pressure changes following an acute bout of exercise.

    PubMed

    Kappus, Rebecca M; Ranadive, Sushant M; Yan, Huimin; Lane, Abbi D; Cook, Marc D; Hall, Grenita; Harvey, I Shevon; Wilund, Kenneth R; Woods, Jeffrey A; Fernhall, Bo

    2013-01-01

    Left ventricular end systolic pressure (LV ESP) is important in assessing left ventricular performance and is usually derived from prediction equations. It is unknown whether these equations are accurate at rest or following exercise in a young, healthy population. Measured LV ESP vs. LV ESP values from the prediction equations were compared at rest, 15 min and 30 min following peak aerobic exercise in 60 participants. LV ESP was obtained by applanation tonometry at rest, 15 min post and 30 min post peak cycle exercise. Measured LV ESP was significantly lower (p<0.05) at all time points in comparison to the two calculated values. Measured LV ESP decreased significantly from rest at both the post15 and post30 time points (p<0.05) and changed differently in comparison to the calculated values (significant interaction; p<0.05). The two LV ESP equations were also significantly different from each other (p<0.05) and changed differently over time (significant interaction; p<0.05). The two commonly used prediction equations did not accurately predict either resting or post exercise LV ESP in a young, healthy population. Thus, LV ESP needs to be individually determined in young, healthy participants. Non-invasive measurement through applanation tonometry appears to allow for a more accurate determination of LV ESP. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
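    The record does not name the two prediction equations it tests; one widely cited approximation in the literature estimates end-systolic pressure as 0.9 × brachial systolic pressure, so treat both the coefficient and the paired data below as assumptions for illustration. A sketch of comparing such a prediction against tonometry-measured values:

```python
def esp_from_sbp(sbp_mmHg, k=0.9):
    """Regression-type approximation ESP ~ k * brachial SBP; k = 0.9 is the
    commonly cited coefficient, used here as an assumption."""
    return k * sbp_mmHg

def mean_bias(measured, predicted):
    """Mean (predicted - measured) over paired observations."""
    diffs = [p - m for m, p in zip(measured, predicted)]
    return sum(diffs) / len(diffs)

# hypothetical tonometry-measured resting ESP and brachial SBP for 5 subjects
measured = [95.0, 100.0, 92.0, 98.0, 105.0]    # mmHg
sbp      = [118.0, 124.0, 115.0, 120.0, 130.0] # mmHg
predicted = [esp_from_sbp(s) for s in sbp]
```

    A positive mean bias, as in this toy data, would mirror the record's finding that the calculated values exceed the measured LV ESP.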

  2. Magnetic Moment Quantifications of Small Spherical Objects in MRI

    PubMed Central

    Cheng, Yu-Chung N.; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin

    2014-01-01

    Purpose The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Methods Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5 T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Results Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. Conclusion An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. PMID:25490517

  3. Magnetic moment quantifications of small spherical objects in MRI.

    PubMed

    Cheng, Yu-Chung N; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin

    2015-07-01

    The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values, and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Estimating mineral abundances of clay and gypsum mixtures using radiative transfer models applied to visible-near infrared reflectance spectra

    NASA Astrophysics Data System (ADS)

    Robertson, K. M.; Milliken, R. E.; Li, S.

    2016-10-01

    Quantitative mineral abundances of lab derived clay-gypsum mixtures were estimated using a revised Hapke VIS-NIR and Shkuratov radiative transfer model. Montmorillonite-gypsum mixtures were used to test the effectiveness of the model in distinguishing between subtle differences in minor absorption features that are diagnostic of mineralogy in the presence of strong H2O absorptions that are not always diagnostic of distinct phases or mineral abundance. The optical constants (k-values) for both endmembers were determined from bi-directional reflectance spectra measured in RELAB as well as on an ASD FieldSpec3 in a controlled laboratory setting. Multiple size fractions were measured in order to derive a single k-value from optimization of the optical path length in the radiative transfer models. It is shown that with careful experimental conditions, optical constants can be accurately determined from powdered samples using a field spectrometer, consistent with previous studies. Variability in the montmorillonite hydration level increased the uncertainties in the derived k-values, but estimated modal abundances for the mixtures were still within 5% of the measured values. Results suggest that the Hapke model works well in distinguishing between hydrated phases that have overlapping H2O absorptions and it is able to detect gypsum and montmorillonite in these simple mixtures where they are present at levels of ∼10%. Care must be taken however to derive k-values from a sample with appropriate H2O content relative to the modeled spectra. These initial results are promising for the potential quantitative analysis of orbital remote sensing data of hydrated minerals, including more complex clay and sulfate assemblages such as mudstones examined by the Curiosity rover in Gale crater.
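    A stripped-down version of the forward/inverse workflow the record describes can be sketched with the isotropic Hapke reflectance form (no phase function or opposition effect) and linear mixing of single-scattering albedos; the albedo and abundance values below are illustrative assumptions, not the study's derived k-values:

```python
import math

def h_func(x, w):
    """Approximate Chandrasekhar H-function used in Hapke's IMSA model."""
    return (1.0 + 2.0 * x) / (1.0 + 2.0 * x * math.sqrt(1.0 - w))

def hapke_refl(w, mu0, mu):
    """Bidirectional reflectance for single-scattering albedo w, isotropic
    scatterers, no opposition effect (deliberately simplified Hapke form)."""
    return (w / 4.0) * mu0 / (mu0 + mu) * h_func(mu0, w) * h_func(mu, w)

def invert_w(r, mu0, mu):
    """Recover w from a measured reflectance by bisection (r is monotonic in w)."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if hapke_refl(mid, mu0, mu) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu0, mu = math.cos(math.radians(30.0)), 1.0   # incidence 30 deg, emission 0 deg
# intimate mixture: single-scattering albedos combine linearly by cross-section fraction
w_mont, w_gyp, f_gyp = 0.85, 0.97, 0.10       # hypothetical endmember albedos, ~10% gypsum
w_mix = (1.0 - f_gyp) * w_mont + f_gyp * w_gyp
r_mix = hapke_refl(w_mix, mu0, mu)
```

    In the full model the inversion is done per wavelength, and abundances follow from unmixing the recovered albedo spectrum against the endmember k-value-derived albedos.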

  5. Similarity solution of the Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Lockington, D. A.; Parlange, J.-Y.; Parlange, M. B.; Selker, J.

    Similarity transforms of the Boussinesq equation in a semi-infinite medium are available when the boundary conditions are a power of time. The Boussinesq equation is reduced from a partial differential equation to a boundary-value problem. Chen et al. [Trans Porous Media 1995;18:15-36] use a hodograph method to derive an integral equation formulation of the new differential equation which they solve by numerical iteration. In the present paper, the convergence of their scheme is improved such that numerical iteration can be avoided for all practical purposes. However, a simpler analytical approach is also presented which is based on Shampine's transformation of the boundary value problem to an initial value problem. This analytical approximation is remarkably simple and yet more accurate than the analytical hodograph approximations.

  6. Unbiased reduced density matrices and electronic properties from full configuration interaction quantum Monte Carlo.

    PubMed

    Overy, Catherine; Booth, George H; Blunt, N S; Shepherd, James J; Cleland, Deidre; Alavi, Ali

    2014-12-28

    Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
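    The replica idea at the heart of this record — products of quantities sampled from two independent populations are unbiased, while products formed within one stochastic population are not — can be illustrated on a toy estimator of μ² (here μ = 0, so any positive offset is pure bias):

```python
import random

random.seed(42)

def sample_mean(n):
    """Mean of n draws from a zero-mean unit-variance Gaussian."""
    return sum(random.gauss(0.0, 1.0) for _ in range(n)) / n

n, trials = 50, 2000
single, replica = 0.0, 0.0
for _ in range(trials):
    m1 = sample_mean(n)          # population 1
    m2 = sample_mean(n)          # independent replica population
    single  += m1 * m1           # biased: E[m1^2] = mu^2 + Var(m1)
    replica += m1 * m2           # unbiased: E[m1 * m2] = mu^2
single  /= trials
replica /= trials
```

    The single-population average sits near Var(m1) = 1/n = 0.02 rather than the true value 0, while the replica product averages to ~0; the same cancellation of correlated fluctuations underlies the unbiased RDM sampling.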

  7. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantized in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
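    Richardson extrapolation, used in the record to isolate truncation error, combines two discretizations to cancel the leading error term: for a method of order p, A ≈ (2^p A(h/2) − A(h)) / (2^p − 1). A minimal sketch on a second-order central difference:

```python
import math

def d_central(f, x, h):
    """Second-order central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d_richardson(f, x, h, p=2):
    """Combine D(h) and D(h/2) to cancel the leading O(h^p) error term."""
    return (2**p * d_central(f, x, h / 2.0) - d_central(f, x, h)) / (2**p - 1)

x, h = 1.0, 0.1
exact = math.cos(x)                       # true derivative of sin at x = 1
approx = d_richardson(math.sin, x, h)     # fourth-order accurate result
```

    The extrapolated value is more accurate than either input, which is what lets the residual truncation error of the base method be measured in each norm.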

  8. Unbiased reduced density matrices and electronic properties from full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Overy, Catherine; Blunt, N. S.; Shepherd, James J.

    2014-12-28

    Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.

  9. Parametric optimization of optical signal detectors employing the direct photodetection scheme

    NASA Astrophysics Data System (ADS)

    Kirakosiants, V. E.; Loginov, V. A.

    1984-08-01

    The problem of optimization of the optical signal detection scheme parameters is addressed using the concept of a receiver with direct photodetection. An expression is derived which accurately approximates the field of view (FOV) values obtained by a direct computer minimization of the probability of missing a signal; optimum values of the receiver FOV were found for different atmospheric conditions characterized by the number of coherence spots and the intensity fluctuations of a plane wave. It is further pointed out that the criterion presented can be possibly used for parametric optimization of detectors operating in accordance with the Neumann-Pearson criterion.

  10. Noniterative accurate algorithm for the exact exchange potential of density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2007-10-15

    An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to a suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies with errors at least four orders of magnitude smaller than known in the literature.

  11. Determination of deuteron quadrupole moment from calculations of the electric field gradient in D2 and HD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavanello, Michele; Tung Weicheng; Adamowicz, Ludwik

    2010-04-15

    We have carried out an accurate determination of the quadrupole moment of the deuteron nucleus. The evaluation of the constant is achieved by combining high-accuracy Born-Oppenheimer calculations of the electric field gradient at the nucleus in the H2 molecule with spectroscopic measurements of the quadrupolar splitting in D2 and HD. The derived value is Q = 0.285783(30) fm².

  12. Multiple types of synchronization analysis for discontinuous Cohen-Grossberg neural networks with time-varying delays.

    PubMed

    Li, Jiarong; Jiang, Haijun; Hu, Cheng; Yu, Zhiyong

    2018-03-01

    This paper is devoted to the exponential synchronization, finite-time synchronization, and fixed-time synchronization of Cohen-Grossberg neural networks (CGNNs) with discontinuous activations and time-varying delays. A discontinuous feedback controller and a novel adaptive feedback controller are designed to realize global exponential synchronization, finite-time synchronization, and fixed-time synchronization by adjusting the values of the parameter ω in the controller. Furthermore, the settling time of the fixed-time synchronization derived in this paper is less conservative and more accurate. Finally, some numerical examples are provided to show the effectiveness and flexibility of the results derived in this paper. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. A system of equations to approximate the pharmacokinetic parameters of lacosamide at steady state from one plasma sample.

    PubMed

    Cawello, Willi; Schäfer, Carina

    2014-08-01

    Frequent plasma sampling to monitor the pharmacokinetic (PK) profile of antiepileptic drugs (AEDs) is invasive, costly and time consuming. For drugs with a well-defined PK profile, such as the AED lacosamide, equations can accurately approximate PK parameters from one steady-state plasma sample. Equations were derived to approximate steady-state peak and trough lacosamide plasma concentrations (Cpeak,ss and Ctrough,ss, respectively) and area under the concentration-time curve during the dosing interval (AUCτ,ss) from one plasma sample. Lacosamide (ka: ∼2 h(-1); ke: ∼0.05 h(-1), corresponding to a half-life of 13 h) was calculated to reach Cpeak,ss after ∼1 h (tmax,ss). Equations were validated by comparing approximations to reference PK parameters obtained from single plasma samples drawn 3-12 h following lacosamide administration, using data from a double-blind, placebo-controlled, parallel-group PK study. Values of relative bias (accuracy) between -15% and +15%, and root mean square error (RMSE) values ≤15% (precision), were considered acceptable for validation. Thirty-five healthy subjects (12 young males, 11 elderly males, 12 elderly females) received lacosamide 100 mg/day for 4.5 days. Equation-derived PK values were compared to reference mean Cpeak,ss, Ctrough,ss and AUCτ,ss values. Equation-derived PK data had a precision of 6.2% and accuracies of -8.0%, 2.9%, and -0.11%, respectively. Equation-derived versus reference PK values for individual samples obtained 3-12 h after lacosamide administration showed a correlation (R2) range of 0.88-0.97 for AUCτ,ss. The correlation range for Cpeak,ss and Ctrough,ss was 0.65-0.87. Error analyses for individual sample comparisons were independent of time. The derived equations approximated lacosamide Cpeak,ss, Ctrough,ss and AUCτ,ss from one steady-state plasma sample within the validation range. Approximated PK parameters were within accepted validation criteria when compared to reference PK values. Copyright © 2014 Elsevier B.V. All rights reserved.
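    A sketch of the underlying idea — a one-compartment oral model with repeated dosing, scaled to a single measured steady-state concentration — using the rate constants stated in the abstract (ka ≈ 2 h⁻¹, ke ≈ 0.05 h⁻¹); the once-daily dosing interval and the sample values are assumptions for illustration, not the study's actual equations:

```python
import math

ka, ke, tau = 2.0, 0.05, 24.0   # absorption and elimination rate constants (1/h); assumed 24 h interval

def shape(t):
    """Unit-scaled steady-state concentration profile for a one-compartment
    oral model under repeated dosing (superposition of doses)."""
    return (math.exp(-ke * t) / (1.0 - math.exp(-ke * tau))
            - math.exp(-ka * t) / (1.0 - math.exp(-ka * tau)))

def pk_from_one_sample(c_obs, t_obs):
    """Scale the model to one measured steady-state concentration, then read
    off the peak, trough and AUC over the dosing interval."""
    scale = c_obs / shape(t_obs)
    # time of the steady-state peak, from d(shape)/dt = 0
    tmax = math.log((ka * (1.0 - math.exp(-ke * tau))) /
                    (ke * (1.0 - math.exp(-ka * tau)))) / (ka - ke)
    cpeak = scale * shape(tmax)
    ctrough = scale * shape(tau)
    auc = scale * (1.0 / ke - 1.0 / ka)   # analytic integral of shape over one interval
    return cpeak, ctrough, auc

# hypothetical single sample: 5.0 units measured 6 h post-dose
cpeak, ctrough, auc = pk_from_one_sample(5.0, 6.0)
```

    Because the model is linear in dose, one concentration fixes the scale and the remaining parameters follow in closed form, which is what makes a one-sample approximation possible.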

  14. Calculating LOAEL/NOAEL uncertainty factors for wildlife species in ecological risk assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suedel, B.C.; Clifford, P.A.; Ludwig, D.F.

    1995-12-31

    Terrestrial ecological risk assessments frequently require derivation of NOAELs or toxicity reference values (TRVs) against which to compare exposure estimates. However, much of the available information from the literature is LOAELs, not NOAELs. Lacking specific guidance, arbitrary factors of ten are sometimes employed for extrapolating NOAELs from LOAELs. In this study, the scientific literature was searched to obtain chronic and subchronic studies reporting NOAEL and LOAEL data for wildlife and laboratory species. Results to date indicate a mean conversion factor of 4.0 (± 2.61 S.D.), with a minimum of 1.6 and a maximum of 10 for 106 studies across several classes of compounds (e.g., metals, pesticides, volatiles, etc.). These data suggest that an arbitrary factor-of-10 conversion is unnecessarily restrictive for extrapolating NOAELs from LOAELs and that a factor of 4-5 would be more realistic for deriving toxicity reference values for wildlife species. Applying less arbitrary and more realistic conversion factors in ecological risk assessments will allow for a more accurate estimate of NOAEL values for assessing risk to wildlife populations.
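    The extrapolation itself is simple division of the LOAEL by an uncertainty factor; the record's point is the size of that factor (mean 4.0 rather than the default 10). With a hypothetical LOAEL:

```python
def noael_from_loael(loael, factor=4.0):
    """Extrapolated NOAEL = LOAEL / uncertainty factor. The default of 4.0
    reflects the study's mean conversion factor (4.0 +/- 2.61 S.D.); the
    traditional arbitrary default is 10."""
    return loael / factor

loael = 12.0                                  # hypothetical LOAEL, mg/kg-day
trv_default = noael_from_loael(loael, 10.0)   # traditional factor of 10
trv_study   = noael_from_loael(loael, 4.0)    # study-supported factor of 4
```

    The factor-of-4 TRV is 2.5× less restrictive than the factor-of-10 value, which is the practical consequence the abstract argues for.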

  15. Photolysis Rate Coefficient Calculations in Support of SOLVE Campaign

    NASA Technical Reports Server (NTRS)

    Lloyd, Steven A.; Swartz, William H.

    2001-01-01

    The objectives for this SOLVE project were 3-fold. First, we sought to calculate a complete set of photolysis rate coefficients (j-values) for the campaign along the ER-2 and DC-8 flight tracks. En route to this goal, it would be necessary to develop a comprehensive set of input geophysical conditions (e.g., ozone profiles), derived from various climatological, aircraft, and remotely sensed datasets, in order to model the radiative transfer of the atmosphere accurately. These j-values would then need validation by comparison with flux-derived j-value measurements. The second objective was to analyze chemistry along back trajectories using the NASA/Goddard chemistry trajectory model initialized with measurements of trace atmospheric constituents. This modeling effort would provide insight into the completeness of current measurements and the chemistry of Arctic wintertime ozone loss. Finally, we sought to coordinate stellar occultation measurements of ozone (and thus ozone loss) during SOLVE using the MSX/UVISI satellite instrument. Such measurements would determine ozone loss during the Arctic polar night and represent the first significant science application of space-based stellar occultation in the Earth's atmosphere.

  16. School system evaluation by value added analysis under endogeneity.

    PubMed

    Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien

    2014-01-01

    Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain the prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression of the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.
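    The instrumental-variable logic can be sketched with simulated data: when a covariate is correlated with the unobserved random effect (endogeneity), OLS is biased, while an instrument correlated with the covariate but not with the effect recovers the true coefficient. All data below are simulated for illustration:

```python
import random

random.seed(7)
n, beta = 5000, 2.0
z = [random.gauss(0.0, 1.0) for _ in range(n)]   # instrument: drives x, independent of u
u = [random.gauss(0.0, 1.0) for _ in range(n)]   # unobserved effect (e.g. school random effect)
x = [z[i] + u[i] + random.gauss(0.0, 0.5) for i in range(n)]   # endogenous covariate (e.g. prior score)
y = [beta * x[i] + u[i] for i in range(n)]       # outcome depends on x and on u

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)   # biased upward: Cov(x, u) != 0
iv  = cov(z, y) / cov(z, x)   # consistent: Cov(z, u) = 0
```

    Here OLS converges to beta + Cov(x,u)/Var(x) ≈ 2.44 while the IV estimate converges to the true beta = 2; in the paper the same principle is applied inside the hierarchical Gaussian model.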

  17. The 3D elevation program - Precision agriculture and other farm practices

    USGS Publications Warehouse

    Sugarbaker, Larry J.; Carswell, Jr., William J.

    2016-12-27

    A founding motto of the Natural Resources Conservation Service (NRCS), originally the Soil Conservation Service (SCS), explains that “If we take care of the land, it will take care of us.” Digital elevation models (DEMs; see fig. 1) are derived from light detection and ranging (lidar) data and can be processed to derive values such as slope angle, aspect, and topographic curvature. These three measurements are the principal parameters of the NRCS Lidar-Enhanced Soil Survey (LESS) model, which improves the precision of soil surveys by more accurately displaying slope and soil patterns while increasing the objectivity and science in line placement. As combined resources, DEMs, LESS model outputs, and similar derived datasets are essential for conserving soil, wetlands, and other natural resources managed and overseen by the NRCS and other Federal and State agencies.
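    Slope (and similarly aspect and curvature) can be derived from a lidar DEM by finite differences over the elevation grid; a minimal central-difference sketch on a synthetic 10%-grade surface (production tools typically use Horn's 3×3 method instead):

```python
import math

def slope_deg(z, r, c, cell):
    """Slope in degrees at interior cell (r, c) of a square-grid DEM `z`
    (2D list of elevations, metres) with grid spacing `cell` (metres)."""
    dzdx = (z[r][c + 1] - z[r][c - 1]) / (2.0 * cell)   # east-west gradient
    dzdy = (z[r + 1][c] - z[r - 1][c]) / (2.0 * cell)   # north-south gradient
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

cell = 1.0
# synthetic DEM: a plane rising 10 cm per metre in the x direction (10% grade)
z = [[0.1 * c * cell for c in range(5)] for r in range(5)]
```

    For this plane the slope at any interior cell is atan(0.1) ≈ 5.71°; aspect would follow from atan2 of the two gradients, with sign conventions that vary between GIS packages.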

  18. 76 FR 54270 - Self-Regulatory Organizations; Chicago Mercantile Exchange, Inc.; Notice of Filing and Order...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... market transparency for over-the-counter derivatives markets, promoting the prompt and accurate clearance... derivatives clearing organization under the Commodity Exchange Act (``CEA'') and do not significantly affect... clearing agency be designed to promote the prompt and accurate clearance and settlement of derivative...

  19. Highly efficient classification and identification of human pathogenic bacteria by MALDI-TOF MS.

    PubMed

    Hsieh, Sen-Yung; Tseng, Chiao-Li; Lee, Yun-Shien; Kuo, An-Jing; Sun, Chien-Feng; Lin, Yen-Hsiu; Chen, Jen-Kun

    2008-02-01

    Accurate and rapid identification of pathogenic microorganisms is of critical importance in disease treatment and public health. Conventional workflows are time-consuming, and procedures are multifaceted. MS can be an alternative but is limited by low efficiency for amino acid sequencing as well as low reproducibility for spectrum fingerprinting. We systematically analyzed the feasibility of applying MS for rapid and accurate bacterial identification. Directly applying bacterial colonies without further protein extraction to MALDI-TOF MS analysis revealed rich peak contents and high reproducibility. The MS spectra derived from 57 isolates comprising six human pathogenic bacterial species were analyzed using both unsupervised hierarchical clustering and supervised model construction via the Genetic Algorithm. Hierarchical clustering analysis categorized the spectra into six groups precisely corresponding to the six bacterial species. Precise classification was also maintained in an independently prepared set of bacteria even when the number of m/z values was reduced to six. In parallel, classification models were constructed via Genetic Algorithm analysis. A model containing 18 m/z values accurately classified independently prepared bacteria and identified those species originally not used for model construction. Moreover, bacteria at fewer than 10^4 cells and different species in bacterial mixtures were identified using the classification model approach. In conclusion, the application of MALDI-TOF MS in combination with suitable model construction provides a highly accurate method for bacterial classification and identification. The approach can identify bacteria of low abundance even in mixed flora, suggesting that rapid and accurate bacterial identification using MS techniques even before culture can be attained in the near future.
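    A much simpler stand-in for the record's classification models — not the Genetic Algorithm used in the study — is nearest-reference matching of peak sets by Jaccard similarity; the species fingerprints below are hypothetical m/z values for illustration only:

```python
def jaccard(a, b):
    """Jaccard similarity of two peak sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# hypothetical reference fingerprints: small sets of discriminating m/z values
reference = {
    "E. coli":   {4365, 5096, 6255, 7274, 9742},
    "S. aureus": {3444, 4306, 5525, 6888, 9625},
}

def classify(peaks):
    """Assign an unknown spectrum's peak set to the most similar reference."""
    return max(reference, key=lambda sp: jaccard(peaks, reference[sp]))

# unknown spectrum sharing 4 of 5 hypothetical E. coli peaks
unknown = {4365, 5096, 6255, 7274, 9000}
```

    The study's point survives even in this toy: a handful of well-chosen m/z values can separate species, which is why the 6- and 18-peak models in the abstract classify accurately.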

  20. Ventriculostomy Simulation Using Patient-Specific Ventricular Anatomy, 3D Printing, and Hydrogel Casting.

    PubMed

    Ryan, Justin R; Chen, Tsinsue; Nakaji, Peter; Frakes, David H; Gonzalez, L Fernando

    2015-11-01

    Educational simulators provide a means for students and experts to learn and refine surgical skills. Educators can leverage the strengths of medical simulators to effectively teach complex and high-risk surgical procedures, such as placement of an external ventricular drain. Our objective was to develop a cost-effective, patient-derived medical simulacrum for cerebral lateral ventriculostomy. A cost-effective, patient-derived medical simulacrum was developed for placement of an external lateral ventriculostomy. Elastomeric and gel casting techniques were used to achieve realistic brain geometry and material properties. 3D printing technology was leveraged to develop accurate cranial properties and dimensions. An economical, gravity-driven pump was developed to provide normal and abnormal ventricular pressures. A small pilot study was performed to gauge simulation efficacy using a technology acceptance model. An accurate geometric representation of the brain was developed with independent lateral cerebral ventricular chambers. A gravity-driven pump pressurized the ventricular cavities to physiologic values. A qualitative study illustrated that the simulation has potential as an educational tool to train medical professionals in the ventriculostomy procedure. The ventricular simulacrum can improve learning in a medical education environment. Rapid prototyping and multi-material casting techniques can produce patient-derived models for cost-effective and realistic surgical training scenarios. Copyright © 2015 Elsevier Inc. All rights reserved.
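    The reservoir height needed for the record's gravity-driven pump follows directly from hydrostatics, h = P/(ρg); assuming a water-like fluid and a physiologic intracranial pressure of ~10 mmHg (normal supine ICP is roughly 7-15 mmHg) gives a column of roughly 14 cm. Values here are illustrative, not the simulator's specification:

```python
RHO = 1000.0      # kg/m^3, density of a water-like working fluid (assumption)
G = 9.81          # m/s^2, gravitational acceleration
MMHG_TO_PA = 133.322

def column_height_m(pressure_mmHg):
    """Reservoir height above the ventricular cavity giving a target gauge
    pressure in a gravity-driven pump: h = P / (rho * g)."""
    return pressure_mmHg * MMHG_TO_PA / (RHO * G)

h_normal   = column_height_m(10.0)   # ~0.136 m for physiologic ICP
h_elevated = column_height_m(25.0)   # ~0.34 m for a pathologic pressure scenario
```

    Raising or lowering the reservoir thus lets the simulator present normal or abnormal ventricular pressures without any powered components.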

  1. A Neural Network Model for K(λ) Retrieval and Application to Global Kpar Monitoring.

    PubMed

    Chen, Jun; Zhu, Yuanli; Wu, Yongsheng; Cui, Tingwei; Ishizaka, Joji; Ju, Yongtao

    2015-01-01

    Accurate estimation of diffuse attenuation coefficients in the visible wavelengths Kd(λ) from remotely sensed data is particularly challenging in global oceanic and coastal waters. The objectives of the present study are to evaluate the applicability of a semi-analytical Kd(λ) retrieval model (SAKM) and Jamet's neural network model (JNNM), and then develop a new neural network Kd(λ) retrieval model (NNKM). Based on the comparison of Kd(λ) predicted by these models with in situ measurements taken from the global oceanic and coastal waters, all of the NNKM, SAKM, and JNNM models work well in Kd(λ) retrievals, but the NNKM model works more stable and accurate than both SAKM and JNNM models. The near-infrared band-based and shortwave infrared band-based combined model is used to remove the atmospheric effects on MODIS data. The Kd(λ) data was determined from the atmospheric corrected MODIS data using the NNKM, JNNM, and SAKM models. The results show that the NNKM model produces <30% uncertainty in deriving Kd(λ) from global oceanic and coastal waters, which is 4.88-17.18% more accurate than SAKM and JNNM models. Furthermore, we employ an empirical approach to calculate Kpar from the NNKM model-derived diffuse attenuation coefficient at visible bands (443, 488, 555, and 667 nm). The results show that our model presents a satisfactory performance in deriving Kpar from the global oceanic and coastal waters with 20.2% uncertainty. The Kpar are quantified from MODIS data atmospheric correction using our model. Comparing with field measurements, our model produces ~31.0% uncertainty in deriving Kpar from Bohai Sea. Finally, the applicability of our model for general oceanographic studies is briefly illuminated by applying it to climatological monthly mean remote sensing reflectance for time ranging from July, 2002- July 2014 at the global scale. 
The results indicate that high Kd(λ) and Kpar values are usually found around coastal zones in high-latitude regions, while low Kd(λ) and Kpar values are usually found in the open oceans around low-latitude regions. These results could improve our knowledge of the underwater light field at both global and basin scales, and could potentially be used in general circulation models to estimate the heat flux between atmosphere and ocean.

  2. Remote Characterization of Biomass Measurements: Case Study of Mangrove Forests

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola E.

    2010-01-01

    Accurately quantifying forest biomass is of crucial importance for climate change studies. By quantifying the amount of above- and below-ground biomass, and consequently the carbon stored in forest ecosystems, we are able to derive estimates of carbon sequestration, emission and storage and help close the carbon budget. Mangrove forests, in addition to providing habitat and nursery grounds for over 1300 animal species, are also an important sink of biomass. Although they constitute only about 3% of the total forested area globally, their carbon storage capacity -- in forested biomass and soil carbon -- is greater than that of tropical forests (Lucas et al., 2007). In addition, the amount of mangrove carbon exported into offshore areas -- in the form of litter and leaves -- is immense, with over 10% of the ocean's dissolved organic carbon originating from mangroves (Dittmar et al., 2006). The measurement of forest above-ground biomass is carried out on two major scales: on the plot scale, biomass can be measured in the field through allometric equation derivation and measurements of forest plots. On the larger scale, the field data are used to calibrate remotely sensed data to obtain stand-wide or even regional estimates of biomass. Currently, biomass can be calculated using average stand biomass values and optical data, such as aerial photography or satellite images (Landsat, MODIS, IKONOS, SPOT, etc.). More recent studies have concentrated on deriving forest biomass values using radar (JERS, SIR-C, SRTM, AIRSAR) and/or lidar (ICESat/GLAS, LVIS) active remote sensing to retrieve more accurate and detailed measurements of forest biomass. The implementation of a generation of new active sensors (UAVSAR, DESDynI, ALOS/PALSAR, TerraX) has prompted the development of new techniques of biomass estimation that combine multiple sensors and datasets to quantify past, current and future biomass stocks.
Focusing on mangrove forest biomass estimation, this book chapter has three main objectives: a) to describe in detail the field methodologies used to derive accurate estimates of biomass in mangrove forests; b) to explain how mangrove forest biomass can be measured using several remote sensing techniques and datasets; and c) to give a detailed explanation of the measurement challenges and errors that arise in each estimate of forest biomass.

  3. Minor elements in lunar olivine as a petrologic indicator

    NASA Technical Reports Server (NTRS)

    Steele, I. M.; Smith, J. V.

    1975-01-01

    Accurate electron microprobe analyses (approximately 50 ppm) were made for Al, Ca, Ti, Cr, Mn, and Ni in Mg-rich olivines which may derive from the early lunar crust or deeper environments. Low Ca contents consistently occur only in olivines from dunitic and troctolitic breccias; spinel troctolite and other rock types have high-Ca olivines, suggesting derivation by near-surface processes. Rock 15445 has olivine with distinctly low CaO (approximately 0.01 wt.%). Chromium ranges to higher values (max. 0.2 wt.% oxide) than in terrestrial harzburgites and lherzolites but is similar to the range in terrestrial komatiites. Divalent rather than trivalent chromium may be indicated because the olivines lack sufficient other elements for charge balance of the latter. NiO values in lunar specimens range from 0.00 to 0.07 wt.%, and a weak anticorrelation with Cr2O3 suggests an oxidation-state effect. Al2O3 values are mostly below 0.04 wt.% and show no obvious correlation with fragment type. TiO2 values lie below 0.13 wt.% and seem to correlate best with crystallization rate and plagioclase content of the host rock. High values of Al2O3 and TiO2 reported by other workers have not been confirmed, and are probably erroneous.

  4. s-wave scattering length of a Gaussian potential

    NASA Astrophysics Data System (ADS)

    Jeszenszki, Peter; Cherny, Alexander Yu.; Brand, Joachim

    2018-04-01

    We provide accurate expressions for the s-wave scattering length for a Gaussian potential well in one, two, and three spatial dimensions. The Gaussian potential is widely used as a pseudopotential in the theoretical description of ultracold-atomic gases, where the s-wave scattering length is a physically relevant parameter. We first describe a numerical procedure to compute the value of the s-wave scattering length from the parameters of the Gaussian, but find that its accuracy is limited in the vicinity of singularities that result from the formation of new bound states. We then derive simple analytical expressions that capture the correct asymptotic behavior of the s-wave scattering length near the bound states. Expressions that are increasingly accurate in wide parameter regimes are found by a hierarchy of approximations that capture an increasing number of bound states. The small number of numerical coefficients that enter these expressions is determined from accurate numerical calculations. The approximate formulas combine the advantages of the numerical and approximate expressions, yielding an accurate and simple description from the weakly to the strongly interacting limit.
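    The numerical procedure the abstract mentions can be sketched as below for the three-dimensional case: integrate the zero-energy radial equation outward and read off the scattering length from the asymptotic wavefunction. The units (ħ = μ = 1), step sizes, and the simple semi-implicit Euler integrator are illustrative choices of this sketch, not the paper's method.

```python
import numpy as np

def scattering_length(V0, sigma, rmax=20.0, n=20000):
    """Zero-energy s-wave scattering length for V(r) = -V0*exp(-r^2/(2*sigma^2)).

    Integrates the zero-energy radial equation u''(r) = 2*V(r)*u(r)
    (units hbar = mu = 1) outward from u(0) = 0 with semi-implicit Euler
    steps, then reads off a = r - u(r)/u'(r) where the potential has
    died away. Accuracy degrades near new bound states, as the abstract notes.
    """
    h = rmax / n
    u, du, r = 0.0, 1.0, 0.0   # u(0) = 0, slope normalization is arbitrary
    for _ in range(n):
        V = -V0 * np.exp(-r**2 / (2.0 * sigma**2))
        du += 2.0 * V * u * h  # update the slope first (semi-implicit Euler)
        u += du * h
        r += h
    return r - u / du

# Free particle: u = r exactly, so a = 0.
print(scattering_length(0.0, 1.0))
# A weak attractive well with no bound state gives a small negative a.
print(scattering_length(0.1, 1.0))
```

For stronger wells the returned value diverges as a new bound state appears, which is the singular behavior the paper's analytical expressions are designed to capture.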

  5. Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data

    NASA Astrophysics Data System (ADS)

    Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.

    2015-06-01

    In the analysis of dynamic PET data, compartmental kinetic analysis methods require accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard among methods used to measure the AIF, it is usually not preferred as it is invasive. An alternative is the simultaneous estimation method (SIME), in which physiological parameters and the AIF are estimated together using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is computationally complex and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously acquired DSC-MRI scan. With the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from an MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.

  6. DNA copy number, including telomeres and mitochondria, assayed using next-generation sequencing.

    PubMed

    Castle, John C; Biery, Matthew; Bouzek, Heather; Xie, Tao; Chen, Ronghua; Misura, Kira; Jackson, Stuart; Armour, Christopher D; Johnson, Jason M; Rohl, Carol A; Raymond, Christopher K

    2010-04-16

    DNA copy number variations occur within populations and aberrations can cause disease. We sought to develop an improved lab-automatable, cost-efficient, accurate platform to profile DNA copy number. We developed a sequencing-based assay of nuclear, mitochondrial, and telomeric DNA copy number that draws on the unbiased nature of next-generation sequencing and incorporates techniques developed for RNA expression profiling. To demonstrate this platform, we assayed UMC-11 cells using 5 million 33 nt reads and found tremendous copy number variation, including regions of single and homogeneous deletions and amplifications to 29 copies; 5 times more mitochondria and 4 times less telomeric sequence than a pool of non-diseased, blood-derived DNA; and that UMC-11 was derived from a male individual. The described assay outputs absolute copy number, outputs an error estimate (p-value), and is more accurate than array-based platforms at high copy number. The platform enables profiling of mitochondrial levels and telomeric length. The assay is lab-automatable and has a genomic resolution and cost that are tunable based on the number of sequence reads.

  7. DNA copy number, including telomeres and mitochondria, assayed using next-generation sequencing

    PubMed Central

    2010-01-01

    Background DNA copy number variations occur within populations and aberrations can cause disease. We sought to develop an improved lab-automatable, cost-efficient, accurate platform to profile DNA copy number. Results We developed a sequencing-based assay of nuclear, mitochondrial, and telomeric DNA copy number that draws on the unbiased nature of next-generation sequencing and incorporates techniques developed for RNA expression profiling. To demonstrate this platform, we assayed UMC-11 cells using 5 million 33 nt reads and found tremendous copy number variation, including regions of single and homogeneous deletions and amplifications to 29 copies; 5 times more mitochondria and 4 times less telomeric sequence than a pool of non-diseased, blood-derived DNA; and that UMC-11 was derived from a male individual. Conclusion The described assay outputs absolute copy number, outputs an error estimate (p-value), and is more accurate than array-based platforms at high copy number. The platform enables profiling of mitochondrial levels and telomeric length. The assay is lab-automatable and has a genomic resolution and cost that are tunable based on the number of sequence reads. PMID:20398377

  8. Pauling's electronegativity equation and a new corollary accurately predict bond dissociation enthalpies and enhance current understanding of the nature of the chemical bond.

    PubMed

    Matsunaga, Nikita; Rogers, Donald W; Zavitsas, Andreas A

    2003-04-18

    Contrary to other recent reports, Pauling's original electronegativity equation, applied as Pauling specified, describes quite accurately homolytic bond dissociation enthalpies of common covalent bonds, including highly polar ones, with an average deviation of +/-1.5 kcal mol(-1) from literature values for 117 such bonds. Dissociation enthalpies are presented for more than 250 bonds, including 79 for which experimental values are not available. Some previous evaluations of accuracy gave misleadingly poor results by applying the equation to cases for which it was not derived and for which it should not reproduce experimental values. Properly interpreted, the results of the equation provide new and quantitative insights into many facets of chemistry such as radical stabilities, factors influencing reactivity in electrophilic aromatic substitutions, the magnitude of steric effects, conjugative stabilization in unsaturated systems, rotational barriers, molecular and electronic structure, and aspects of autoxidation. A new corollary of the original equation expands its applicability and provides a rationale for previously observed empirical correlations. The equation raises doubts about a new bonding theory. Hydrogen is unique in that its electronegativity is not constant.
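    As a minimal numeric illustration of the equation's use (the bond enthalpies and electronegativities below are standard textbook reference values, not numbers taken from this paper, and the arithmetic-mean form is shown; Pauling also discussed a geometric-mean form):

```python
# Pauling's electronegativity equation (arithmetic-mean form):
#   D(A-B) = [D(A-A) + D(B-B)] / 2 + 23 * (chi_A - chi_B)**2   (kcal/mol)
# Illustrative textbook inputs (assumptions, not values from the paper):
#   D(H-H) ~ 104.2 and D(Cl-Cl) ~ 58.0 kcal/mol; chi_H = 2.20, chi_Cl = 3.16.

def pauling_bde(d_aa, d_bb, chi_a, chi_b, k=23.0):
    """Predicted homolytic bond dissociation enthalpy, kcal/mol."""
    return (d_aa + d_bb) / 2.0 + k * (chi_a - chi_b) ** 2

d_hcl = pauling_bde(104.2, 58.0, 2.20, 3.16)
print(round(d_hcl, 1))  # within a few kcal/mol of the experimental H-Cl value
```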

  9. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-view Sensor), EC; Lee's quasi-analytical algorithm, QAA; and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated against SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations by high-performance liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error, as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. Using the SeaDAS (SeaWiFS Data Analysis System) default value for the chlorophyll-specific absorption cross section of phytoplankton (i.e., aph*(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.
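    The final chl step quoted above, using the SeaDAS default cross section, amounts to a one-line conversion; a trivial sketch follows, with the function name my own:

```python
def chl_from_aph443(aph443, aph_star=0.056):
    """Chlorophyll concentration (mg m^-3) from the phytoplankton absorption
    coefficient at 443 nm (m^-1), using the SeaDAS default chlorophyll-specific
    cross section aph*(443) = 0.056 m^2 mg^-1 quoted in the abstract."""
    return aph443 / aph_star

print(chl_from_aph443(0.056))  # -> 1.0 mg m^-3
```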

  10. Modeling Disease Severity in Multiple Sclerosis Using Electronic Health Records

    PubMed Central

    Xia, Zongqi; Secor, Elizabeth; Chibnik, Lori B.; Bove, Riley M.; Cheng, Suchun; Chitnis, Tanuja; Cagan, Andrew; Gainer, Vivian S.; Chen, Pei J.; Liao, Katherine P.; Shaw, Stanley Y.; Ananthakrishnan, Ashwin N.; Szolovits, Peter; Weiner, Howard L.; Karlson, Elizabeth W.; Murphy, Shawn N.; Savova, Guergana K.; Cai, Tianxi; Churchill, Susanne E.; Plenge, Robert M.; Kohane, Isaac S.; De Jager, Philip L.

    2013-01-01

    Objective To optimally leverage the scalability and unique features of the electronic health records (EHR) for research that would ultimately improve patient care, we need to accurately identify patients and extract clinically meaningful measures. Using multiple sclerosis (MS) as a proof of principle, we showcased how to leverage routinely collected EHR data to identify patients with a complex neurological disorder and derive an important surrogate measure of disease severity heretofore only available in research settings. Methods In a cross-sectional observational study, 5,495 MS patients were identified from the EHR systems of two major referral hospitals using an algorithm that includes codified and narrative information extracted using natural language processing. In the subset of patients who receive neurological care at an MS Center where disease measures have been collected, we used routinely collected EHR data to extract two aggregate indicators of MS severity of clinical relevance: the multiple sclerosis severity score (MSSS) and the brain parenchymal fraction (BPF, a measure of whole brain volume). Results The EHR algorithm that identifies MS patients has an area under the curve of 0.958, 83% sensitivity, 92% positive predictive value, and 89% negative predictive value when a 95% specificity threshold is used. The correlation between EHR-derived and true MSSS has a mean R2 = 0.38±0.05, and that between EHR-derived and true BPF has a mean R2 = 0.22±0.08. To illustrate its clinical relevance, the derived MSSS captures the expected difference in disease severity between relapsing-remitting and progressive MS patients after adjusting for sex, age of symptom onset and disease duration (p = 1.56×10−12). Conclusion Incorporation of sophisticated codified and narrative EHR data accurately identifies MS patients and provides estimation of a well-accepted indicator of MS severity that is widely used in research settings but not part of the routine medical records. 
Similar approaches could be applied to other complex neurological disorders. PMID:24244385

  11. Planetary boundary layer height from CALIOP compared to radiosonde over China

    NASA Astrophysics Data System (ADS)

    Zhang, Wanchun; Guo, Jianping; Miao, Yucong; Liu, Huan; Zhang, Yong; Li, Zhengqiang; Zhai, Panmao

    2016-08-01

    Accurate estimation of planetary boundary layer height (PBLH) is key to air quality prediction, weather forecasting, and assessment of regional climate change. PBLH retrieval from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) is expected to complement ground-based measurements owing to the broad spatial coverage of satellites. In this study, CALIOP PBLHs are derived from a combination of Haar wavelet and maximum variance techniques, and are further validated against PBLHs estimated from ground-based lidar at Beijing and Jinhua. Correlation coefficients between PBLHs from ground- and satellite-based lidars are 0.59 at Beijing and 0.65 at Jinhua. Also, PBLH climatologies from CALIOP and radiosondes are compiled over China for the period from 2011 to 2014. Maximum CALIOP-derived PBLHs occur in summer, with lower values in the other seasons. Three matchup scenarios are proposed according to the position of each radiosonde site relative to its closest CALIPSO ground tracks. For each scenario, intercomparisons were performed between CALIOP- and radiosonde-derived PBLHs, and scenario 2 is found to be better than the others using the difference as the criterion. In the early summer afternoon, over 70% of the total radiosonde sites have PBLH values ranging from 1.6 to 2.0 km. Overall, CALIOP-derived PBLHs are consistent with radiosonde-derived PBLHs. To our knowledge, this study is the first intercomparison of PBLH on a large scale using the radiosonde network of China, shedding important light on the data quality of initial CALIOP-derived PBLH results.
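    The Haar wavelet technique named above can be illustrated on a synthetic step profile: the wavelet covariance transform peaks where backscatter drops sharply, which is taken as the boundary layer top. The grid, dilation, and profile values below are toy numbers for illustration, not the study's actual processing:

```python
import numpy as np

def haar_wct(z, f, a, b):
    """Wavelet covariance transform W(a, b) of profile f on grid z,
    using a Haar wavelet of dilation a centred at altitude b
    (+1 on [b - a/2, b], -1 on (b, b + a/2])."""
    dz = z[1] - z[0]
    lower = (z >= b - a / 2) & (z <= b)   # wavelet = +1
    upper = (z > b) & (z <= b + a / 2)    # wavelet = -1
    return (f[lower].sum() - f[upper].sum()) * dz / a

# Synthetic attenuated-backscatter profile: strong below a 1.5 km PBL top.
z = np.arange(0.0, 3.0, 0.01)            # altitude grid (km)
f = np.where(z < 1.5, 1.0, 0.2)
a = 0.3                                   # dilation (km)
candidates = np.arange(0.5, 2.5, 0.01)
w = np.array([haar_wct(z, f, a, b) for b in candidates])
pblh = candidates[np.argmax(w)]
print(pblh)  # maximum WCT marks the sharp decrease near 1.5 km
```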

  12. Protocol to determine accurate absorption coefficients for iron containing transferrins

    PubMed Central

    James, Nicholas G.; Mason, Anne B.

    2008-01-01

    An accurate protein concentration is an essential component of most biochemical experiments. The simplest method to determine a protein concentration is by measuring the A280, using an absorption coefficient (ε), and applying the Beer-Lambert law. For some metalloproteins (including all transferrin family members) difficulties arise because metal binding contributes to the A280 in a non-linear manner. The Edelhoch method is based on the assumption that the ε of a denatured protein in 6 M guanidine-HCl can be calculated from its number of tryptophan, tyrosine, and cystine residues. We extend this method to derive ε values for both apo- and iron-bound transferrins. The absorbance of an identical amount of iron-containing protein is measured in: 1) 6 M guanidine-HCl (denatured, no iron); 2) pH 7.4 buffer (non-denatured, with iron); and 3) pH 5.6 (or lower) buffer with a chelator (non-denatured, without iron). Since the iron-free apo-protein has an identical A280 under non-denaturing conditions, the difference between the reading at pH 7.4 and at the lower pH directly reports the contribution of the iron. The method is fast and consumes ~1 mg of sample. The ability to determine accurate ε values for transferrin mutants that bind iron with a wide range of affinities has proven very useful; furthermore, a similar approach could easily be followed to determine ε values for other metalloproteins in which metal binding contributes to the A280. PMID:18471984
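    The Edelhoch-style arithmetic behind this protocol can be sketched as follows. The per-residue values (Trp 5500, Tyr 1490, cystine 125 M^-1 cm^-1) are the commonly cited ones, and the residue counts are purely illustrative; none of these numbers are taken from the paper:

```python
def edelhoch_epsilon(n_trp, n_tyr, n_cystine):
    """Molar absorption coefficient at 280 nm (M^-1 cm^-1) of a denatured
    protein, summed from commonly used per-residue values (assumed here:
    Trp 5500, Tyr 1490, cystine 125)."""
    return n_trp * 5500 + n_tyr * 1490 + n_cystine * 125

def concentration(a280, epsilon, path_cm=1.0):
    """Beer-Lambert law: molar concentration from A280 and path length."""
    return a280 / (epsilon * path_cm)

# Illustrative residue counts (hypothetical protein, not transferrin's actual counts).
eps = edelhoch_epsilon(n_trp=8, n_tyr=21, n_cystine=19)
print(eps)                      # summed epsilon, M^-1 cm^-1
print(concentration(0.5, eps))  # molar concentration for A280 = 0.5, 1 cm cell
```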

  13. (abstract) A Comparison Between Measurements of the F-layer Critical Frequency and Values Derived from the PRISM Adjustment Algorithm Applied to Total Electron Content Data in the Equatorial Region

    NASA Technical Reports Server (NTRS)

    Mannucci, A. J.; Anderson, D. N.; Abdu, A. M.

    1994-01-01

    The Parametrized Real-Time Ionosphere Specification Model (PRISM) is a global ionospheric specification model that can incorporate real-time data to compute accurate electron density profiles. Time series of computed and measured data are compared in this paper. This comparison can be used to suggest methods of optimizing the PRISM adjustment algorithm for TEC data obtained at low altitudes.

  14. The recovery of microwave scattering parameters from scatterometric measurements with special application to the sea

    NASA Technical Reports Server (NTRS)

    Claassen, J. P.; Fung, A. K.

    1975-01-01

    As part of an effort to demonstrate the value of the microwave scatterometer as a remote sea wind sensor, the interaction between an arbitrarily polarized scatterometer antenna and a noncoherent distributive target was derived and applied to develop a measuring technique to recover all the scattering parameters. The results are helpful for specifying antenna polarization properties for accurate retrieval of the parameters not only for the sea but also for other distributive scenes.

  15. Evaluating the diagnostic utility of applying a machine learning algorithm to diffusion tensor MRI measures in individuals with major depressive disorder.

    PubMed

    Schnyer, David M; Clasen, Peter C; Gonzalez, Christopher; Beevers, Christopher G

    2017-06-30

    Using MRI to diagnose mental disorders has been a long-term goal. Despite this, the vast majority of prior neuroimaging work has been descriptive rather than predictive. The current study applies support vector machine (SVM) learning to MRI measures of brain white matter to classify adults with Major Depressive Disorder (MDD) and healthy controls. In a precisely matched group of individuals with MDD (n = 25) and healthy controls (n = 25), SVM learning accurately (74%) classified patients and controls across a brain map of white matter fractional anisotropy (FA) values. The study revealed three main findings: 1) SVM applied to DTI-derived FA maps can accurately classify MDD vs. healthy controls; 2) prediction is strongest when only right hemisphere white matter is examined; and 3) removing FA values from a region identified by univariate contrast as significantly different between MDD and healthy controls does not change the SVM accuracy. These results indicate that SVM learning applied to neuroimaging data can classify the presence versus absence of MDD and that predictive information is distributed across brain networks rather than being highly localized. Finally, MDD group differences revealed through typical univariate contrasts do not necessarily reveal patterns that provide accurate predictive information.

  16. Accurate assessment and identification of naturally occurring cellular cobalamins.

    PubMed

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V; Moreira, Edward S; Brasch, Nicola E; Jacobsen, Donald W

    2008-01-01

    Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo beta-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Experiments were designed to: 1) assess beta-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable beta-axial ligands. The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other as-yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., "cold trapping"), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetected in extracts prepared with cold trapping. This demonstrates that beta-ligand exchange occurs with non-covalently bound beta-ligands. The exception to this observation is cyanocobalamin, with its non-exchangeable CN- group. It is now possible to obtain accurate profiles of cellular cobalamins.

  17. Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Yang, Lie

    2018-05-01

    To achieve accurate and completely autonomous navigation for spacecraft, inertial/celestial integrated navigation has attracted increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. The Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition under which PDOP reaches its minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulations are conducted. The results of the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) are compared, and factors affecting navigation accuracy are studied. The simulation results indicate that the proposed observation scheme achieves accurate positioning performance and that the results of the EKF and UKF are similar.
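    For context, the generic geometry-matrix form of PDOP can be computed as below. This is the standard GNSS-style definition, shown only to illustrate the quantity being minimized; the paper's own stellar-refraction PDOP equation is derived differently:

```python
import numpy as np

def pdop(unit_vectors):
    """Position Dilution of Precision from unit line-of-sight vectors.

    Builds the GNSS-style geometry matrix H with rows [-ux, -uy, -uz, 1]
    (three position unknowns plus a clock/bias term) and returns the square
    root of the trace of the position block of (H^T H)^-1.
    """
    u = np.asarray(unit_vectors, dtype=float)
    H = np.hstack([-u, np.ones((u.shape[0], 1))])
    Q = np.linalg.inv(H.T @ H)
    return float(np.sqrt(np.trace(Q[:3, :3])))

# A symmetric tetrahedral observation geometry gives PDOP = 1.5 exactly.
tet = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
print(pdop(tet))
```

Smaller PDOP means the observation geometry amplifies measurement noise less, which is why the scheme above is built around the PDOP-minimizing condition.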

  18. Clinical pitfalls in the diagnosis of ataque de nervios: a case study.

    PubMed

    Lizardi, Dana; Oquendo, Maria A; Graver, Ruth

    2009-09-01

    Ataque de nervios (attack of nerves) is an idiom of distress generally thought of in relation to Caribbean Hispanics. The following case study discusses the presentation of ataque de nervios in a Colombian female. This case study provides insight into a different presentation of ataque de nervios in a new population that clinicians should be aware of in order to ensure accurate diagnosis. Ataque de nervios is a distinct syndrome that does not fully correspond with any single DSM-IV diagnosis. However, there is overlap between symptoms in this condition and those in conventional clinical diagnoses. Common problems in deriving an accurate differential diagnosis are discussed. Implications for treatment are also reviewed, with an emphasis on a comprehensive approach to treatment that supports the client's norms and values.

  19. How can activity-based costing methodology be performed as a powerful tool to calculate costs and secure appropriate patient care?

    PubMed

    Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong

    2007-04-01

    Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential value of ABC methodology in health care derives from more accurate cost calculation than traditional step-down costing, and from the potential to evaluate the quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients undergoing surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that ABC methodology was able to accurately calculate costs and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.
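    The two-stage ABC allocation (cost pools, then activity rates, then cost objects) can be sketched with invented numbers; all pools, drivers, and figures below are hypothetical and for illustration only, not data from this study:

```python
# Toy activity-based costing sketch (all numbers are hypothetical).
activity_pools = {              # annual cost of each activity (dollars)
    "pre-op nursing education": 120000.0,
    "surgery": 900000.0,
    "post-op care": 300000.0,
}
driver_volumes = {              # annual cost-driver volume for each activity
    "pre-op nursing education": 800,    # education sessions
    "surgery": 600,                     # procedures
    "post-op care": 3000,               # care days
}

# Stage 1: activity rate = pool cost / driver volume.
rates = {k: activity_pools[k] / driver_volumes[k] for k in activity_pools}

# Stage 2: cost of one patient episode = sum of rate * driver usage.
episode_usage = {"pre-op nursing education": 2, "surgery": 1, "post-op care": 4}
episode_cost = sum(rates[k] * episode_usage[k] for k in episode_usage)
print(rates)
print(episode_cost)
```

Tracing costs to activities this way is also what makes missing activities visible: an episode that shows zero usage of "pre-op nursing education" stands out in the activity ledger.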

  20. Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation.

    PubMed

    Lu, Jiazhen; Yang, Lie

    2018-05-01

    To achieve accurate and completely autonomous navigation for spacecraft, inertial/celestial integrated navigation has attracted increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. The Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition under which PDOP reaches its minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulations are conducted. The results of the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) are compared, and factors affecting navigation accuracy are studied. The simulation results indicate that the proposed observation scheme achieves accurate positioning performance and that the results of the EKF and UKF are similar.

  1. Derivation of WECC Distributed PV System Model Parameters from Quasi-Static Time-Series Distribution System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Boemer, Jens C.; Vittal, Eknath

    The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters appears to be important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.

  2. Comparison of constitutive flow resistance equations based on the Manning and Chezy equations applied to natural rivers

    USGS Publications Warehouse

    Bjerklie, David M.; Dingman, S. Lawrence; Bolster, Carl H.

    2005-01-01

    A set of conceptually derived in‐bank river discharge–estimating equations (models), based on the Manning and Chezy equations, are calibrated and validated using a database of 1037 discharge measurements in 103 rivers in the United States and New Zealand. The models are compared to a multiple regression model derived from the same data. The comparison demonstrates that in natural rivers, using an exponent on the slope variable of 0.33 rather than the traditional value of 0.5 reduces the variance associated with estimating flow resistance. Mean model uncertainty, assuming a constant value for the conductance coefficient, is less than 5% for a large number of estimates, and 67% of the estimates would be accurate within 50%. The models have potential application where site‐specific flow resistance information is not available and can be the basis for (1) a general approach to estimating discharge from remotely sensed hydraulic data, (2) comparison to slope‐area discharge estimates, and (3) large‐scale river modeling.
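    The modified flow-resistance form discussed above, with a slope exponent of 0.33 in place of the traditional 0.5, can be sketched as follows. The conductance coefficient and the input values are placeholders, not the paper's calibrated numbers:

```python
# Manning-type in-bank discharge model: Q = C * A * R**(2/3) * S**p,
# with p = 0.33 (the abstract's finding) versus the traditional p = 0.5.
# The conductance C and all inputs below are illustrative placeholders.

def discharge(area_m2, hyd_radius_m, slope, conductance=1.0, p=0.33):
    """In-bank discharge estimate (m^3/s) for a natural river cross-section."""
    return conductance * area_m2 * hyd_radius_m ** (2.0 / 3.0) * slope ** p

q_mod = discharge(200.0, 3.0, 1e-4)           # slope exponent 0.33
q_trad = discharge(200.0, 3.0, 1e-4, p=0.5)   # traditional exponent 0.5
print(q_mod, q_trad)
```

Because natural channel slopes are much smaller than 1, lowering the exponent damps the model's sensitivity to slope, which is the mechanism behind the reduced variance the abstract reports.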

  3. Intervals for posttest probabilities: a comparison of 5 methods.

    PubMed

    Mossman, D; Berger, J O

    2001-01-01

    Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
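    The "simple simulation" flavor of the objective Bayesian approach can be sketched as below. The function name, the uniform Beta priors, and the example counts are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def posttest_interval(tp, fn, tn, fp, d, n, draws=20000, seed=0):
    """Simulation-based interval for the positive predictive value:
    sample sensitivity, specificity, and pretest probability from Beta
    posteriors (uniform priors), push each draw through Bayes' rule,
    and take the central 95% of the resulting posttest probabilities.
    """
    rng = np.random.default_rng(seed)
    sens = rng.beta(tp + 1, fn + 1, draws)
    spec = rng.beta(tn + 1, fp + 1, draws)
    prev = rng.beta(d + 1, n - d + 1, draws)
    post = prev * sens / (prev * sens + (1 - prev) * (1 - spec))
    return np.percentile(post, [2.5, 97.5])

# Hypothetical study: sensitivity 80/100, specificity 90/100, pretest 30/100.
lo, hi = posttest_interval(tp=80, fn=20, tn=90, fp=10, d=30, n=100)
print(lo, hi)
```

This is exactly the kind of spreadsheet-sized simulation the abstract recommends for small-sample situations, where the closed-form interval methods break down.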

  4. Derivation of Improved Surface and TOA Broadband Fluxes Using CERES-derived Narrowband-to-Broadband Coefficients

    NASA Technical Reports Server (NTRS)

    Khaiyer, Mandana M.; Doelling, David R.; Chan, Pui K.; Nordeen, MIchele L.; Palikonda, Rabindra; Yi, Yuhong; Minnis, Patrick

    2006-01-01

    Satellites can provide global coverage of a number of climatically important radiative parameters, including broadband (BB) shortwave (SW) and longwave (LW) fluxes at the top of the atmosphere (TOA) and surface. These parameters can be estimated from narrowband (NB) Geostationary Operational Environmental Satellite (GOES) data, but their accuracy is highly dependent on the validity of the narrowband-to-broadband (NB-BB) conversion formulas that are used to convert the NB fluxes to broadband values. The formula coefficients have historically been derived by regressing matched polar-orbiting satellite BB fluxes or radiances with their NB counterparts from GOES (e.g., Minnis et al., 1984). More recently, the coefficients have been based on matched Earth Radiation Budget Experiment (ERBE) and GOES-6 data (Minnis and Smith, 1998). The Clouds and the Earth's Radiant Energy System (CERES; see Wielicki et al., 1998) project has recently developed much improved Angular Distribution Models (ADM; Loeb et al., 2003) and has higher resolution data compared to ERBE. A limited set of coefficients was also derived from matched GOES-8 and CERES data taken on the Tropical Rainfall Measuring Mission (TRMM) satellite (Chakrapani et al., 2003; Doelling et al., 2003). The NB-BB coefficients derived from CERES and the GOES suite should yield more accurate BB fluxes than those from ERBE, but are limited spatially and seasonally. With CERES data taken from Terra and Aqua, it is now possible to derive more reliable NB-BB coefficients for any given area. Better TOA fluxes should translate to improved surface radiation fluxes derived using various algorithms. As part of an ongoing effort to provide accurate BB flux estimates for the Atmospheric Radiation Measurement (ARM) Program, this paper documents the derivation of new NB-BB coefficients for the ARM Southern Great Plains (SGP) domain and for the Darwin region of the Tropical Western Pacific (DTWP) domain.
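    The regression step behind an NB-BB conversion can be sketched as a simple least-squares fit of matched BB fluxes against NB fluxes. The linear form and the synthetic flux values below are illustrative assumptions; published fits often include quadratic and scene-dependent terms.

```python
def fit_linear(nb, bb):
    """Ordinary least squares for bb ≈ a + b * nb."""
    n = len(nb)
    mx = sum(nb) / n
    my = sum(bb) / n
    sxx = sum((x - mx) ** 2 for x in nb)
    sxy = sum((x - mx) * (y - my) for x, y in zip(nb, bb))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Synthetic matched NB/BB fluxes (W/m^2), purely illustrative:
nb = [50.0, 80.0, 120.0, 160.0, 200.0]
bb = [5.0 + 1.2 * x for x in nb]  # exact linear relation for the demo
a, b = fit_linear(nb, bb)
```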

  5. SU(N ) fermions in a one-dimensional harmonic trap

    NASA Astrophysics Data System (ADS)

    Laird, E. K.; Shi, Z.-Y.; Parish, M. M.; Levinsen, J.

    2017-09-01

    We conduct a theoretical study of SU(N) fermions confined by a one-dimensional harmonic potential. First, we introduce a numerical approach for solving the trapped interacting few-body problem, by which one may obtain accurate energy spectra across the full range of interaction strengths. In the strong-coupling limit, we map the SU(N) Hamiltonian to a spin-chain model. We then show that an existing, extremely accurate ansatz, derived for a Heisenberg SU(2) spin chain, is extendable to these N-component systems. Lastly, we consider balanced SU(N) Fermi gases that have an equal number of particles in each spin state for N = 2, 3, 4. In the weak- and strong-coupling regimes, we find that the ground-state energies rapidly converge to their expected values in the thermodynamic limit with increasing atom number. This suggests that the many-body energetics of N-component fermions may be accurately inferred from the corresponding few-body systems of N distinguishable particles.

  6. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).

  7. Evaluation of backscatter dose from internal lead shielding in clinical electron beams using EGSnrc Monte Carlo simulations.

    PubMed

    De Vries, Rowen J; Marsh, Steven

    2015-11-08

    Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2-14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower-energy region. A new equation was derived which enables estimation of the electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC-simulated electron backscatter at the lead interface and upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These results show a mean value of 0.997 ± 0.022 to 1σ of the predicted values of electron backscatter. The new empirical equation presented can accurately estimate the electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs.

  8. Evaluation of backscatter dose from internal lead shielding in clinical electron beams using EGSnrc Monte Carlo simulations

    PubMed Central

    Marsh, Steven

    2015-01-01

    Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2–14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower-energy region. A new equation was derived which enables estimation of the electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC-simulated electron backscatter at the lead interface and upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These results show a mean value of 0.997±0.022 to 1σ of the predicted values of electron backscatter. The new empirical equation presented can accurately estimate the electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs. PACS numbers: 87.53.Bn, 87.55.K‐, 87.56.bd PMID:26699566

  9. Correlation of RNA secondary structure statistics with thermodynamic stability and applications to folding.

    PubMed

    Wu, Johnny C; Gardner, David P; Ozer, Stuart; Gutell, Robin R; Ren, Pengyu

    2009-08-28

    The accurate prediction of the secondary and tertiary structure of an RNA with different folding algorithms is dependent on several factors, including the energy functions. However, an RNA higher-order structure cannot be predicted accurately from its sequence based on a limited set of energy parameters. The inter- and intramolecular forces between this RNA and other small molecules and macromolecules, in addition to other factors in the cell such as pH, ionic strength, and temperature, influence the complex dynamics associated with the transition of a single-stranded RNA to its secondary and tertiary structure. Since all of the factors that affect the formation of an RNA's 3D structure cannot be determined experimentally, statistically derived potential energy has been used, as in the prediction of protein structure. In the current work, we evaluate the statistical free energy of various secondary structure motifs, including base-pair stacks, hairpin loops, and internal loops, using their statistical frequency obtained from the comparative analysis of more than 50,000 RNA sequences stored in the RNA Comparative Analysis Database (rCAD) at the Comparative RNA Web (CRW) Site. Statistical energy was computed from the structural statistics for several datasets. While the statistical energy for a base-pair stack correlates with experimentally derived free energy values, suggesting a Boltzmann-like distribution, variation is observed between different molecules and their location on the phylogenetic tree of life. Our statistical energy values calculated for several structural elements were utilized in the Mfold RNA-folding algorithm. The combined statistical energy values for base-pair stacks, hairpins, and internal loop flanks result in a significant improvement in the accuracy of secondary structure prediction; the hairpin flanks contribute the most.
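    The Boltzmann-like relation between motif frequency and free energy can be sketched as an inverse-Boltzmann ("statistical potential") conversion. The RT value, the reference frequency, and the pseudocount handling below are assumptions of this sketch, not the paper's exact formulation.

```python
import math

RT = 0.616  # k_B * T in kcal/mol at ~37 C (assumed temperature scale)

def statistical_energy(count, total, reference_freq=1.0, pseudocount=1.0):
    """Inverse-Boltzmann statistical energy for a structural motif:
    dG_stat = -RT * ln(f_obs / f_ref).
    More frequent motifs map to lower (more favorable) energies.
    A pseudocount avoids log(0) for unobserved motifs."""
    f_obs = (count + pseudocount) / (total + pseudocount)
    return -RT * math.log(f_obs / reference_freq)
```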

  10. A Study of Global Cirrus Cloud Morphology with AIRS Cloud-clear Radiances (CCRs)

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Gong, Jie

    2012-01-01

    Version 6 (V6) AIRS cloud-clear radiances (CCR) are used to derive the cloud-induced radiance (Tcir = Tb - CCR) at infrared frequencies whose weighting functions peak in the middle troposphere. The significantly improved V6 CCR product allows a more accurate estimation of the expected clear-sky radiance as if clouds were absent. In cases where strong cloud scattering is present, the CCR becomes unreliable, which is reflected in its estimated uncertainty, and interpolation is employed to replace this CCR value. We find that Tcir values derived from this CCR method are much better than those from other methods and detect more clouds in the upper and lower troposphere, as well as in the polar regions where cloud detection is particularly challenging. The cloud morphology derived from the V6 test month, as well as some artifacts, will be shown.

  11. Study protocol: combining experimental methods, econometrics and simulation modelling to determine price elasticities for studying food taxes and subsidies (The Price ExaM Study).

    PubMed

    Waterlander, Wilma E; Blakely, Tony; Nghiem, Nhung; Cleghorn, Christine L; Eyles, Helen; Genc, Murat; Wilson, Nick; Jiang, Yannan; Swinburn, Boyd; Jacobi, Liana; Michie, Jo; Ni Mhurchu, Cliona

    2016-07-19

    There is a need for accurate and precise food price elasticities (PE, change in consumer demand in response to change in price) to better inform policy on health-related food taxes and subsidies. The Price Experiment and Modelling (Price ExaM) study aims to: I) derive accurate and precise food PE values; II) quantify the impact of price changes on the quantity and quality of discrete food group purchases; and III) model the potential health and disease impacts of a range of food taxes and subsidies. To achieve this, we will use a novel method that includes a randomised Virtual Supermarket experiment and econometric methods. Findings will be applied in simulation models to estimate population health impact (quality-adjusted life-years [QALYs]) using a multi-state life-table model. The study will consist of four sequential steps: 1. We generate 5000 price sets with random price variation for all 1412 Virtual Supermarket food and beverage products. Then we add systematic price variation for foods to simulate five taxes and subsidies: a fruit and vegetable subsidy and taxes on sugar, saturated fat, salt, and sugar-sweetened beverages. 2. Using an experimental design, 1000 adult New Zealand shoppers complete five household grocery shops in the Virtual Supermarket, where they are randomly assigned to one of the 5000 price sets each time. 3. Output data (i.e., multiple observations of price configurations and purchased amounts) are used as inputs to econometric models (using Bayesian methods) to estimate accurate PE values. 4. A disease simulation model will be run with the new PE values as inputs to estimate QALYs gained and health costs saved for the five policy interventions. The Price ExaM study has the potential to enhance public health and economic disciplines by introducing internationally novel scientific methods to estimate accurate and precise food PE values. These values will be used to model the potential health and disease impacts of various food pricing policy options. Findings will inform policy on health-related food taxes and subsidies. Australian New Zealand Clinical Trials Registry: ACTRN12616000122459 (registered 3 February 2016).
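    The econometric core of step 3, estimating a price elasticity from many (price, quantity) observations, can be sketched as a log-log regression. The Bayesian machinery and demand-system details of the actual study are not reproduced here, and the synthetic data are purely illustrative.

```python
import math

def price_elasticity(prices, quantities):
    """OLS slope of ln(quantity) on ln(price); under a constant-elasticity
    demand model q = k * p^e, the slope is the own-price elasticity e."""
    xs = [math.log(p) for p in prices]
    ys = [math.log(q) for q in quantities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic observations following q = 1000 * p^(-1.2) exactly:
prices = [1.0, 1.5, 2.0, 2.5, 3.0]
quantities = [1000.0 * p ** -1.2 for p in prices]
```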

  12. The methane absorption spectrum near 1.73 μm (5695-5850 cm-1): Empirical line lists at 80 K and 296 K and rovibrational assignments

    NASA Astrophysics Data System (ADS)

    Ghysels, M.; Mondelain, D.; Kassi, S.; Nikitin, A. V.; Rey, M.; Campargue, A.

    2018-07-01

    The methane absorption spectrum is studied at 297 K and 80 K in the center of the Tetradecad, between 5695 and 5850 cm-1. The spectra are recorded by differential absorption spectroscopy (DAS) with a noise-equivalent absorption of about αmin ≈ 1.5 × 10-7 cm-1. Two empirical line lists are constructed, including about 4000 and 2300 lines at 297 K and 80 K, respectively. Lines due to 13CH4 present in natural abundance were identified by comparison with a spectrum of pure 13CH4 recorded under the same temperature conditions. About 1700 empirical values of the lower-state energy level, Eemp, were derived from the ratios of the line intensities at 80 K and 296 K. They provide an accurate temperature dependence for most of the absorption in the region (93% and 82% at 80 K and 296 K, respectively). The quality of the derived empirical values is illustrated by the clear propensity of the corresponding lower-state rotational quantum number, Jemp, to be close to integer values. Using an effective Hamiltonian model derived from a previously published ab initio potential energy surface, about 2060 lines are rovibrationally assigned, adding about 1660 new assignments to those provided in the HITRAN database for 12CH4 in the region.
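    The two-temperature method behind the Eemp values can be sketched as follows. This sketch neglects stimulated emission and treats the partition-function ratio as an input assumption; the published analysis may handle these terms more completely.

```python
import math

C2 = 1.4387769  # second radiation constant hc/k_B, in cm*K

def lower_state_energy(i_hot, i_cold, t_hot=296.0, t_cold=80.0,
                       q_hot_over_cold=1.0):
    """Empirical lower-state energy E'' (cm^-1) from the intensity ratio of
    one line measured at two temperatures. Assuming
        I(T) ∝ exp(-C2 * E'' / T) / Q(T)
    (stimulated emission neglected), the ratio r = I_hot / I_cold gives
        E'' = -ln(r * Q_hot/Q_cold) / (C2 * (1/T_hot - 1/T_cold))."""
    r = i_hot / i_cold
    return -math.log(r * q_hot_over_cold) / (C2 * (1.0 / t_hot - 1.0 / t_cold))
```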

  13. The Emergent Capabilities of Distributed Satellites and Methods for Selecting Distributed Satellite Science Missions

    NASA Astrophysics Data System (ADS)

    Corbin, B. A.; Seager, S.; Ross, A.; Hoffman, J.

    2017-12-01

    Distributed satellite systems (DSS) have emerged as an effective and inexpensive way to conduct space science, thanks to advances in the small satellite industry. However, relatively few space science missions have utilized multiple assets to achieve their primary scientific goals. Previous research on methods for evaluating mission concept designs has shown that distributed systems are rarely competitive with monolithic systems, partially because it is difficult to quantify the added value of DSSs over monolithic systems. Comparatively little research has focused on how DSSs can be used to achieve new, fundamental space science goals that cannot be achieved with monolithic systems, or on how to choose a design from a larger possible tradespace of options. There are seven emergent capabilities of distributed satellites: shared sampling, simultaneous sampling, self-sampling, census sampling, stacked sampling, staged sampling, and sacrifice sampling. These capabilities are either fundamentally, analytically, or operationally unique in their application to distributed science missions, and they can be leveraged to achieve science goals that are either impossible, or difficult and costly, to achieve with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. The RSC method was applied to four case study science missions that leveraged the emergent capabilities of distributed satellites to achieve their primary science goals. In all four case studies, RSC showed how scientific value was gained that would be impossible or unsatisfactory to obtain with monolithic systems, and how changes in design and context variables affected the overall mission value. Each study serves as a blueprint for how to conduct a Pre-Phase A study using these methods to learn more about the tradespace of a particular mission.

  14. A Protocol for High-Accuracy Theoretical Thermochemistry

    NASA Astrophysics Data System (ADS)

    Welch, Bradley; Dawes, Richard

    2017-06-01

    Theoretical studies of spectroscopy and reaction dynamics, including the necessary development of potential energy surfaces, rely on accurate thermochemical information. The Active Thermochemical Tables (ATcT) approach by Ruscic^{1} incorporates data for a large number of chemical species from a variety of sources (both experimental and theoretical) and derives a self-consistent network capable of making extremely accurate estimates of quantities such as temperature-dependent enthalpies of formation. The network provides rigorous uncertainties, and since the values do not rely on a single measurement or calculation, the provenance of each quantity is also obtained. To expand and improve the network, it is desirable to have a reliable protocol, such as the HEAT approach^{2}, for calculating accurate theoretical data. Here we present and benchmark an approach based on explicitly correlated coupled-cluster theory and vibrational perturbation theory (VPT2). Methyldioxy and methyl hydroperoxide are important and well-characterized species in combustion processes and begin the family of similar (ethyl-, propyl-based, etc.) compounds, about whose larger members much less is known. Accurate anharmonic frequencies are essential to describe even the 0 K enthalpies of formation accurately, but are especially important for finite-temperature studies. Here we benchmark the spectroscopic and thermochemical accuracy of the approach, comparing with available data for the smallest systems, and comment on the outlook for larger systems that are less well known and characterized. ^{1}B. Ruscic, Active Thermochemical Tables (ATcT) values based on ver. 1.118 of the Thermochemical Network (2015); available at ATcT.anl.gov ^{2}A. Tajti, P. G. Szalay, A. G. Császár, M. Kállay, J. Gauss, E. F. Valeev, B. A. Flowers, J. Vázquez, and J. F. Stanton. JCP 121, (2004): 11599.

  15. Maximizing the accuracy of field-derived numeric nutrient criteria in water quality regulations.

    PubMed

    McLaughlin, Douglas B

    2014-01-01

    High levels of the nutrients nitrogen and phosphorus can cause unhealthy biological or ecological conditions in surface waters and prevent the attainment of their designated uses. Regulatory agencies are developing numeric criteria for these nutrients in an effort to ensure that the surface waters in their jurisdictions remain healthy and productive, and that water quality standards are met. These criteria are often derived using field measurements that relate nutrient concentrations and other water quality conditions to expected biological responses such as undesirable growth or changes in aquatic plant and animal communities. Ideally, these numeric criteria can be used to accurately "diagnose" ecosystem health and guide management decisions. However, the degree to which numeric nutrient criteria are useful for decision making depends on how accurately they reflect the status or risk of nutrient-related biological impairments. Numeric criteria that have little predictive value are not likely to be useful for managing nutrient concerns. This paper presents information on the role of numeric nutrient criteria as biological health indicators, and the potential benefits of sufficiently accurate criteria for nutrient management. In addition, it describes approaches being proposed or adopted in states such as Florida and Maine to improve the accuracy of numeric criteria and criteria-based decisions. This includes a preference for developing site-specific criteria in cases where sufficient data are available, and the use of nutrient concentration and biological response criteria together in a framework to support designated use attainment decisions. Together with systematic planning during criteria development, the accuracy of field-derived numeric nutrient criteria can be assessed and maximized as a part of an overall effort to manage nutrient water quality concerns. © 2013 SETAC.

  16. Image-derived input function with factor analysis and a-priori information.

    PubMed

    Simončič, Urban; Zanotti-Fregonara, Paolo

    2015-02-01

    Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, the whole-blood TAC was estimated from the postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood areas under the curve were accurately estimated (mean error 1.0±4.3%). The relative Logan-VT error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate VT results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-VT values.
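    The single-sample anchoring idea in steps (c)-(d), rescaling a curve whose shape is known only up to a multiplicative factor so that it passes through one measured blood sample, can be sketched as follows. The linear interpolation and the example numbers are assumptions of this sketch, not the authors' exact procedure.

```python
def interp(times, values, t):
    """Piecewise-linear interpolation of a sampled curve at time t."""
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside sampled range")

def scale_to_sample(times, raw_curve, sample_time, sample_activity):
    """Rescale a relatively-shaped curve so it matches one measured
    blood sample (e.g., venous activity drawn at 40 min)."""
    k = sample_activity / interp(times, raw_curve, sample_time)
    return [k * v for v in raw_curve]
```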

  17. Comparison of Self-Report Versus Sensor-Based Methods for Measuring the Amount of Upper Limb Activity Outside the Clinic.

    PubMed

    Waddell, Kimberly J; Lang, Catherine E

    2018-03-10

    To compare self-reported with sensor-measured upper limb (UL) performance in daily life for individuals with chronic (≥6mo) UL paresis poststroke. Secondary analysis of participants enrolled in a phase II randomized, parallel, dose-response UL movement trial. This analysis compared the accuracy and consistency between self-reported UL performance and sensor-measured UL performance at baseline and immediately post an 8-week intensive UL task-specific intervention. Outpatient rehabilitation. Community-dwelling individuals with chronic (≥6mo) UL paresis poststroke (N=64). Not applicable. Motor Activity Log amount of use scale and the sensor-derived use ratio from wrist-worn accelerometers. There was a high degree of variability between self-reported UL performance and the sensor-derived use ratio. Using sensor-based values as a reference, 3 distinct categories were identified: accurate reporters (reporting difference ±0.1), overreporters (difference >0.1), and underreporters (difference <-0.1). Five of 64 participants accurately self-reported UL performance at baseline and postintervention. Over half of participants (52%) switched categories from pre- to postintervention (eg, moved from underreporting preintervention to overreporting postintervention). For the consistent reporters, no participant characteristics were found to influence whether someone over- or underreported performance compared with sensor-based assessment. Participants did not consistently or accurately self-report UL performance when compared with the sensor-derived use ratio. Although self-report and sensor-based assessments are moderately associated and appear similar conceptually, these results suggest self-reported UL performance is often not consistent with sensor-measured performance and the measures cannot be used interchangeably. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
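    The three-way categorization can be sketched directly from the two measures. The use-ratio definition below (paretic over non-paretic limb active time) is a common convention for wrist-accelerometer data, and how the self-report is mapped onto the same ratio scale is not reproduced here; the helper names are mine.

```python
def use_ratio(paretic_active_s, nonparetic_active_s):
    """Sensor-derived use ratio: active time of the paretic limb divided
    by active time of the non-paretic limb (assumed convention)."""
    return paretic_active_s / nonparetic_active_s

def categorize_reporter(self_reported, sensor_derived, tol=0.1):
    """Classify a participant by the difference between self-reported and
    sensor-derived performance: within ±tol counts as accurate."""
    diff = self_reported - sensor_derived
    if diff > tol:
        return "overreporter"
    if diff < -tol:
        return "underreporter"
    return "accurate"
```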

  18. Millimeter wave spectra of carbonyl cyanide ⋆

    PubMed Central

    Bteich, S.B.; Tercero, B.; Cernicharo, J.; Motiyenko, R.A.; Margulès, L.; Guillemin, J.-C.

    2016-01-01

    Context: More than 30 cyanide derivatives of simple organic molecules have been detected in the interstellar medium, but only one dicarbonitrile has been found, and that only very recently. There is still a lack of high-resolution spectroscopic data, particularly for dinitrile derivatives. The carbonyl cyanide molecule is a new and interesting candidate for astrophysical detection. It could be formed by the reaction of CO and CN radicals, or by substitution of the hydrogen atom by a cyano group in cyanoformaldehyde, HC(=O)CN, which has already been detected in the interstellar medium. Aims: The available data on the rotational spectrum of carbonyl cyanide are limited in terms of quantum number values and frequency range, and do not allow accurate extrapolation of the spectrum into the millimeter-wave range. To provide a firm basis for astrophysical detection of carbonyl cyanide, we studied its millimeter-wave spectrum. Methods: The rotational spectrum of carbonyl cyanide was measured in the frequency range 152-308 GHz and analyzed using Watson's A- and S-reduction Hamiltonians. Results: The ground state and the first excited state of the v5 vibrational mode were assigned and analyzed. More than 1100 distinct frequency lines of the ground state were fitted to produce an accurate set of rotational and centrifugal distortion constants up to the eighth order. The frequency predictions based on these constants should be accurate enough for astrophysical searches in the frequency range up to 500 GHz and for transitions involving energy levels with J ≤ 100 and Ka ≤ 42. Based on these results, we searched for interstellar carbonyl cyanide in available observational data, without success. Thus, we derived upper limits to its column density in different sources. PMID:27738349

  19. Estimation of Filling and Afterload Conditions by Pump Intrinsic Parameters in a Pulsatile Total Artificial Heart.

    PubMed

    Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich

    2016-07-01

    A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) from both left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and are an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
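    A two-predictor linear regression of the kind described (hemodynamic target against motor current and piston position) can be sketched via the normal equations. The functional form matches the text; the synthetic data and coefficient values are illustrative assumptions.

```python
def fit_two_predictor(x1, x2, y):
    """Least-squares fit of y ≈ b0 + b1*x1 + b2*x2 by solving the 3x3
    normal equations with Gaussian elimination (partial pivoting)."""
    rows = [[1.0, a, b] for a, b in zip(x1, x2)]
    # Build X^T X and X^T y.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    aty = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(3)]
    # Forward elimination.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, 3):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 3):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # Back substitution.
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (aty[r] - sum(ata[r][c] * coef[c]
                                for c in range(r + 1, 3))) / ata[r][r]
    return coef

# Synthetic training data (illustrative units): chamber filling as a
# linear function of motor current and piston position.
currents = [1.0, 2.0, 3.0, 1.5, 2.5, 3.5]
positions = [5.0, 6.0, 4.0, 7.0, 5.5, 6.5]
filling = [10.0 + 2.0 * c + 0.5 * p for c, p in zip(currents, positions)]
b0, b1, b2 = fit_two_predictor(currents, positions, filling)
```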

  20. Use of a SPAD-502 meter to measure leaf chlorophyll concentration in Arabidopsis thaliana.

    PubMed

    Ling, Qihua; Huang, Weihua; Jarvis, Paul

    2011-02-01

    The SPAD-502 meter is a hand-held device that is widely used for the rapid, accurate and non-destructive measurement of leaf chlorophyll concentrations. It has been employed extensively in both research and agricultural applications, with a range of different plant species. However, its utility has not been fully exploited in relation to the most intensively studied model organism for plant science research, Arabidopsis thaliana. Measurements with the SPAD-502 meter produce relative SPAD meter values that are proportional to the amount of chlorophyll present in the leaf. In order to convert these values into absolute units of chlorophyll concentration, calibration curves must be derived and utilized. Here, we present calibration equations for Arabidopsis that can be used to convert SPAD values into total chlorophyll per unit leaf area (nmol/cm^2; R^2 = 0.9960) or per unit fresh weight of leaf tissue (nmol/mg; R^2 = 0.9809). These relationships were derived using a series of Arabidopsis chloroplast biogenesis mutants that exhibit chlorophyll deficiencies of varying severity, and were verified by the subsequent analysis of senescent or light-stressed leaves. Our results revealed that the converted SPAD values differ from photometric measurements of solvent-extracted chlorophyll by just ~6% on average.
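    Applying such a calibration curve is a one-line conversion. The quadratic form and the coefficients below are hypothetical placeholders for illustration only; real use requires the published Arabidopsis calibration equations.

```python
def chlorophyll_per_area(spad, a=0.04, b=0.4, c=0.0):
    """Convert a relative SPAD-502 reading to chlorophyll per leaf area
    (nmol/cm^2) with a generic quadratic calibration:
        chl = a*SPAD^2 + b*SPAD + c
    The coefficients here are HYPOTHETICAL placeholders, not the
    published values."""
    return a * spad ** 2 + b * spad + c
```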

  1. Direct Measurements of Quantum Kinetic Energy Tensor in Stable and Metastable Water near the Triple Point: An Experimental Benchmark.

    PubMed

    Andreani, Carla; Romanelli, Giovanni; Senesi, Roberto

    2016-06-16

    This study presents the first direct and quantitative measurement of the nuclear momentum distribution anisotropy and the quantum kinetic energy tensor in stable and metastable (supercooled) water near its triple point, using deep inelastic neutron scattering (DINS). From the experimental spectra, accurate line shapes of the hydrogen momentum distributions are derived using an anisotropic Gaussian and a model-independent framework. The experimental results, benchmarked with those obtained for the solid phase, provide state-of-the-art directional values of the hydrogen mean kinetic energy in metastable water. The determinations of the directional kinetic energies in the supercooled phase provide accurate and quantitative measurements of these dynamical observables in the metastable and stable phases, that is, key insight into the physical mechanisms of the hydrogen quantum state in both disordered and polycrystalline systems. The remarkable findings of this study establish novel insight that will further expand the capacity and accuracy of DINS investigations of nuclear quantum effects in water, and represent reference experimental values for theoretical investigations.

  2. Time-domain prefilter design for enhanced tracking and vibration suppression in machine motion control

    NASA Astrophysics Data System (ADS)

    Cole, Matthew O. T.; Shinonawanik, Praween; Wongratanaphisan, Theeraphong

    2018-05-01

    Structural flexibility can impact negatively on machine motion control systems by causing unmeasured positioning errors and vibration at locations where accurate motion is important for task execution. To compensate for these effects, command signal prefiltering may be applied. In this paper, a new FIR prefilter design method is described that combines finite-time vibration cancellation with dynamic compensation properties. The time-domain formulation exploits the relation between tracking error and the moment values of the prefilter impulse response function. Optimal design solutions for filters having minimum H2 norm are derived and evaluated. The control approach does not require additional actuation or sensing and can be effective even without complete and accurate models of the machine dynamics. Results from implementation and testing on an experimental high-speed manipulator having a Delta robot architecture with directionally compliant end-effector are presented. The results show the importance of prefilter moment values for tracking performance and confirm that the proposed method can achieve significant reductions in both peak and RMS tracking error, as well as settling time, for complex motion patterns.
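    The paper's moment-based prefilter design is not reproduced here, but the underlying idea of finite-time vibration cancellation can be sketched with the classic zero-vibration (ZV) input shaper, a two-impulse FIR prefilter for a single known mode; the natural frequency and damping ratio in the test are assumed example values.

```python
import math

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) input shaper for a mode with
    natural frequency wn (rad/s) and damping ratio zeta (0 <= zeta < 1).

    Returns [(time, amplitude), ...]; convolving the command signal with
    these impulses cancels the residual vibration of the modelled mode.
    """
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    wd = wn * math.sqrt(1.0 - zeta ** 2)        # damped natural frequency
    return [(0.0, 1.0 / (1.0 + K)),             # first impulse at t = 0
            (math.pi / wd, K / (1.0 + K))]      # second at half damped period
```

    The impulse amplitudes sum to one, so the shaped command reaches the same steady-state value as the original; the cost is a delay of half the damped vibration period.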

  3. Thermodynamic Properties of Nitrogen Including Liquid and Vapor Phases from 63K to 2000K with Pressures to 10,000 Bar

    NASA Technical Reports Server (NTRS)

    Jacobsen, Richard T.; Stewart, Richard B.

    1973-01-01

    Tables of thermodynamic properties of nitrogen are presented for the liquid and vapor phases for temperatures from the freezing line to 2000K and pressures to 10,000 bar. The tables include values of density, internal energy, enthalpy, entropy, isochoric heat capacity, isobaric heat capacity, velocity of sound, the isotherm derivative, and the isochor derivative. The thermodynamic property tables are based on an equation of state, P = P(ρ,T), which accurately represents liquid and gaseous nitrogen for the range of pressures and temperatures covered by the tables. Comparisons of property values calculated from the equation of state with measured values for P-ρ-T, heat capacity, enthalpy, latent heat, and velocity of sound are included to illustrate the agreement between the experimental data and the tables of properties presented here. The coefficients of the equation of state were determined by a weighted least squares fit to selected P-ρ-T data and, simultaneously, to isochoric heat capacity data determined by corresponding states analysis from oxygen data, and to data which define the phase equilibrium criteria for the saturated liquid and the saturated vapor. The vapor pressure equation, melting curve equation, and an equation to represent the ideal gas heat capacity are also presented. Estimates of the accuracy of the equation of state, the vapor pressure equation, and the ideal gas heat capacity equation are given. The equation of state, derivatives of the equation, and the integral functions for calculating derived thermodynamic properties are included.

  4. Accurate Measurement of the Optical Constants n and k for a Series of 57 Inorganic and Organic Liquids for Optical Modeling and Detection.

    PubMed

    Myers, Tanya L; Tonkyn, Russell G; Danby, Tyler O; Taubman, Matthew S; Bernacki, Bruce E; Birnbaum, Jerome C; Sharpe, Steven W; Johnson, Timothy J

    2018-04-01

    For optical modeling and other purposes, we have created a library of 57 liquids for which we have measured the complex optical constants n and k. These liquids vary in their nature, ranging in properties that include chemical structure, optical band strength, volatility, and viscosity. By obtaining the optical constants, one can model most optical phenomena in media and at interfaces, including reflection, refraction, and dispersion. Building on the works of others, we have developed improved protocols using multiple path lengths to determine the optical constants n/k for dozens of liquids, including inorganic, organic, and organophosphorus compounds. Detailed descriptions of the measurement and data reduction protocols are discussed; agreement of the derived optical constant n and k values with literature values is presented. We also apply the n/k values to an optical modeling scenario, testing models of 1 µm and 100 µm layers of dimethyl methylphosphonate (DMMP) on both metal (aluminum) and dielectric (soda-lime glass) substrates, which show substantial differences between the reflected signals from highly reflective and less-reflective substrates.
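    With n and k in hand, the simplest reflection model is the normal-incidence Fresnel reflectance at a single interface. This standard textbook formula is a minimal sketch, not the layered-film model used in the study.

```python
def normal_incidence_reflectance(n, k, n_ambient=1.0):
    """Normal-incidence Fresnel power reflectance at a single interface
    between an ambient medium (default: air, n = 1) and a material with
    complex refractive index n + ik."""
    n_complex = complex(n, k)
    r = (n_ambient - n_complex) / (n_ambient + n_complex)  # amplitude coefficient
    return abs(r) ** 2
```

    For a non-absorbing glass-like liquid with n = 1.5 and k = 0, this yields the familiar 4% reflectance; a nonzero k raises the reflectance of the interface.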

  5. Photolysis Rate Coefficient Calculations in Support of SOLVE Campaign

    NASA Technical Reports Server (NTRS)

    Lloyd, Steven A.; Swartz, William H.

    2001-01-01

    The objectives for this SOLVE project were 3-fold. First, we sought to calculate a complete set of photolysis rate coefficients (j-values) for the campaign along the ER-2 and DC-8 flight tracks. En route to this goal, it would be necessary to develop a comprehensive set of input geophysical conditions (e.g., ozone profiles), derived from various climatological, aircraft, and remotely sensed datasets, in order to model the radiative transfer of the atmosphere accurately. These j-values would then need validation by comparison with flux-derived j-value measurements. The second objective was to analyze chemistry along back trajectories using the NASA/Goddard chemistry trajectory model initialized with measurements of trace atmospheric constituents. This modeling effort would provide insight into the completeness of current measurements and the chemistry of Arctic wintertime ozone loss. Finally, we sought to coordinate stellar occultation measurements of ozone (and thus ozone loss) during SOLVE using the Midcourse Space Experiment(MSX)/Ultraviolet and Visible Imagers and Spectrographic Imagers (UVISI) satellite instrument. Such measurements would determine ozone loss during the Arctic polar night and represent the first significant science application of space-based stellar occultation in the Earth's atmosphere.
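    A photolysis rate coefficient is, by its standard definition, the wavelength integral of actinic flux, absorption cross section, and quantum yield. The trapezoidal sketch below uses that definition with hypothetical discretized spectra; it is not the radiative transfer code used for the campaign.

```python
def j_value(wavelengths, actinic_flux, cross_section, quantum_yield):
    """Photolysis rate coefficient j = integral of F(lambda) * sigma(lambda)
    * phi(lambda) d(lambda), evaluated with the trapezoidal rule over a
    discretized wavelength grid."""
    integrand = [f * s * q for f, s, q in
                 zip(actinic_flux, cross_section, quantum_yield)]
    total = 0.0
    for i in range(len(wavelengths) - 1):
        total += 0.5 * (integrand[i] + integrand[i + 1]) * (
            wavelengths[i + 1] - wavelengths[i])
    return total
```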

  6. Annual Greenland Accumulation Rates (2009-2012) from Airborne Snow Radar

    NASA Technical Reports Server (NTRS)

    Koenig, Lora S.; Ivanoff, Alvaro; Alexander, Patrick M.; MacGregor, Joseph A.; Fettweis, Xavier; Panzer, Ben; Paden, John D.; Forster, Richard R.; Das, Indrani; McConnell, Joseph R.; hide

    2016-01-01

    Contemporary climate warming over the Arctic is accelerating mass loss from the Greenland Ice Sheet through increasing surface melt, emphasizing the need to closely monitor its surface mass balance in order to improve sea-level rise predictions. Snow accumulation is the largest component of the ice sheet's surface mass balance, but in situ observations thereof are inherently sparse and models are difficult to evaluate at large scales. Here, we quantify recent Greenland accumulation rates using ultra-wideband (2-6.5 gigahertz) airborne snow radar data collected as part of NASA's Operation IceBridge between 2009 and 2012. We use a semi-automated method to trace the observed radiostratigraphy and then derive annual net accumulation rates for 2009-2012. The uncertainty in these radar-derived accumulation rates is on average 14 percent. A comparison of the radar-derived accumulation rates and contemporaneous ice cores shows that snow radar captures both the annual and long-term mean accumulation rate accurately. A comparison with outputs from a regional climate model (MAR - Modele Atmospherique Regional for Greenland and vicinity) shows that this model matches radar-derived accumulation rates in the ice sheet interior but produces higher values over southeastern Greenland. Our results demonstrate that snow radar can efficiently and accurately map patterns of snow accumulation across an ice sheet and that it is valuable for evaluating the accuracy of surface mass balance models.

  7. New Asteroseismic Scaling Relations Based on the Hayashi Track Relation Applied to Red Giant Branch Stars in NGC 6791 and NGC 6819

    NASA Astrophysics Data System (ADS)

    Wu, T.; Li, Y.; Hekker, S.

    2014-01-01

    Stellar mass M, radius R, and gravity g are important basic parameters in stellar physics. Accurate values for these parameters can be obtained from the gravitational interaction between stars in multiple systems or from asteroseismology. Stars in a cluster are thought to be formed coevally from the same interstellar cloud of gas and dust. The cluster members are therefore expected to have some properties in common. These common properties strengthen our ability to constrain stellar models and asteroseismically derived M, R, and g when tested against an ensemble of cluster stars. Here we derive new scaling relations based on a relation for stars on the Hayashi track (√T_eff ∼ g^p R^q) to determine the masses and metallicities of red giant branch stars in the open clusters NGC 6791 and NGC 6819 from the global oscillation parameters Δν (the large frequency separation) and νmax (the frequency of maximum oscillation power). The Δν and νmax values are derived from Kepler observations. From the analysis of these new relations we derive: (1) direct observational evidence that the masses of red giant branch stars in a cluster are the same within their uncertainties, (2) new methods to derive M and z of the cluster in a self-consistent way from Δν and νmax, with lower intrinsic uncertainties, and (3) the mass dependence in the Δν-νmax relation for red giant branch stars.
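    For orientation, the classical asteroseismic scaling relations (not the new Hayashi-track-based relations derived in this paper) estimate M and R directly from Δν, νmax and Teff. The solar reference values below are common literature choices and vary slightly between authors.

```python
NUMAX_SUN = 3090.0   # muHz, assumed solar reference value
DNU_SUN = 135.1      # muHz
TEFF_SUN = 5777.0    # K

def scaling_mass_radius(numax, dnu, teff):
    """Classical asteroseismic scaling relations.

    Returns (M, R) in solar units:
      M/Msun = (numax/numax_sun)^3 * (dnu/dnu_sun)^-4 * (Teff/Teff_sun)^1.5
      R/Rsun = (numax/numax_sun)   * (dnu/dnu_sun)^-2 * (Teff/Teff_sun)^0.5
    """
    mass = (numax / NUMAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    radius = (numax / NUMAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    return mass, radius
```

    Plugging in the solar reference values returns one solar mass and radius by construction, while the low Δν and νmax typical of a red giant yield a much larger radius.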

  8. Analysis of ground-measured and passive-microwave-derived snow depth variations in midwinter across the Northern Great Plains

    USGS Publications Warehouse

    Chang, A.T.C.; Kelly, R.E.J.; Josberger, E.G.; Armstrong, R.L.; Foster, J.L.; Mognard, N.M.

    2005-01-01

    Accurate estimation of snow mass is important for the characterization of the hydrological cycle at different space and time scales. For effective water resources management, accurate estimation of snow storage is needed. Conventionally, snow depth is measured at a point, and in order to monitor snow depth in a temporally and spatially comprehensive manner, optimum interpolation of the points is undertaken. Yet the spatial representation of point measurements at a basin or on a larger distance scale is uncertain. Spaceborne scanning sensors, which cover a wide swath and can provide rapid repeat global coverage, are ideally suited to augment the global snow information. Satellite-borne passive microwave sensors have been used to derive snow depth (SD) with some success. The uncertainties in point SD and areal SD of natural snowpacks need to be understood if comparisons are to be made between a point SD measurement and satellite SD. In this paper three issues are addressed relating satellite derivation of SD and ground measurements of SD in the northern Great Plains of the United States from 1988 to 1997. First, it is shown that in comparing samples of ground-measured point SD data with satellite-derived 25 × 25 km² pixels of SD from the Defense Meteorological Satellite Program Special Sensor Microwave Imager, there are significant differences in yearly SD values even though the accumulated datasets showed similarities. Second, from variogram analysis, the spatial variability of SD from each dataset was comparable. Third, for a sampling grid cell domain of 1° × 1° in the study terrain, 10 distributed snow depth measurements per cell are required to produce a sampling error of 5 cm or better. This study has important implications for validating SD derivations from satellite microwave observations. © 2005 American Meteorological Society.
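    A sampling requirement of this kind can be reproduced with the standard-error formula for independent point measurements, SE = σ/√n. This sketch ignores spatial correlation, which would increase the required count, and the σ values in the test are assumed illustrations rather than the study's variogram results.

```python
import math

def samples_for_target_error(sigma, target_se):
    """Minimum number of independent point measurements n such that the
    standard error of the mean, sigma / sqrt(n), is <= target_se."""
    return math.ceil((sigma / target_se) ** 2)
```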

  9. pH-dependent equilibrium isotope fractionation associated with the compound specific nitrogen and carbon isotope analysis of substituted anilines by SPME-GC/IRMS.

    PubMed

    Skarpeli-Liati, Marita; Turgeon, Aurora; Garr, Ashley N; Arnold, William A; Cramer, Christopher J; Hofstetter, Thomas B

    2011-03-01

    Solid-phase microextraction (SPME) coupled to gas chromatography/isotope ratio mass spectrometry (GC/IRMS) was used to elucidate the effects of N-atom protonation on the analysis of N and C isotope signatures of selected aromatic amines. Precise and accurate isotope ratios were measured using polydimethylsiloxane/divinylbenzene (PDMS/DVB) as the SPME fiber material at solution pH values that exceeded the pKa of the substituted aniline's conjugate acid by two pH units. Deviations of δ¹⁵N and δ¹³C values from reference measurements by elemental analyzer IRMS were small (<0.9‰) and within the typical uncertainties of isotope ratio measurements by SPME-GC/IRMS. Under these conditions, the detection limits for accurate isotope ratio measurements were between 0.64 and 2.1 mg L⁻¹ for δ¹⁵N and between 0.13 and 0.54 mg L⁻¹ for δ¹³C, respectively. Substantial inverse N isotope fractionation was observed by SPME-GC/IRMS as the fraction of protonated species increased with decreasing pH, leading to deviations of -20‰, while the corresponding δ¹³C values were largely invariant. From isotope ratio analysis at different solution pHs and theoretical calculations by density functional theory, we derived equilibrium isotope effects, EIEs, pertinent to aromatic amine protonation of 0.980 and 1.001 for N and C, respectively, which were very similar for all compounds investigated. Our work shows that N-atom protonation can compromise accurate compound-specific N isotope analysis of aromatic amines.
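    The species fractions that drive the observed pH dependence follow directly from the Henderson-Hasselbalch relation. The sketch below computes the protonated fraction and a simple two-component mixing of isotope signatures; the δ values in the test are chosen purely for illustration, not taken from the study.

```python
def protonated_fraction(ph, pka):
    """Fraction of the aromatic amine present as the conjugate acid,
    from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

def observed_delta(ph, pka, delta_protonated, delta_neutral):
    """Concentration-weighted isotope signature of the two-species mixture."""
    f = protonated_fraction(ph, pka)
    return f * delta_protonated + (1.0 - f) * delta_neutral
```

    At pH = pKa the two species are equally abundant, and two pH units above the pKa the protonated fraction falls below 1%, consistent with the measurement conditions described above.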

  10. Atomic Physics with the Goddard High Resolution Spectrograph on the Hubble Space Telescope. III; Oscillator Strengths for Neutral Carbon

    NASA Technical Reports Server (NTRS)

    Zsargo, J.; Federman, S. R.; Cardelli, Jason A.

    1997-01-01

    High quality spectra of interstellar absorption from C I toward β¹ Sco, ρ Oph A, and χ Oph were obtained with the Goddard High Resolution Spectrograph on HST. Many weak lines were detected within the observed wavelength intervals: 1150-1200 Å for β¹ Sco and 1250-1290 Å for ρ Oph A and χ Oph. Curve-of-growth analyses were performed in order to extract accurate column densities and Doppler parameters from lines with precise laboratory-based f-values. These column densities and b-values were used to obtain a self-consistent set of f-values for all the observed C I lines. A particularly important constraint was the need to reproduce data for more than one line of sight. For about 50% of the lines, the derived f-values differ appreciably from the values quoted by Morton.

  11. Marine Geoid Undulation Assessment Over South China Sea Using Global Geopotential Models and Airborne Gravity Data

    NASA Astrophysics Data System (ADS)

    Yazid, N. M.; Din, A. H. M.; Omar, K. M.; Som, Z. A. M.; Omar, A. H.; Yahaya, N. A. Z.; Tugi, A.

    2016-09-01

    Global geopotential models (GGMs) are vital for computing global geoid undulation heights. Based on the ellipsoidal heights from Global Navigation Satellite System (GNSS) observations, accurate orthometric heights can be calculated by adding precise and accurate geoid undulation model information. GGMs also incorporate data from satellite gravity missions such as GRACE, GOCE and CHAMP, which helps to enhance the global geoid undulation data. A statistical assessment has been made between geoid undulations derived from 4 GGMs and the airborne gravity data provided by the Department of Survey and Mapping Malaysia (DSMM). The goal of this study is the selection of the GGM that best matches statistically with the geoid undulations of the airborne gravity data under the Marine Geodetic Infrastructures in Malaysian Waters (MAGIC) Project over marine areas in Sabah. The correlation coefficients and the RMS values for the geoid undulations of each GGM and the airborne gravity data were computed. The correlation coefficient between EGM 2008 and the airborne gravity data is 1, while the RMS value is 0.1499. In this study, the RMS value of EGM 2008 is the lowest among the models considered, and the statistical analysis clearly shows that EGM 2008 is the best fit for marine geoid undulations throughout the South China Sea.

  12. Calculation of water equivalent thickness of materials of arbitrary density, elemental composition and thickness in proton beam irradiation

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Newhauser, Wayne D.

    2009-03-01

    In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, the WET calculations required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. In addition, we also tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No 398, 2000, IAEA, Vienna) and stopping power ratio approximation. The results of these comparisons revealed that most methods were accurate for cases involving thin or low-Z targets. However, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
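    For radiologically thin targets, the simplest of the approximations examined reduces to scaling the physical thickness by the ratio of linear stopping powers. The sketch below uses that stopping-power-ratio approximation with assumed illustrative values, not the paper's Bragg-Kleeman or Bethe-Bloch formulas.

```python
def wet_thin_target(thickness, density, mass_stopping_power,
                    density_water=1.0, mass_stopping_power_water=1.0):
    """Water equivalent thickness of a thin slab via the stopping-power-ratio
    approximation: WET ~= t * (rho_m * S_m) / (rho_w * S_w), where S denotes
    the mass stopping power evaluated at the beam energy."""
    return thickness * (density * mass_stopping_power) / (
        density_water * mass_stopping_power_water)
```

    By construction, a slab of water maps to its own thickness, and a denser material with a comparable mass stopping power yields a proportionally larger WET.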

  13. Analytical study to define a helicopter stability derivative extraction method, volume 1

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1973-01-01

    A method is developed for extracting six degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter followed by an extended Kalman filter, (2) identifying a derivative estimate with a least squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.

  14. Computation of Nonlinear Backscattering Using a High-Order Numerical Method

    NASA Technical Reports Server (NTRS)

    Fibich, G.; Ilan, B.; Tsynkov, S.

    2001-01-01

    The nonlinear Schrodinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.

  15. Experimentally Derived Beta Corrections to Accurately Model the Fatigue Crack Growth Behavior at Cold Expanded Holes in 2024-T351 Aluminum Alloys

    DTIC Science & Technology

    2008-08-01

    was attached to the upper grip and the runout was checked as the lower grip was rotated. The maximum runout was recorded as 0.021 inch. Due to the...difficulties with adjusting the Interlaken test frame, this runout value was noted but considered acceptable. 2.3 Fatigue Testing Procedures...easier to polish the specimen to a mirror finish. The polishing of the specimen was performed using an electric Dremel with a polishing wheel attached

  16. Bayesian operational modal analysis with asynchronous data, Part II: Posterior uncertainty

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Chen; Au, Siu-Kui

    2018-01-01

    A Bayesian modal identification method has been proposed in the companion paper that allows the most probable values of modal parameters to be determined using asynchronous ambient vibration data. This paper investigates the identification uncertainty of modal parameters in terms of their posterior covariance matrix. Computational issues are addressed. Analytical expressions are derived to allow the posterior covariance matrix to be evaluated accurately and efficiently. Synthetic, laboratory and field data examples are presented to verify the consistency, investigate potential modelling error and demonstrate practical applications.

  17. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for the strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, the crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. To accentuate toughness differences, polymer matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms are discussed, especially for fiber-reinforced composites. PMID:25620817
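    The classical relations underlying this derivation can be sketched directly: the textbook Irwin/Griffith expressions K = Yσ√(πa) and, for plane stress, 𝒢 = K²/E. The inputs in the test are generic placeholders, not the composite data reported in the study.

```python
import math

def stress_intensity_factor(y, sigma, a):
    """K = Y * sigma * sqrt(pi * a), with geometry correction factor Y,
    applied stress sigma, and crack length a."""
    return y * sigma * math.sqrt(math.pi * a)

def energy_release_rate(k, modulus):
    """Plane-stress Griffith/Irwin relation G = K^2 / E."""
    return k ** 2 / modulus
```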

  18. Open cluster Dolidze 25: Stellar parameters and the metallicity in the Galactic anticentre

    NASA Astrophysics Data System (ADS)

    Negueruela, I.; Simón-Díaz, S.; Lorenzo, J.; Castro, N.; Herrero, A.

    2015-12-01

    Context. The young open cluster Dolidze 25, in the direction of the Galactic anticentre, has been attributed a very low metallicity, with typical abundances between -0.5 and -0.7 dex below solar. Aims: We intend to derive accurate cluster parameters and accurate stellar abundances for some of its members. Methods: We have obtained a large sample of intermediate- and high-resolution spectra for stars in and around Dolidze 25. We used the fastwind code to generate stellar atmosphere models to fit the observed spectra. We derive stellar parameters for a large number of OB stars in the area, and abundances of oxygen and silicon for a number of stars with spectral types around B0. Results: We measure low abundances in stars of Dolidze 25. For the three stars with spectral types around B0, we find 0.3 dex (Si) and 0.5 dex (O) below the values typical in the solar neighbourhood. These values, even though not as low as those given previously, confirm Dolidze 25 and the surrounding H ii region Sh2-284 as the most metal-poor star-forming environment known in the Milky Way. We derive a distance 4.5 ± 0.3 kpc to the cluster (rG ≈ 12.3 kpc). The cluster cannot be older than ~3 Myr, and likely is not much younger. One star in its immediate vicinity, sharing the same distance, has Si and O abundances at most 0.15 dex below solar. Conclusions: The low abundances measured in Dolidze 25 are compatible with currently accepted values for the slope of the Galactic metallicity gradient, if we take into account that variations of at least ±0.15 dex are observed at a given radius. The area traditionally identified as Dolidze 25 is only a small part of a much larger star-forming region that comprises the whole dust shell associated with Sh2-284 and very likely several other smaller H ii regions in its vicinity. Based on observations made with the Nordic Optical Telescope, the Mercator Telescope, and the telescopes of the Isaac Newton Group.

  19. Landsat phenological metrics and their relation to aboveground carbon in the Brazilian Savanna.

    PubMed

    Schwieder, M; Leitão, P J; Pinto, J R R; Teixeira, A M C; Pedroni, F; Sanchez, M; Bustamante, M M; Hostert, P

    2018-05-15

    The quantification and spatially explicit mapping of carbon stocks in terrestrial ecosystems is important to better understand the global carbon cycle and to monitor and report change processes, especially in the context of international policy mechanisms such as REDD+ or the implementation of Nationally Determined Contributions (NDCs) and the UN Sustainable Development Goals (SDGs). Accurate carbon quantifications are still lacking especially in heterogeneous ecosystems such as Savannas, where highly variable vegetation densities occur and a strong seasonality hinders consistent data acquisition. In order to account for these challenges, we analyzed the potential of land surface phenological metrics derived from gap-filled 8-day Landsat time series for carbon mapping. We selected three areas located in different subregions in the central Brazil region, which is a prominent example of a Savanna with significant carbon stocks that has been undergoing extensive land cover conversions. Here, phenological metrics from the season 2014/2015 were combined with aboveground carbon field samples of cerrado sensu stricto vegetation using Random Forest regression models to map the regional carbon distribution and to analyze the relation between phenological metrics and aboveground carbon. The gap-filling approach enabled accurate approximation of the original Landsat ETM+ and OLI EVI values and the subsequent derivation of annual phenological metrics. Random Forest model performances varied between the three study areas, with RMSE values of 1.64 t/ha (mean relative RMSE 30%), 2.35 t/ha (46%) and 2.18 t/ha (45%). Comparable relationships between remote sensing based land surface phenological metrics and aboveground carbon were observed in all study areas. Aboveground carbon distributions could be mapped and revealed comprehensible spatial patterns.
Phenological metrics were derived from 8-day Landsat time series with a spatial resolution that is sufficient to capture gradual changes in carbon stocks of heterogeneous Savanna ecosystems. These metrics revealed the relationship between aboveground carbon and the phenology of the observed vegetation. Our results suggest that metrics relating to the seasonal minimum and maximum values were the most influential variables and bear potential to improve spatially explicit mapping approaches in heterogeneous ecosystems, where both spatial and temporal resolutions are critical.
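    The accuracy metrics reported above (RMSE and mean relative RMSE) take only a few lines to compute. The sketch below defines relative RMSE as RMSE divided by the mean observed value, which matches the quoted percentages, though the paper's exact normalization is an assumption.

```python
import math

def rmse(predicted, observed):
    """Root mean square error between paired predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

def relative_rmse_percent(predicted, observed):
    """RMSE normalized by the mean observed value, expressed in percent."""
    mean_obs = sum(observed) / len(observed)
    return 100.0 * rmse(predicted, observed) / mean_obs
```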

  20. Accurate assessment and identification of naturally occurring cellular cobalamins

    PubMed Central

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V.; Moreira, Edward S.; Brasch, Nicola E.; Jacobsen, Donald W.

    2009-01-01

    Background Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo β-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Methods Experiments were designed to: 1) assess β-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable β-axial ligands. Results The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other as yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., “cold trapping”), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. Conclusions This demonstrates that β-ligand exchange occurs with non-covalently bound β-ligands. The exception to this observation is cyanocobalamin, with a non-covalently bound but non-exchangeable -CN group. It is now possible to obtain accurate profiles of cellular cobalamins. PMID:18973458

  1. A comparative study of charge transfer inefficiency value and trap parameter determination techniques making use of an irradiated ESA-Euclid prototype CCD

    NASA Astrophysics Data System (ADS)

    Prod'homme, Thibaut; Verhoeve, P.; Kohley, R.; Short, A.; Boudin, N.

    2014-07-01

    The science objectives of space missions using CCDs to carry out accurate astronomical measurements are put at risk by the radiation-induced increase in charge transfer inefficiency (CTI) that results from trapping sites in the CCD silicon lattice. A variety of techniques are used to obtain CTI values and derive trap parameters; however, they often differ in their results. To identify and understand these differences, we take advantage of an on-going comprehensive characterisation of an irradiated Euclid prototype CCD that includes the following techniques: X-ray, trap pumping, flat field extended pixel edge response and first pixel response. We then present a comparative analysis of the results obtained.
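    The quantity being measured can be illustrated with the elementary deferred-charge model: after n transfers, a charge packet retains Q(1 − CTI)^n, the remainder being left behind in trailing pixels. The numbers in the test are illustrative, not values from the characterisation described here.

```python
def remaining_charge(q0, cti, n_transfers):
    """Charge left in a packet after n_transfers transfers, each of which
    loses a fraction cti (the charge transfer inefficiency) to traps."""
    return q0 * (1.0 - cti) ** n_transfers
```

    Even a small CTI compounds over the thousands of transfers needed to read out a large device, which is why per-transfer inefficiencies of order 1e-4 noticeably distort faint astronomical sources.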

  2. Tensile stress-strain behavior of graphite/epoxy laminates

    NASA Technical Reports Server (NTRS)

    Garber, D. P.

    1982-01-01

    The tensile stress-strain behavior of a variety of graphite/epoxy laminates was examined. Longitudinal and transverse specimens from eleven different layups were monotonically loaded in tension to failure. Ultimate strength, ultimate strain, and stress-strain curves were obtained from four replicate tests in each case. Polynomial equations were fitted by the method of least squares to the stress-strain data to determine average curves. Values of Young's modulus and Poisson's ratio, derived from the polynomial coefficients, were compared with laminate analysis results. While the polynomials appeared to fit the stress-strain data accurately in most cases, the use of polynomial coefficients to calculate elastic moduli appeared to be of questionable value in cases involving sharp changes in the slope of the stress-strain data or extensive scatter.
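    The modulus extraction described above amounts to taking a least-squares slope over the initial linear region of the stress-strain curve. Below is a minimal sketch (ordinary least squares over an assumed linear-elastic region, not the paper's full polynomial procedure); the test data are hypothetical.

```python
def youngs_modulus(strains, stresses):
    """Least-squares slope of stress versus strain over the supplied
    (assumed linear-elastic) region; the slope is Young's modulus in
    whatever units the stresses are given."""
    n = len(strains)
    mean_x = sum(strains) / n
    mean_y = sum(stresses) / n
    return (sum((x - mean_x) * (y - mean_y) for x, y in zip(strains, stresses))
            / sum((x - mean_x) ** 2 for x in strains))
```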

  3. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.

  4. Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y

    Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold-standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. 
Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce a quantitative μ-map/emission estimate when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR derived μ-map with corrections, as demonstrated in this work.

  5. [Application of droplet digital PCR for non-invasive prenatal diagnosis of single gene disease in two families].

    PubMed

    Xu, Peiwen; Zou, Yang; Li, Jie; Huang, Sexin; Gao, Ming; Kang, Ranran; Xie, Hongqiang; Wang, Lijuan; Yan, Junhao; Gao, Yuan

    2018-04-10

    To assess the value of droplet digital PCR (ddPCR) for non-invasive prenatal diagnosis of single gene diseases in two families. Paternal mutations in cell-free DNA derived from maternal blood and in amniotic fluid DNA were detected by ddPCR. Suspected mutations in the amniotic fluid DNA were verified with Sanger sequencing. The results of ddPCR and Sanger sequencing indicated that the fetuses in both families had inherited the pathogenic mutations from the paternal side. Droplet digital PCR can accurately detect paternal mutations carried by the fetus, and it is sensitive and reliable for analyzing trace samples. This method may be applied for the diagnosis of single gene diseases caused by paternal mutations using peripheral blood samples derived from the mother.

  6. The Compressibility of a Natural Kyanite at 300 K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X.; Shieh, S; Fleet, M

    2009-01-01

    The compressional behaviour of a natural kyanite, (Al{sub 1.99}Fe{sub 0.01})SiO{sub 5}, has been investigated to about 17.5 GPa at 300 K using a diamond-anvil cell and synchrotron X-ray diffraction. The pressure-volume data fitted to the third-order Birch-Murnaghan equation of state (EoS) yield an isothermal bulk modulus (K{sub 0T}) of 192 {+-} 6 GPa and pressure derivative (K'{sub 0T}) of 6 {+-} 1. When K'{sub 0T} is fixed at 4, the derived K{sub 0T} is 201 {+-} 2 GPa. These values are in excellent agreement with most experimental determinations in the literature. Consequently, it can be concluded that the compressibility of kyanite under high pressures has been accurately constrained.
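
    The fitted parameters above plug into the standard third-order Birch-Murnaghan form, P = (3/2)K0[(V0/V)^(7/3) − (V0/V)^(5/3)]·{1 + (3/4)(K′0 − 4)[(V0/V)^(2/3) − 1]}. A minimal sketch of that textbook expression (not code from the study), using the reported K0T = 192 GPa and K′0T = 6:

```python
def birch_murnaghan_3rd(V, V0, K0, K0p):
    """Pressure (GPa) from the third-order Birch-Murnaghan equation of state."""
    x = (V0 / V) ** (1.0 / 3.0)  # linear compression ratio
    return 1.5 * K0 * (x**7 - x**5) * (1.0 + 0.75 * (K0p - 4.0) * (x**2 - 1.0))

# Parameters reported for kyanite: K0T = 192 GPa, K'0T = 6.
print(birch_murnaghan_3rd(1.00, 1.0, 192.0, 6.0))  # 0.0 (no compression, no pressure)
print(birch_murnaghan_3rd(0.95, 1.0, 192.0, 6.0))  # ~11.5 GPa at 5% volume compression
```

    Fitting this function to measured P-V pairs (e.g. by nonlinear least squares) is how K0T and K′0T are extracted from the diffraction data.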

  7. A Neural Network Model for K(λ) Retrieval and Application to Global K par Monitoring

    PubMed Central

    Chen, Jun; Zhu, Yuanli; Wu, Yongsheng; Cui, Tingwei; Ishizaka, Joji; Ju, Yongtao

    2015-01-01

    Accurate estimation of the diffuse attenuation coefficients in the visible wavelengths, Kd(λ), from remotely sensed data is particularly challenging in global oceanic and coastal waters. The objectives of the present study are to evaluate the applicability of a semi-analytical Kd(λ) retrieval model (SAKM) and Jamet’s neural network model (JNNM), and then develop a new neural network Kd(λ) retrieval model (NNKM). Based on the comparison of Kd(λ) predicted by these models with in situ measurements taken from global oceanic and coastal waters, all of the NNKM, SAKM, and JNNM models work well in Kd(λ) retrievals, but the NNKM model performs more stably and accurately than both the SAKM and JNNM models. The near-infrared band-based and shortwave infrared band-based combined model is used to remove the atmospheric effects on MODIS data. The Kd(λ) data were determined from the atmospherically corrected MODIS data using the NNKM, JNNM, and SAKM models. The results show that the NNKM model produces <30% uncertainty in deriving Kd(λ) from global oceanic and coastal waters, which is 4.88-17.18% more accurate than the SAKM and JNNM models. Furthermore, we employ an empirical approach to calculate Kpar from the NNKM model-derived diffuse attenuation coefficients at visible bands (443, 488, 555, and 667 nm). The results show that our model presents a satisfactory performance in deriving Kpar from global oceanic and coastal waters with 20.2% uncertainty. Kpar was also quantified from atmospherically corrected MODIS data using our model. Compared with field measurements, our model produces ~31.0% uncertainty in deriving Kpar from the Bohai Sea. Finally, the applicability of our model for general oceanographic studies is briefly illustrated by applying it to climatological monthly mean remote sensing reflectance for the period July 2002 to July 2014 at the global scale. 
The results indicate that high Kd(λ) and Kpar values are usually found around coastal zones in high latitude regions, while low Kd(λ) and Kpar values are usually found in the open oceans around low-latitude regions. These results could improve our knowledge of the underwater light field at both global and basin scales, and could potentially be used in general circulation models to estimate the heat flux between the atmosphere and the ocean. PMID:26083341

  8. Unipolar Endocardial Voltage Mapping in the Right Ventricle: Optimal Cutoff Values Correcting for Computed Tomography-Derived Epicardial Fat Thickness and Their Clinical Value for Substrate Delineation.

    PubMed

    Venlet, Jeroen; Piers, Sebastiaan R D; Kapel, Gijsbert F L; de Riva, Marta; Pauli, Philippe F G; van der Geest, Rob J; Zeppenfeld, Katja

    2017-08-01

    Low endocardial unipolar voltage (UV) at sites with normal bipolar voltage (BV) may indicate epicardial scar. Currently applied UV cutoff values are based on studies that lacked epicardial fat information. This study aimed to define endocardial UV cutoff values using computed tomography-derived fat information and to analyze their clinical value for right ventricular substrate delineation. Thirty-three patients (50±14 years; 79% men) underwent combined endocardial-epicardial right ventricular electroanatomical mapping and ablation of right ventricular scar-related ventricular tachycardia with computed tomographic image integration, including computed tomography-derived fat thickness. Of 6889 endocardial-epicardial mapping point pairs, 547 (8%) pairs with distance <10 mm and fat thickness <1.0 mm were analyzed for voltage and abnormal (fragmented/late potential) electrogram characteristics. At sites with endocardial BV >1.50 mV, the optimal endocardial UV cutoff for identification of epicardial BV <1.50 mV was 3.9 mV (area under the curve, 0.75; sensitivity, 60%; specificity, 79%) and cutoff for identification of abnormal epicardial electrogram was 3.7 mV (area under the curve, 0.88; sensitivity, 100%; specificity, 67%). The majority of abnormal electrograms (130 of 151) were associated with transmural scar. Eighty-six percent of abnormal epicardial electrograms had corresponding endocardial sites with BV <1.50 mV, and the remaining could be identified by corresponding low endocardial UV <3.7 mV. For identification of epicardial right ventricular scar, an endocardial UV cutoff value of 3.9 mV is more accurate than previously reported cutoff values. Although the majority of epicardial abnormal electrograms are associated with transmural scar with low endocardial BV, the additional use of endocardial UV at normal BV sites improves the diagnostic accuracy resulting in identification of all epicardial abnormal electrograms at sites with <1.0 mm fat. 
© 2017 American Heart Association, Inc.
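
    Cutoffs reported with paired sensitivity and specificity, as above, are typically read off an ROC curve; one common selection rule (an illustration of the general technique, not necessarily this study's exact procedure) maximizes Youden's J = sensitivity + specificity − 1. A sketch with hypothetical voltage data, where a site tests positive for scar when its unipolar voltage falls below the cutoff:

```python
def youden_cutoff(values, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
    labels: 1 = epicardial scar, 0 = normal; a site tests positive for scar
    when its unipolar voltage falls below the cutoff."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        sens = sum(v < cut for v in pos) / len(pos)
        spec = sum(v >= cut for v in neg) / len(neg)
        if sens + spec - 1.0 > best_j:
            best_j, best_cut = sens + spec - 1.0, cut
    return best_cut, best_j

# Hypothetical unipolar voltages (mV): scar sites run low, normal sites high.
scar = [2.1, 2.8, 3.0, 3.5, 4.1]
normal = [3.6, 4.2, 4.8, 5.5, 6.0]
cut, j = youden_cutoff(scar + normal, [1] * 5 + [0] * 5)
print(cut, round(j, 2))  # 3.6 0.8
```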

  9. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data, as well as estimation of geophysical parameters from SAR data, have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
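
    The geometric core of the correction described above is the local incidence angle: the angle between the radar look direction and the surface normal derived from DEM slopes. The sketch below shows only that geometry (the slopes and look vector are illustrative; the full AIRSAR processing also folds in aircraft position and attitude):

```python
import math

def local_incidence_angle(dz_dx, dz_dy, look_vec):
    """Angle (degrees) between the radar look vector and the terrain surface
    normal computed from DEM slopes dz/dx and dz/dy."""
    n = (-dz_dx, -dz_dy, 1.0)  # upward normal of the plane z = (dz/dx)x + (dz/dy)y
    n_len = math.sqrt(sum(c * c for c in n))
    # look_vec is a unit vector pointing from the antenna down to the ground
    cos_theta = -sum(a * b for a, b in zip(n, look_vec)) / n_len
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# Radar looking 35 degrees off nadir over flat terrain: theta equals the look angle.
look = (math.sin(math.radians(35.0)), 0.0, -math.cos(math.radians(35.0)))
print(round(local_incidence_angle(0.0, 0.0, look), 1))  # 35.0
# A facet sloping up by 10 degrees in the look direction tilts toward the radar.
print(round(local_incidence_angle(math.tan(math.radians(10.0)), 0.0, look), 1))  # 25.0
```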

  10. Derivation of guideline values for gold (III) ion toxicity limits to protect aquatic ecosystems.

    PubMed

    Nam, Sun-Hwa; Lee, Woo-Mi; Shin, Yu-Jin; Yoon, Sung-Ji; Kim, Shin Woong; Kwak, Jin Il; An, Youn-Joo

    2014-01-01

    This study focused on estimating the toxicity values of various aquatic organisms exposed to the gold (III) ion (Au(3+)), and on proposing maximum guideline values for Au(3+) toxicity that protect the aquatic ecosystem. A comparative assessment of methods developed in Australia and New Zealand versus the European Community (EC) was conducted. The test species used in this study included two bacteria (Escherichia coli and Bacillus subtilis), one alga (Pseudokirchneriella subcapitata), one euglena (Euglena gracilis), three cladocerans (Daphnia magna, Moina macrocopa, and Simocephalus mixtus), and two fish (Danio rerio and Oryzias latipes). Au(3+) induced growth inhibition, mortality, immobilization, and/or developmental malformations in all test species, with responses being concentration-dependent. According to the moderate reliability method of Australia and New Zealand, guideline values for Au(3+) of 0.006 and 0.075 mg/L were obtained by dividing the HC5 and HC50 values of 0.33 and 4.46 mg/L, derived from species sensitivity distributions (SSD), by a final acute-to-chronic ratio (FACR) of 59.09. In contrast, the EC method uses an assessment factor (AF): a guideline value for Au(3+) of 0.0006 mg/L was obtained by dividing the 48-h EC50 of 0.60 mg/L (the lowest toxicity value obtained from the short-term results) by an AF of 1000. The Au(3+) guideline value derived using an AF was more stringent than that derived from the SSD. We recommend that more toxicity data using various bioassays are required to develop more accurate ecological risk assessments. More chronic/long-term exposure studies on sensitive endpoints using additional fish species and invertebrates not included in the current dataset will be needed to use other derivation methods (e.g., US EPA and Canadian Type A) or the "High Reliability Method" from Australia/New Zealand. Such research would facilitate the establishment of guideline values for various pollutants that reflect the universal effects of various pollutants in aquatic ecosystems. 
To the best of our knowledge, this is the first study to suggest guideline values for Au(3+) levels permitted to enter freshwater environments. Copyright © 2013 Elsevier Ltd. All rights reserved.
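
    The two derivations above are simple divisions, and the reported numbers can be reproduced directly from the values stated in the abstract:

```python
# Reproducing the guideline-derivation arithmetic reported in the abstract.
FACR = 59.09             # final acute-to-chronic ratio
hc5, hc50 = 0.33, 4.46   # mg/L, from the species sensitivity distribution
guideline_hc5 = hc5 / FACR    # ~ 0.006 mg/L
guideline_hc50 = hc50 / FACR  # ~ 0.075 mg/L

AF = 1000                # assessment factor (EC method)
ec50_48h = 0.60          # mg/L, lowest short-term toxicity value
guideline_af = ec50_48h / AF  # 0.0006 mg/L

print(round(guideline_hc5, 3), round(guideline_hc50, 3), round(guideline_af, 4))
assert guideline_af < guideline_hc5  # the AF-based value is the more stringent one
```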

  11. 3D surface voxel tracing corrector for accurate bone segmentation.

    PubMed

    Guo, Haoyan; Song, Sicong; Wang, Jinke; Guo, Maozu; Cheng, Yuanzhi; Wang, Yadong; Tamura, Shinichi

    2018-06-18

    For extremely close bones, their boundaries are weak and diffuse due to strong interaction between adjacent surfaces. These factors prevent the accurate segmentation of bone structure. To alleviate these difficulties, we propose an automatic method for accurate bone segmentation. The method is based on a consideration of the 3D surface normal direction, which is used to detect the bone boundary in 3D CT images. Our segmentation method is divided into three main stages. Firstly, we use a surface tracing corrector combined with a Gaussian standard deviation [Formula: see text] to improve the estimation of the normal direction. Secondly, we determine an optimal value of [Formula: see text] for each surface point during this normal direction correction. Thirdly, we construct a 1D signal and refine the rough boundary along the corrected normal direction. The value of [Formula: see text] is used in the first directional derivative of the Gaussian to refine the location of the edge point along the accurate normal direction. Because the normal direction is corrected and the value of [Formula: see text] is optimized, our method is robust to noisy images and to the narrow joint spaces caused by joint degeneration. We applied our method to 15 wrists and 50 hip joints for evaluation. In the wrist segmentation, a Dice overlap coefficient (DOC) of [Formula: see text]% was obtained by our method. In the hip segmentation, fivefold cross-validations were performed for two state-of-the-art methods: 40 hip joints were used for training and 10 for testing and comparison. DOCs of [Formula: see text], [Formula: see text]%, and [Formula: see text]% were achieved by our method for the pelvis, the left femoral head, and the right femoral head, respectively. Our method was shown to improve segmentation accuracy for several specific challenging cases. 
The results demonstrate that our approach achieved superior accuracy compared with the two state-of-the-art methods.
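
    The edge-refinement step described above, locating a boundary as the extremum of a derivative-of-Gaussian response along a 1D intensity profile, is a standard technique and can be sketched as follows (the profile and sigma are invented; the paper's per-point sigma optimization and normal correction are omitted):

```python
import math

def dog_kernel(sigma):
    """Sampled first derivative of a Gaussian, g'(x) = -(x / sigma**2) * g(x)."""
    r = int(3 * sigma)
    return [-(x / sigma**2) * math.exp(-x * x / (2.0 * sigma**2))
            for x in range(-r, r + 1)]

def locate_edge(signal, sigma):
    """Refine an edge position in a 1D profile (intensities sampled along the
    surface normal) as the extremum of the derivative-of-Gaussian response."""
    k = dog_kernel(sigma)
    r = len(k) // 2
    responses = [sum(k[j + r] * signal[i - j] for j in range(-r, r + 1))
                 for i in range(r, len(signal) - r)]
    best = max(range(len(responses)), key=lambda i: abs(responses[i]))
    return best + r

# A falling step edge centred at index 10 of a 21-sample profile.
profile = [100.0] * 10 + [50.0] + [0.0] * 10
print(locate_edge(profile, 2.0))  # 10
```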

  12. An algorithm to extract more accurate stream longitudinal profiles from unfilled DEMs

    NASA Astrophysics Data System (ADS)

    Byun, Jongmin; Seong, Yeong Bae

    2015-08-01

    Morphometric features observed from a stream longitudinal profile (SLP) reflect channel responses to lithological variation and to changes in uplift or climate; therefore, they constitute essential indicators in studies of the dynamics between tectonics, climate, and surface processes. The widespread availability of digital elevation models (DEMs) and their processing enables semi-automatic extraction of SLPs as well as additional stream profile parameters, thus reducing the time spent extracting them and simultaneously allowing regional-scale studies of SLPs. However, careful consideration is required when extracting SLPs directly from a DEM, because the DEM must be altered by a depression-filling process to ensure the continuity of flows across it. Such alteration inevitably introduces distortions to the SLP, such as stair steps, bias of elevation values, and inaccurate stream paths. This paper proposes a new algorithm, called the maximum depth tracing algorithm (MDTA), to extract more accurate SLPs from depression-unfilled DEMs. The MDTA supposes that depressions in DEMs are not necessarily artifacts to be removed, and that elevation values within them are useful for representing the real landscape more accurately. To ensure the continuity of flows even across the unfilled DEM, the MDTA first determines the outlet of each depression and then reverses the flow directions of the cells on the line of maximum depth within each depression, beginning from the outlet and moving toward the sink. It also calculates flow accumulation without disruption across the unfilled DEM. Comparative analysis with the profiles extracted by the hydrologic functions implemented in ArcGIS™ was performed to illustrate the benefits of the MDTA. 
It shows that the MDTA provides more accurate stream paths in depression areas, and consequently reduces distortions of the SLPs derived from those paths, such as the exaggerated elevation values and negatively biased slopes commonly observed in SLPs built using ArcGIS™. The algorithm proposed here, therefore, could aid all studies requiring more reliable stream paths and SLPs from DEMs.
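
    The bias that motivates the MDTA is easy to see on a toy 1D profile: depression filling raises every cell below the rim to the outlet level, producing a stair step and positively biased elevations, whereas tracing the line of maximum depth through the unfilled DEM keeps the real values. All numbers below are invented for illustration:

```python
# Toy 1D stream profile with a depression at indices 3-6; index 7 is the rim.
profile = [50.0, 46.0, 44.0, 40.0, 38.0, 39.0, 41.0, 43.0, 36.0, 30.0]

outlet_level = 43.0  # lowest point on the depression's rim
filled = [max(z, outlet_level) if 3 <= i <= 7 else z
          for i, z in enumerate(profile)]

print(filled[3:8])   # [43.0, 43.0, 43.0, 43.0, 43.0]  <- stair step after filling
print(profile[3:8])  # [40.0, 38.0, 39.0, 41.0, 43.0]  <- preserved by the unfilled DEM
bias = sum(f - p for f, p in zip(filled, profile))
print(bias)          # 14.0 total metres of elevation added by filling
```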

  13. Statistical analysis of radiation dose derived from ingestion of foods

    NASA Astrophysics Data System (ADS)

    Dougherty, Ward L.

    2001-09-01

    This analysis undertook the task of designing and implementing a methodology to determine an individual's probabilistic radiation dose from ingestion of foods, utilizing Crystal Ball. A dietary intake model was determined by comparing previously existing models. Two principal radionuclides were considered: Lead-210 (Pb-210) and Radium-226 (Ra-226). Samples from three different local grocery stores (Publix, Winn Dixie, and Albertsons) were counted on a gamma spectroscopy system with a GeLi detector. The same food samples were considered as those in the original FIPR database. A statistical analysis, utilizing the Crystal Ball program, was performed on the data to assess the most accurate distribution to use for these data. This allowed a determination of a radiation dose to an individual based on the above information. Based on the analyses performed, the Radium-226 radiation dose was lower for grocery store samples than for the FIPR debris analyses, 2.7 vs. 5.91 mrem/yr. Lead-210 had a higher dose in the grocery store samples than in the FIPR debris analyses, 21.4 mrem/yr for FIPR vs. 518 mrem/yr for grocery samples. The output radiation dose was higher in all evaluations when accurate estimates of the distributions for each value were considered. The Radium-226 radiation doses for the FIPR and grocery data rose to 9.56 and 4.38 mrem/yr, respectively. Radiation doses from ingestion of Pb-210 rose to 34.7 and 854 mrem/yr for the FIPR and grocery data, respectively. Lead-210 doses were higher than the initial doses for several reasons: a different peak was examined, and the lower edge of the detection limit and the minimum detectable concentration were considered. FIPR did not utilize grocery samples as a control because the calculated radiation doses appeared unreasonably high. Consideration of distributions along with the initial values allowed re-evaluation of the radiation dose and showed a significant difference from the original deterministic values. 
This work shows the value and importance of considering distributions to ensure that a person's radiation dose is accurately calculated. Probabilistic dose methodology proved to be a more accurate and realistic method of radiation dose determination. This type of methodology provides a visual presentation of the dose distribution that can be a vital aid in risk methodology.
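
    Crystal Ball drives this kind of analysis by Monte Carlo sampling over input distributions. The effect reported above, probabilistic doses exceeding the deterministic point estimates, can be illustrated with a toy model in which the food concentration follows a lognormal distribution; every number here is invented for illustration, not the study's data:

```python
import random
import statistics

random.seed(1)

def annual_dose_mrem(intake_kg_per_yr, ln_mu, ln_sigma, dcf_mrem_per_pci):
    """One Monte Carlo realization of an annual ingestion dose: intake times
    a lognormally distributed concentration (pCi/kg) times a dose coefficient."""
    concentration = random.lognormvariate(ln_mu, ln_sigma)
    return intake_kg_per_yr * concentration * dcf_mrem_per_pci

doses = [annual_dose_mrem(100.0, 0.0, 0.5, 0.004) for _ in range(10000)]
deterministic = 100.0 * 1.0 * 0.004  # point estimate using the median concentration
print(round(statistics.mean(doses), 3), deterministic)
# The mean (and upper percentiles) of the sampled distribution exceed the
# deterministic point estimate, mirroring the dose increases reported above.
```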

  14. Absolute and relative height-pixel accuracy of SRTM-GL1 over the South American Andean Plateau

    NASA Astrophysics Data System (ADS)

    Satge, Frédéric; Denezine, Matheus; Pillco, Ramiro; Timouk, Franck; Pinel, Sébastien; Molina, Jorge; Garnier, Jérémie; Seyler, Frédérique; Bonnet, Marie-Paule

    2016-11-01

    Previously available only over the Continental United States (CONUS), the 1 arc-second mesh size (spatial resolution) SRTM-GL1 (Shuttle Radar Topographic Mission - Global 1) product has been freely available worldwide since November 2014. With a relatively small mesh size, this digital elevation model (DEM) provides valuable topographic information over remote regions. SRTM-GL1 is assessed for the first time over the South American Andean Plateau in terms of both the absolute and relative vertical point-to-point accuracies at the regional scale and for different slope classes. For comparison, SRTM-v4 and the Global DEM version 2 (GDEM-v2) generated by ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) are also considered. A total of approximately 160,000 ICESat/GLAS (Ice, Cloud and Land Elevation Satellite/Geoscience Laser Altimeter System) data points are used as ground reference measurements. Relative error is often neglected in DEM assessments due to the lack of reference data. A new methodology is proposed to assess the relative accuracies of SRTM-GL1, SRTM-v4 and GDEM-v2 based on a comparison with ICESat/GLAS measurements. Slope values derived from the DEMs and from ICESat/GLAS measurements for approximately 265,000 ICESat/GLAS point pairs are compared using quantitative and categorical statistical analysis, introducing a new index: the False Slope Ratio (FSR). Additionally, a reference hydrological network is derived from Google Earth and compared with river networks derived from the DEMs to assess each DEM's potential for hydrological applications over the region. In terms of the absolute vertical accuracy on a global scale, GDEM-v2 is the most accurate DEM, while SRTM-GL1 is more accurate than SRTM-v4. However, a simple bias correction makes SRTM-GL1 the most accurate DEM over the region in terms of vertical accuracy. The relative accuracy results generally did not corroborate the absolute vertical accuracy. 
GDEM-v2 presents the lowest statistical results based on the relative accuracy, while SRTM-GL1 is the most accurate. Vertical accuracy and relative accuracy are two independent components that must be jointly considered when assessing a DEM's potential. DEM accuracies increased with slope. In terms of hydrological potential, SRTM products are more accurate than GDEM-v2. However, the DEMs exhibit river extraction limitations over the region due to the low regional slope gradient.

  15. The effects of temperature and pressure on airborne exposure concentrations when performing compliance evaluations using ACGIH TLVs and OSHA PELs.

    PubMed

    Stephenson, D J; Lillquist, D R

    2001-04-01

    Occupational hygienists perform air sampling to characterize airborne contaminant emissions, assess occupational exposures, and establish allowable workplace airborne exposure concentrations. To perform these air sampling applications, occupational hygienists often compare an airborne exposure concentration to a corresponding American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) or an Occupational Safety and Health Administration (OSHA) permissible exposure limit (PEL). To perform such comparisons, one must understand the physiological assumptions used to establish these occupational exposure limits, the relationship between a workplace airborne exposure concentration and its associated TLV or PEL, and the effect of temperature and pressure on the performance of an accurate compliance evaluation. This article illustrates the correct procedure for performing compliance evaluations using airborne exposure concentrations expressed in both parts per million and milligrams per cubic meter. In so doing, a brief discussion is given on the physiological assumptions used to establish TLVs and PELs. It is further shown how an accurate compliance evaluation is fundamentally based on comparison of a measured work site exposure dose (derived from the sampling site exposure concentration estimate) to an estimated acceptable exposure dose (derived from the occupational exposure limit concentration). In addition, this article correctly illustrates the effect that atmospheric temperature and pressure have on airborne exposure concentrations and the eventual performance of a compliance evaluation. This article also reveals that under fairly moderate conditions of temperature and pressure, 30 degrees C and 670 torr, a misunderstanding of how varying atmospheric conditions affect concentration values can lead to a 15 percent error in assessing compliance.
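
    The conversion at the heart of such a compliance evaluation is C(mg/m³) = ppm × MW / Vm, where the ideal-gas molar volume Vm is 24.45 L/mol at 25 °C and 760 torr and scales with T/P otherwise. A sketch (the contaminant, its molecular weight, and the 50 ppm level are illustrative) reproducing the roughly 15 percent difference at 30 °C and 670 torr noted above:

```python
def molar_volume_L(temp_C, pressure_torr):
    """Ideal-gas molar volume (L/mol) at the given temperature and pressure."""
    return 22.414 * ((temp_C + 273.15) / 273.15) * (760.0 / pressure_torr)

def ppm_to_mg_m3(ppm, mol_weight, temp_C=25.0, pressure_torr=760.0):
    """Convert a gas-phase concentration from ppm to mg/m^3."""
    return ppm * mol_weight / molar_volume_L(temp_C, pressure_torr)

# 50 ppm of toluene (MW 92.14 g/mol) at NTP versus at 30 C and 670 torr:
ntp = ppm_to_mg_m3(50, 92.14)                                      # ~188 mg/m^3
field = ppm_to_mg_m3(50, 92.14, temp_C=30.0, pressure_torr=670.0)  # ~163 mg/m^3
print(round(ntp, 1), round(field, 1), round(ntp / field - 1.0, 2))  # difference ~15%
```

    Comparing a field measurement in mg/m³ against a limit tabulated at NTP without this adjustment is exactly the ~15 percent compliance error the article warns about.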

  16. Emotion and decision making: multiple modulatory neural circuits.

    PubMed

    Phelps, Elizabeth A; Lempert, Karolina M; Sokol-Hessner, Peter

    2014-01-01

    Although the prevalent view of emotion and decision making is derived from the notion that there are dual systems of emotion and reason, a modulatory relationship more accurately reflects the current research in affective neuroscience and neuroeconomics. Studies show two potential mechanisms for affect's modulation of the computation of subjective value and decisions. Incidental affective states may carry over to the assessment of subjective value and the decision, and emotional reactions to the choice may be incorporated into the value calculation. In addition, this modulatory relationship is reciprocal: Changing emotion can change choices. This research suggests that the neural mechanisms mediating the relation between affect and choice vary depending on which affective component is engaged and which decision variables are assessed. We suggest that a detailed and nuanced understanding of emotion and decision making requires characterizing the multiple modulatory neural circuits underlying the different means by which emotion and affect can influence choices.

  17. Trabecular bone score (TBS): Method and applications.

    PubMed

    Martineau, P; Leslie, W D

    2017-11-01

    Trabecular bone score (TBS) is a texture index derived from standard lumbar spine dual energy X-ray absorptiometry (DXA) images and provides information about the underlying bone independent of the bone mineral density (BMD). Several salient observations have emerged. Numerous studies have examined the relationship between TBS and fracture risk and have shown that lower TBS values are associated with increased risk for major osteoporotic fracture in postmenopausal women and older men, with this result being independent of BMD values and other clinical risk factors. Therefore, despite being derived from standard DXA images, the information contained in TBS is independent and complementary to the information provided by BMD and the FRAX® tool. A procedure to generate TBS-adjusted FRAX probabilities has become available with the resultant predicted fracture risks shown to be more accurate than the standard FRAX tool. With these developments, TBS has emerged as a clinical tool for improved fracture risk prediction and guiding decisions regarding treatment initiation, particularly for patients with FRAX probabilities around an intervention threshold. In this article, we review the development, validation, clinical application, and limitations of TBS. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. On the Prony series representation of stretched exponential relaxation

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Mauro, Yihong Z.

    2018-09-01

    Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
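
    The approximation discussed above replaces exp(-(t/τ)^β) with a finite sum Σᵢ wᵢ exp(-t/τᵢ). A minimal sketch of the idea using plain least squares on log-spaced relaxation times, with β = 3/7, one of the critical exponents discussed in the diffusion-trap picture (the paper's optimized coefficients are more careful than this):

```python
import numpy as np

def prony_fit(beta, n_terms, t):
    """Least-squares Prony-series approximation  f(t) ~ sum_i w_i exp(-t/tau_i)
    of the stretched exponential exp(-t**beta) (tau = 1), using log-spaced
    relaxation times tau_i."""
    target = np.exp(-t ** beta)
    tau_i = np.logspace(-3, 3, n_terms)
    A = np.exp(-t[:, None] / tau_i[None, :])   # design matrix of simple exponentials
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    max_err = np.max(np.abs(A @ w - target))
    return w, max_err

t = np.logspace(-3, 3, 400)
w12, err12 = prony_fit(3.0 / 7.0, n_terms=12, t=t)
w4, err4 = prony_fit(3.0 / 7.0, n_terms=4, t=t)
print(err4, err12)  # more terms track the decay (fat tail included) more closely
```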

  19. On the Use of Topside RO-Derived Electron Density for Model Validation

    NASA Astrophysics Data System (ADS)

    Shaikh, M. M.; Nava, B.; Haralambous, H.

    2018-05-01

    In this work, the standard Abel inversion has been exploited as a powerful observation tool that may help to model the topside of the ionosphere and therefore to validate ionospheric models. A thorough investigation of the behavior of radio occultation (RO)-derived topside electron density (Ne(h)) profiles has therefore been performed, with the main purpose of understanding whether it is possible to predict the accuracy of a single RO-retrieved topside by comparing the peak density and height of the retrieved profile to the true values. As a first step, a simulation study based on the use of the NeQuick2 model was performed to show that when the RO-derived electron density peak and height match the true peak values, the full topside Ne(h) profile may be considered accurate. To validate this hypothesis with experimental data, electron density profiles obtained from four different incoherent scatter radars were therefore considered together with co-located RO-derived Ne(h) profiles. The evidence presented in this paper shows that in all cases examined, if the incoherent scatter radar profile and the corresponding co-located RO profile have matching peak parameter values, their topsides are in very good agreement. The simulation results presented in this work also highlight the importance of considering the occultation plane azimuth while inverting RO data to obtain the Ne(h) profile. In particular, they indicate that there is a preferred range of azimuths of the occultation plane (80°-100°) for which the difference between the "true" and the RO-retrieved Ne(h) profile in the topside is generally minimal.

  20. Assessment of body fat in the pony: part II. Validation of the deuterium oxide dilution technique for the measurement of body fat.

    PubMed

    Dugdale, A H A; Curtis, G C; Milne, E; Harris, P A; Argo, C Mc

    2011-09-01

    Excessive accumulations or depletions of body fat have been associated with increased morbidity and mortality in horses and ponies. An objective, minimally-invasive method to accurately quantify body fat in living animals is required to aid nutritional management and define welfare/performance limits. To compare deuterium oxide (D(2)O) dilution-derived estimates of total body water (TBW) and body fat with values obtained by 'gold standard' proximate analysis and cadaver dissection. D(2)O dilution offers a valid method for the determination of TBW and body fat in equids. Seven mature (mean ± s.e. 13 ± 3 years, 212 ± 14 kg, body condition scores 1.25-7/9), healthy, Welsh Mountain pony mares, destined for euthanasia (for nonresearch purposes) were used. Blood samples were collected before and 4 h after D(2)O (0.11-0.13 g/kg bwt, 99.8 atom percent excess) administration. Plasma was analysed by gas isotope ratio mass spectrometry following filtration and zinc reduction. After euthanasia, white adipose tissue (WAT) mass was recorded before all body tissues were analysed by proximate chemical analyses. D(2)O-derived estimates of TBW and body fat were strongly associated with proximate analysis- and dissection-derived values (all r(2) >0.97, P≤0.0001). Bland-Altman analyses demonstrated good agreements between methods. D(2)O dilution slightly overestimated TBW (0.79%, limits of agreement (LoA) -3.75-2.17%) and underestimated total body lipid (1.78%, LoA -0.59-4.15%) and dissected WAT (0.72%, LoA -2.77-4.21%). This study provides the first validation of the D(2)O dilution method for the minimally-invasive, accurate, repeatable and objective measurement of body water and fat in living equids. © 2011 EVJ Ltd.
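The Bland-Altman agreement summary used above (the bias of the paired differences and its 95% limits of agreement) can be sketched in a few lines; the paired values below are hypothetical, not the pony data.

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired TBW estimates (% body mass): dilution vs. reference method
d2o = [62.1, 60.4, 58.9, 63.0, 61.2, 59.8, 60.7]
ref = [61.5, 60.0, 59.4, 62.1, 60.9, 60.2, 60.1]
bias, (lo, hi) = bland_altman(d2o, ref)
```

A small bias with narrow limits of agreement is what the study reports as "good agreement" between D(2)O dilution and proximate analysis.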

  1. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced: two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
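The plug-in method evaluated above can be sketched in a few lines: estimate the empirical distribution of length-k words and divide the block entropy by k. This is a generic sketch (word length and sample size are arbitrary choices); it also illustrates the undersampling caveat in conclusion (iv): the number of words must be large relative to 2^k for the empirical distribution to be reliable.

```python
import math
import random
from collections import Counter

def plugin_entropy_rate(bits, word_len):
    """Plug-in estimate of the entropy rate (bits/symbol): empirical entropy
    of overlapping words of length word_len, divided by word_len."""
    words = [tuple(bits[i:i + word_len]) for i in range(len(bits) - word_len + 1)]
    counts = Counter(words)
    n = len(words)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / word_len

# For an i.i.d. fair coin the true entropy rate is 1 bit/symbol
random.seed(0)
fair_coin = [random.randint(0, 1) for _ in range(20000)]
h_est = plugin_entropy_rate(fair_coin, word_len=5)
```

With 20,000 symbols and 5-bit words (32 cells, hundreds of counts per cell) the estimate sits just below 1 bit/symbol; the downward bias grows quickly as word_len increases at fixed sample size.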

  2. Refining Ovarian Cancer Test accuracy Scores (ROCkeTS): protocol for a prospective longitudinal test accuracy study to validate new risk scores in women with symptoms of suspected ovarian cancer

    PubMed Central

    Sundar, Sudha; Rick, Caroline; Dowling, Francis; Au, Pui; Rai, Nirmala; Champaneria, Rita; Stobart, Hilary; Neal, Richard; Davenport, Clare; Mallett, Susan; Sutton, Andrew; Kehoe, Sean; Timmerman, Dirk; Bourne, Tom; Van Calster, Ben; Gentry-Maharaj, Aleksandra; Deeks, Jon

    2016-01-01

    Introduction Ovarian cancer (OC) is associated with non-specific symptoms such as bloating, making accurate diagnosis challenging: only 1 in 3 women with OC presents through primary care referral. National Institute for Health and Care Excellence guidelines recommend sequential testing with CA125 and routine ultrasound in primary care. However, these diagnostic tests have limited sensitivity or specificity. Improving accurate triage in women with vague symptoms is likely to reduce mortality by streamlining referral and care pathways. The Refining Ovarian Cancer Test Accuracy Scores (ROCkeTS; HTA 13/13/01) project will derive and validate new tests/risk prediction models that estimate the probability of having OC in women with symptoms. This protocol refers to the prospective study only (phase III). Methods and analysis ROCkeTS comprises four parallel phases. The full ROCkeTS protocol can be found at http://www.birmingham.ac.uk/ROCKETS. Phase III is a prospective test accuracy study. The study will recruit 2450 patients from 15 UK sites. Recruited patients complete symptom and anxiety questionnaires, donate a serum sample and undergo ultrasound scored as per International Ovarian Tumour Analysis (IOTA) criteria. Recruitment is at rapid access clinics, emergency departments and elective clinics. Models to be evaluated include those based on ultrasound derived by the IOTA group and novel models derived from analysis of existing data sets. Estimates of sensitivity, specificity, c-statistic (area under receiver operating curve), positive predictive value and negative predictive value of diagnostic tests are evaluated and a calibration plot for models will be presented. ROCkeTS has received ethical approval from the NHS West Midlands REC (14/WM/1241) and is registered on the controlled trials website (ISRCTN17160843) and the National Institute for Health Research Cancer and Reproductive Health portfolios. PMID:27507231

  3. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived with the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
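Once an image has been binarized into sky and canopy pixels (the hard part, which the paper addresses via exposure handling and scattering correction), the gap fraction itself is just a pixel count. A toy sketch with a hypothetical binary mask:

```python
def gap_fraction(mask):
    """Gap fraction = share of sky pixels in a binarized canopy image.
    mask: 2D list with 1 = sky (gap) and 0 = canopy."""
    total = sum(len(row) for row in mask)
    sky = sum(sum(row) for row in mask)
    return sky / total

# Hypothetical 3x4 binarized image: 6 sky pixels out of 12
toy_mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 0],
]
gf = gap_fraction(toy_mask)
```

The paper's contribution is making the binarization step objective, so that this final count is insensitive to REV and sky conditions.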

  4. Evaluating the Assumptions of Surface Reflectance and Aerosol Type Selection Within the MODIS Aerosol Retrieval Over Land: The Problem of Dust Type Selection

    NASA Technical Reports Server (NTRS)

    Mielonen, T.; Levy, R. C.; Aaltonen, V.; Komppula, M.; de Leeuw, G.; Huttunen, J.; Lihavainen, H.; Kolmonen, P.; Lehtinen, K. E. J.; Arola, A.

    2011-01-01

    Aerosol Optical Depth (AOD) and Angstrom exponent (AE) values derived with the MODIS retrieval algorithm over land (Collection 5) are compared with ground-based sun photometer measurements at eleven sites spanning the globe. Although, in general, total AOD compares well at these sites (R2 values generally over 0.8), there are cases (from 2 to 67% of the measurements depending on the site) where MODIS clearly retrieves the wrong spectral dependence, and hence, an unrealistic AE value. Some of these poor AE retrievals are due to the aerosol signal being too small (total AOD<0.3), but in other cases the AOD should have been high enough to derive accurate AE. However, in these cases, MODIS indicates AE values close to 0.6 and zero fine model weighting (FMW), i.e. the dust model provides the best fit to the MODIS-observed reflectance. Yet, according to evidence from the collocated sun photometer measurements and back-trajectory analyses, there should be no dust present. This indicates that the assumptions about aerosol model and surface properties made by the MODIS algorithm may have been incorrect. Here we focus on problems related to the parameterization of the land-surface optical properties in the algorithm, in particular the relationship between the surface reflectance at 660 and 2130 nm.
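The Angstrom exponent compared above follows from AOD at two wavelengths, AE = -ln(τ1/τ2)/ln(λ1/λ2). A minimal sketch with hypothetical AOD values, illustrating why nearly flat spectral AOD (dust-like, coarse particles) gives AE near zero while fine-mode aerosol gives AE near 2:

```python
import math

def angstrom_exponent(aod1, aod2, lam1, lam2):
    """Angstrom exponent from AOD at two wavelengths (any consistent length unit)."""
    return -math.log(aod1 / aod2) / math.log(lam1 / lam2)

# Fine-mode (small-particle) aerosol: AOD falls steeply with wavelength
ae_fine = angstrom_exponent(0.40, 0.20, 470e-9, 660e-9)

# Coarse dust: nearly wavelength-independent AOD
ae_dust = angstrom_exponent(0.30, 0.29, 470e-9, 660e-9)
```

This is why a retrieval that wrongly selects the dust model reports AE values near zero even when the sun photometer sees a steep (fine-mode) spectral dependence.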

  5. New asteroseismic scaling relations based on the Hayashi track relation applied to red giant branch stars in NGC 6791 and NGC 6819

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, T.; Li, Y.; Hekker, S., E-mail: wutao@ynao.ac.cn, E-mail: ly@ynao.ac.cn, E-mail: hekker@mps.mpg.de

    2014-01-20

    Stellar mass M, radius R, and gravity g are important basic parameters in stellar physics. Accurate values for these parameters can be obtained from the gravitational interaction between stars in multiple systems or from asteroseismology. Stars in a cluster are thought to be formed coevally from the same interstellar cloud of gas and dust. The cluster members are therefore expected to have some properties in common. These common properties strengthen our ability to constrain stellar models and asteroseismically derived M, R, and g when tested against an ensemble of cluster stars. Here we derive new scaling relations based on a relation for stars on the Hayashi track (√(T{sub eff})∼g{sup p}R{sup q}) to determine the masses and metallicities of red giant branch stars in open clusters NGC 6791 and NGC 6819 from the global oscillation parameters Δν (the large frequency separation) and ν{sub max} (frequency of maximum oscillation power). The Δν and ν{sub max} values are derived from Kepler observations. From the analysis of these new relations we derive: (1) direct observational evidence that the masses of red giant branch stars in a cluster are the same within their uncertainties, (2) new methods to derive M and z of the cluster in a self-consistent way from Δν and ν{sub max}, with lower intrinsic uncertainties, and (3) the mass dependence in the Δν - ν{sub max} relation for red giant branch stars.
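For context, the classical solar-scaled relations that the paper's new Hayashi-track-based relations refine can be sketched as below. This is the standard textbook form, not the paper's new relations, and the red giant inputs are hypothetical round values.

```python
# Solar reference values (approximate, commonly used in the literature)
NU_MAX_SUN = 3090.0   # frequency of maximum power, muHz
DNU_SUN = 135.1       # large frequency separation, muHz
TEFF_SUN = 5777.0     # effective temperature, K

def scaling_mass_radius(nu_max, dnu, teff):
    """Classical asteroseismic scaling relations:
    M/Msun = (nu_max/nu_max_sun)^3 (dnu/dnu_sun)^-4 (Teff/Teff_sun)^1.5
    R/Rsun = (nu_max/nu_max_sun)   (dnu/dnu_sun)^-2 (Teff/Teff_sun)^0.5"""
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    return m, r

# Hypothetical red giant branch star observed by Kepler
m, r = scaling_mass_radius(nu_max=30.0, dnu=4.0, teff=4800.0)
```

The relations reproduce the Sun by construction; for the hypothetical giant they give roughly a solar-mass star of about ten solar radii, typical of cluster red giant branch stars.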

  6. Derivation and experimental verification of clock synchronization theory

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    1994-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst case conditions.

  7. Probing-models for interdigitated electrode systems with ferroelectric thin films

    NASA Astrophysics Data System (ADS)

    Nguyen, Cuong H.; Nigon, Robin; Raeder, Trygve M.; Hanke, Ulrik; Halvorsen, Einar; Muralt, Paul

    2018-05-01

    In this paper, a new method to characterize ferroelectric thin films with interdigitated electrodes is presented. To obtain accurate properties, all parasitic contributions should be subtracted from the measurement results and accurate models for the ferroelectric film are required. Hence, we introduce a phenomenological model for the parasitic capacitance. Moreover, two common analytical models based on conformal transformations are compared and used to calculate the capacitance and the electric field. With a thin film approximation, new simplified electric field and capacitance formulas are derived. By using these formulas, more consistent CV, PV and stress-field loops for samples with different geometries are obtained. In addition, an inhomogeneous distribution of the permittivity due to the non-uniform electric field is modelled by finite element simulation in an iterative way. We observed that this inhomogeneous distribution can be treated as a homogeneous one with an effective value of the permittivity.

  8. Accurate oscillator strengths for ultraviolet lines of Ar I - Implications for interstellar material

    NASA Technical Reports Server (NTRS)

    Federman, S. R.; Beideck, D. J.; Schectman, R. M.; York, D. G.

    1992-01-01

    Analysis of absorption from interstellar Ar I in lightly reddened lines of sight provides information on the warm and hot components of the interstellar medium near the sun. The details of the analysis are limited by the quality of the atomic data. Accurate oscillator strengths for the Ar I lines at 1048 and 1067 A and the astrophysical implications are presented. From lifetimes measured with beam-foil spectroscopy, an f-value for 1048 A of 0.257 +/- 0.013 is obtained. Through the use of a semiempirical formalism for treating singlet-triplet mixing, an oscillator strength of 0.064 +/- 0.003 is derived for 1067 A. Because of the accuracy of the results, the conclusions of York and colleagues from spectra taken with the Copernicus satellite are strengthened. In particular, for interstellar gas in the solar neighborhood, argon has a solar abundance, and the warm, neutral material is not pervasive.

  9. Estimating stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping

    2016-09-01

    Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177 860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. These derived stellar effective temperatures were accurate when compared to known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detecting angular parameters, and provides theoretical and practical data for finding information about radiation in any band.
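A minimal particle swarm optimizer can be sketched as below. This is a generic PSO minimizing a toy two-parameter misfit with a known optimum, not the stochastic SPSO variant or the stellar radiation model used in the paper; all parameter values and the misfit function are assumptions for illustration.

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy misfit with a temperature-like and an angular-size-like parameter
true_t, true_theta = 5500.0, 2.0e-9
def misfit(p):
    t, th = p
    return (t - true_t) ** 2 / 1e6 + (th - true_theta) ** 2 * 1e18

best, best_val = pso_minimize(misfit, [(3000.0, 10000.0), (1e-10, 1e-8)])
```

In the paper the misfit would instead compare modeled band fluxes against catalog fluxes, with the swarm searching over effective temperature and detected angular parameter.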

  10. The Stroke Assessment of Fall Risk (SAFR): predictive validity in inpatient stroke rehabilitation.

    PubMed

    Breisinger, Terry P; Skidmore, Elizabeth R; Niyonkuru, Christian; Terhorst, Lauren; Campbell, Grace B

    2014-12-01

    To evaluate relative accuracy of a newly developed Stroke Assessment of Fall Risk (SAFR) for classifying fallers and non-fallers, compared with a health system fall risk screening tool, the Fall Harm Risk Screen. Prospective quality improvement study conducted at an inpatient stroke rehabilitation unit at a large urban university hospital. Patients admitted for inpatient stroke rehabilitation (N = 419) with imaging or clinical evidence of ischemic or hemorrhagic stroke, between 1 August 2009 and 31 July 2010. Not applicable. Sensitivity, specificity, and area under the curve for Receiver Operating Characteristic Curves of both scales' classifications, based on fall risk score completed upon admission to inpatient stroke rehabilitation. A total of 68 (16%) participants fell at least once. The SAFR was significantly more accurate than the Fall Harm Risk Screen (p < 0.001), with area under the curve of 0.73, positive predictive value of 0.29, and negative predictive value of 0.94. For the Fall Harm Risk Screen, area under the curve was 0.56, positive predictive value was 0.19, and negative predictive value was 0.86. Sensitivity and specificity of the SAFR (0.78 and 0.63, respectively) was higher than the Fall Harm Risk Screen (0.57 and 0.48, respectively). An evidence-derived, population-specific fall risk assessment may more accurately predict fallers than a general fall risk screen for stroke rehabilitation patients. While the SAFR improves upon the accuracy of a general assessment tool, additional refinement may be warranted. © The Author(s) 2014.
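The reported SAFR operating characteristics are internally consistent: applying Bayes' rule to the stated sensitivity (0.78), specificity (0.63), and fall prevalence (68 of 419 patients) recovers the reported predictive values of 0.29 and 0.94.

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV via Bayes' rule from sensitivity, specificity, and prevalence."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Values reported in the abstract for the SAFR
ppv, npv = predictive_values(sens=0.78, spec=0.63, prevalence=68 / 419)
```

This also makes clear why the NPV is high despite modest specificity: only 16% of patients fell, so a negative screen is right most of the time by prevalence alone.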

  11. DETERMINING MOTOR INERTIA OF A STRESS-CONTROLLED RHEOMETER.

    PubMed

    Klemuk, Sarah A; Titze, Ingo R

    2009-01-01

    Viscoelastic measurements made with a stress-controlled rheometer are affected by system inertia. Of all contributors to system inertia, motor inertia is the largest. Its value is usually determined empirically and precision is rarely if ever specified. Inertia uncertainty has negligible effects on rheologic measurements below the coupled motor/plate/sample resonant frequency. But above the resonant frequency, G' values of soft viscoelastic materials such as dispersions, gels, biomaterials, and non-Newtonian polymers, err quadratically due to inertia uncertainty. In the present investigation, valid rheologic measurements were achieved near and above the coupled resonant frequency for a non-Newtonian reference material. At these elevated frequencies, accuracy in motor inertia is critical. Here we compare two methods for determining motor-inertia accurately. For the first (commercially-used) phase method, frequency responses of standard fluids were measured. Phase between G' and G" was analyzed at 5-70 Hz for motor inertia values of 50-150% of the manufacturer's nominal value. For a newly-devised two-plate method (10 mm and 60 mm parallel plates), dynamic measurements of a non-Newtonian standard were collected. Using a linear equation of motion with inertia, viscosity, and elasticity coefficients, G' expressions for both plates were equated and motor inertia was determined to be accurate (by comparison to the phase method) with a precision of ± 3%. The newly developed two-plate method had advantages of expressly eliminating dependence on gap, was explicitly derived from basic principles, quantified the error, and required fewer experiments than the commercially used phase method.

  12. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2003-01-01

    An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  13. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  14. Local magnitude scale for Valle Medio del Magdalena region, Colombia

    NASA Astrophysics Data System (ADS)

    Londoño, John Makario; Romero, Jaime A.

    2017-12-01

    A local magnitude (ML) scale for the Valle Medio del Magdalena (VMM) region was defined using 514 high-quality earthquakes located in the VMM area and the inversion of 2797 amplitude values from the horizontal components of 17 broadband seismic stations, simulated on a Wood-Anderson seismograph. The derived local magnitude scale for the VMM region is: ML = log(A) + 1.3744 ∗ log(r) + 0.0014776 ∗ r - 2.397 + S, where A is the zero-to-peak amplitude in nm on the horizontal components, r is the hypocentral distance in km, and S is the station correction. Higher ML values were obtained for the VMM region compared with those obtained with the formula currently used for ML determination and with the California formula. With this new scale, ML values are adjusted to local conditions beneath the VMM region, leading to more realistic ML values. Moreover, with this new ML scale the seismicity caused by tectonic or fracking activity in the VMM region can be monitored more accurately.
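The derived scale can be applied directly. A sketch with hypothetical amplitude and distance values, with the station correction set to zero:

```python
import math

def local_magnitude(amp_nm, r_km, station_corr=0.0):
    """ML for the Valle Medio del Magdalena region, from the scale in the abstract:
    ML = log10(A) + 1.3744*log10(r) + 0.0014776*r - 2.397 + S,
    with A the zero-to-peak horizontal amplitude in nm (Wood-Anderson simulated)
    and r the hypocentral distance in km."""
    return (math.log10(amp_nm) + 1.3744 * math.log10(r_km)
            + 0.0014776 * r_km - 2.397 + station_corr)

# Hypothetical event: 1000 nm amplitude recorded 100 km from the hypocenter
ml = local_magnitude(amp_nm=1000.0, r_km=100.0)
```

The log(r) term accounts for geometrical spreading and the linear-in-r term for anelastic attenuation, which is where the regional calibration differs from the California formula.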

  15. Antimicrobial gageomacrolactins characterized from the fermentation of the marine-derived bacterium Bacillus subtilis under optimum growth conditions.

    PubMed

    Tareq, Fakir Shahidullah; Kim, Ji Hye; Lee, Min Ah; Lee, Hyi-Seung; Lee, Jong-Seok; Lee, Yeon-Ju; Shin, Hee Jae

    2013-04-10

    Marine bacteria are a potential source of structurally diversified bioactive secondary metabolites that are not found in terrestrial sources. In our continuous effort to search for new antimicrobial agents from marine-derived bacteria, we isolated bacterial strain 109GGC020 from a marine sediment sample collected from Gageocho, Republic of Korea. The strain was identified as Bacillus subtilis based on a 16S rRNA sequence analysis. After a 7-day fermentation of the B. subtilis strain under optimum growth conditions, three new and four known secondary metabolites were discovered using chromatographic procedures, and their biological activities were evaluated against both bacteria and crop-devastating fungi. The discovered metabolites were confirmed by extensive 2D NMR and high-resolution ESI-MS data analyses to have the structures of new macrolactin derivatives gageomacrolactins 1-3 and known macrolactins A (4), B (5), F (6), and W (7). The stereoconfigurations of 1-3 were assigned based on coupling constant values, chemical derivatization studies, and a literature review. The coupling constants were crucial for determining the relative geometries of the olefins in 1-3 because of overlap of the ¹H NMR signals. The NMR data of these compounds were recorded in different solvents to overcome this problem and obtain accurate coupling constant values. The new macrolactin derivatives 1-3 displayed good antibiotic properties against both Gram-positive (S. aureus, B. subtilis, and B. cereus) and Gram-negative (E. coli, S. typhi, and P. aeruginosa) bacteria with minimum inhibitory concentration (MIC) values of 0.02-0.05 μM. Additionally, the antifungal activities of 1-7 were evaluated against pathogenic fungi and found to inhibit mycelial growth of A. niger, B. cinerea, C. acutatum, C. albicans, and R. solani with MIC values of 0.04-0.3 μM, demonstrating that these compounds were good fungicides.

  16. Improving Hurricane Heat Content Estimates From Satellite Altimeter Data

    NASA Astrophysics Data System (ADS)

    de Matthaeis, P.; Jacob, S.; Roubert, L. M.; Shay, N.; Black, P.

    2007-12-01

    Hurricanes are amongst the most destructive natural disasters known to mankind. The primary energy source driving these storms is the latent heat release due to the condensation of water vapor, which ultimately comes from the ocean. While the Sea Surface Temperature (SST) has a direct correlation with wind speeds, the oceanic heat content is dependent on the upper ocean vertical structure. Understanding the impact of these factors on the mutual hurricane-ocean interaction is critical to forecasting intensity change in land-falling hurricanes more accurately. Use of hurricane heat content derived from satellite radar altimeter measurements of sea surface height has been shown to improve intensity prediction. The general approach of estimating ocean heat content uses a two-layer model representing the ocean, with its anomalies derived from altimeter data. Although these estimates compare reasonably well with in-situ measurements, they are generally about 10% under-biased. Additionally, recent studies show that the comparisons are less than satisfactory in the Western North Pacific. Therefore, our objective is to develop a methodology to represent the upper ocean structure more accurately using in-situ data. As part of NOAA/USWRP-sponsored research, upper ocean observations were acquired in the Gulf of Mexico during the summers of 1999 and 2000. Overall, 260 expendable profilers (XCTD, XBT and XCP) acquired vertical temperature structure in the high heat content regions corresponding to the Loop Current and Warm Core Eddies. Using the temperature and salinity data from the XCTDs, the Temperature-Salinity relationships in the Loop Current Water and Gulf Common Water are first derived based on the depth of the 26° C isotherm. These derived T-S relationships compare well with those inferred from climatology. By means of these relationships, estimated salinity values corresponding to the XBT and XCP temperature measurements are calculated and used to derive continuous profiles of density. Ocean heat content is then estimated from these profiles and compared to that derived from altimeter data, showing, as mentioned earlier, a consistent bias. Using a procedure that conserves density in the vertical, these density profiles are discretized into five isopycnic layers representative of the upper ocean in the Gulf of Mexico. Statistical correlations are then derived between the altimetric sea surface height anomalies and the thickness of these layers in the region. Using these correlations, a higher resolution upper ocean structure is derived from the altimeter data. Withholding observations from one snapshot of data in the correlations, and comparing the estimated ocean heat content with in-situ values, will allow us to quantify errors in this approach. This methodology will then be extended to the Western Pacific using Argo data, and results will be presented.
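The heat content estimated from such profiles is conventionally the heat stored above the 26 °C isotherm. A sketch with a hypothetical warm-eddy profile; the density and specific-heat constants are round approximate values for seawater, not figures from the study.

```python
RHO = 1025.0   # seawater density, kg/m^3 (approximate)
CP = 4000.0    # seawater specific heat, J/(kg K) (round approximation)

def ocean_heat_content(depths_m, temps_c, t_ref=26.0):
    """Tropical-cyclone heat potential: integrate rho*cp*(T - 26 C) over the
    layer warmer than 26 C, using the trapezoidal rule. Returns J/m^2."""
    ohc = 0.0
    for (z1, t1), (z2, t2) in zip(zip(depths_m, temps_c),
                                  zip(depths_m[1:], temps_c[1:])):
        e1, e2 = max(t1 - t_ref, 0.0), max(t2 - t_ref, 0.0)
        ohc += 0.5 * (e1 + e2) * (z2 - z1)
    return RHO * CP * ohc

# Hypothetical warm-eddy profile: 29 C mixed layer cooling to 26 C at 100 m
depths = [0, 25, 50, 75, 100, 150]
temps = [29.0, 29.0, 28.0, 27.0, 26.0, 24.0]
ohc_j_m2 = ocean_heat_content(depths, temps)
ohc_kj_cm2 = ohc_j_m2 / 1e7   # commonly quoted unit: kJ/cm^2
```

A deep 26 °C isotherm, as in the Loop Current and warm core eddies, inflates this integral even when SST is unchanged, which is why the vertical structure and not SST alone controls the heat available to a storm.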

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Carryn M., E-mail: carryn-anderson@uiowa.edu; Chang, Tangel; Graham, Michael M.

    Purpose: To evaluate dynamic [{sup 18}F]-fluorodeoxyglucose (FDG) uptake methodology as a post–radiation therapy (RT) response assessment tool, potentially enabling accurate tumor and therapy-related inflammation differentiation, improving the posttherapy value of FDG–positron emission tomography/computed tomography (FDG-PET/CT). Methods and Materials: We prospectively enrolled head-and-neck squamous cell carcinoma patients who completed RT, with scheduled 3-month post-RT FDG-PET/CT. Patients underwent our standard whole-body PET/CT scan at 90 minutes, with the addition of head-and-neck PET/CT scans at 60 and 120 minutes. Maximum standardized uptake values (SUV{sub max}) of regions of interest were measured at 60, 90, and 120 minutes. The SUV{sub max} slope between 60 and 120 minutes and change of SUV{sub max} slope before and after 90 minutes were calculated. Data were analyzed by primary site and nodal site disease status using the Cox regression model and Wilcoxon rank sum test. Outcomes were based on pathologic and clinical follow-up. Results: A total of 84 patients were enrolled, with 79 primary and 43 nodal evaluable sites. Twenty-eight sites were interpreted as positive or equivocal (18 primary, 8 nodal, 2 distant) on 3-month 90-minute FDG-PET/CT. Median follow-up was 13.3 months. All measured SUV endpoints predicted recurrence. Change of SUV{sub max} slope after 90 minutes more accurately identified nonrecurrence in positive or equivocal sites than our current standard of SUV{sub max} ≥2.5 (P=.02). Conclusions: The positive predictive value of post-RT FDG-PET/CT may significantly improve using novel second derivative analysis of dynamic triphasic FDG-PET/CT SUV{sub max} slope, accurately distinguishing tumor from inflammation on positive and equivocal scans.
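The dynamic endpoints described (the 60-120 minute SUVmax slope and the change of slope across the 90-minute scan) reduce to simple finite differences. The SUV numbers below are hypothetical illustrations of the accumulate-versus-plateau idea, not the study's diagnostic criteria or data.

```python
def suv_slope_change(suv60, suv90, suv120):
    """Dynamic triphasic FDG-PET summaries, in SUV units per hour:
    the overall 60->120 min slope, and the change of slope ("second
    derivative") across the 90-min time point."""
    early = (suv90 - suv60) / 0.5    # slope over 60->90 min (0.5 h)
    late = (suv120 - suv90) / 0.5    # slope over 90->120 min (0.5 h)
    overall = (suv120 - suv60) / 1.0
    return overall, late - early

# Hypothetical lesions: tumor keeps accumulating FDG over the three scans,
# while post-RT inflammation tends to plateau or wash out
tumor = suv_slope_change(4.0, 5.0, 6.2)
inflammation = suv_slope_change(4.0, 4.2, 4.1)
```

A static 90-minute SUVmax of ~4-5 would flag both lesions, which is the motivation for looking at the slope and its change instead of a single threshold.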

  18. Preliminary results of an attempt to predict over apron occupational exposure of cardiologists from cardiac fluoroscopy procedures based on DAP (dose area product) values.

    PubMed

    Toossi, Mohammad Taghi Bahreyni; Mehrpouyan, Mohammad; Nademi, Hossein; Fardid, Reza

    2015-03-01

    This study is an effort to propose a mathematical relation between the occupational exposure measured by a dosimeter worn on a lead apron in the chest region of a cardiologist and the dose area product (DAP) recorded by a meter attached to the X-ray tube. We aimed to determine factors by which DAP values attributed to patient exposure could be converted to the over-apron entrance surface air kerma incurred by cardiologists during an angiographic procedure. A Rando phantom representing a patient was exposed by an X-ray tube from 77 pre-defined directions. The DAP value for each exposure angle was recorded. Cardiologist exposure was measured by a Radcal 10X5-180 ionization chamber positioned on a second phantom representing the physician. The exposure conversion factor was determined as the quotient of over-apron exposure by DAP value. To verify the validity of this method, the over-apron exposure of a cardiologist was measured using the ionization chamber during coronary angiography procedures on 45 patients weighing on average 75 ± 5 kg. DAP values for the corresponding procedures were also obtained. Conversion factors obtained from phantom exposure were applied to the patient DAP values to calculate physician exposure. Mathematical analysis of our results leads us to conclude that a linear relationship exists between two sets of data: (a) cardiologist exposure measured directly by the Radcal chamber and the DAP values recorded by the X-ray system (R² = 0.88); (b) cardiologist exposure measured directly and exposure estimated from DAP values (R² = 0.91). The results demonstrate that cardiologist occupational exposure can be derived accurately from patient data.
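
    The conversion-factor arithmetic described above can be sketched as follows. All numbers here are invented for illustration (they are not values from the study), and units are nominal.

```python
# Hypothetical phantom calibration: over-apron exposure for a known DAP.
phantom_exposure = 12.0   # assumed over-apron exposure reading (arbitrary units)
phantom_dap = 40.0        # assumed DAP recorded for the same exposure (Gy.cm^2)
conversion_factor = phantom_exposure / phantom_dap  # exposure per unit DAP

# Apply the factor to patient DAP values to estimate physician exposure.
patient_dap_values = [25.0, 60.0, 42.5]
estimated_exposures = [conversion_factor * dap for dap in patient_dap_values]
print(estimated_exposures)  # ≈ [7.5, 18.0, 12.75]
```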

  19. The excitation of OH by H2 revisited - I: fine-structure resolved rate coefficients

    NASA Astrophysics Data System (ADS)

    Kłos, J.; Ma, Q.; Dagdigian, P. J.; Alexander, M. H.; Faure, A.; Lique, F.

    2017-11-01

    Observations of OH in molecular clouds provide crucial constraints on both the physical conditions and the oxygen and water chemistry in these clouds. Accurate modelling of the OH emission spectra requires the calculation of rate coefficients for excitation of OH by collisions with the most abundant collisional partner in the molecular clouds, namely the H2 molecule. We report here theoretical calculations for the fine-structure excitation of OH by H2 (both para- and ortho-H2) using a recently developed highly accurate potential energy surface. Full quantum close coupling rate coefficients are provided for temperatures ranging from 10 to 150 K. Propensity rules are discussed and the new OH-H2 rate coefficients are compared to the earlier values that are currently used in astrophysical modelling. Significant differences were found: the new rate coefficients are significantly larger. As a first application, we simulate the excitation of OH in typical cold molecular clouds and star-forming regions. The new rate coefficients predict substantially larger line intensities. As a consequence, OH abundances derived from observations will be reduced from the values predicted by the earlier rate coefficients.

  20. Precise Absolute Astrometry from the VLBA Imaging and Polarimetry Survey at 5 GHz

    NASA Technical Reports Server (NTRS)

    Petrov, L.; Taylor, G. B.

    2011-01-01

    We present accurate positions for 857 sources derived from the astrometric analysis of 16 eleven-hour experiments from the Very Long Baseline Array imaging and polarimetry survey at 5 GHz (VIPS). Among the observed sources, positions of 430 objects were not previously determined at milliarcsecond-level accuracy. For 95% of the sources the uncertainty of their positions ranges from 0.3 to 0.9 mas, with a median value of 0.5 mas. This estimate of accuracy is substantiated by the comparison of positions of 386 sources that were previously observed in astrometric programs simultaneously at 2.3/8.6 GHz. Surprisingly, the ionosphere contribution to group delay was adequately modeled with the use of the total electron content maps derived from GPS observations and only marginally affected estimates of source coordinates.

  1. Ab Initio energetics of SiO bond cleavage.

    PubMed

    Hühn, Carolin; Erlebach, Andreas; Mey, Dorothea; Wondraczek, Lothar; Sierka, Marek

    2017-10-15

    A multilevel approach that combines high-level ab initio quantum chemical methods applied to a molecular model of a single, strain-free SiOSi bridge has been used to derive accurate energetics for SiO bond cleavage. The calculated SiO bond dissociation energy and the activation energy for water-assisted SiO bond cleavage, 624 and 163 kJ mol⁻¹, respectively, are in excellent agreement with values derived recently from experimental data. In addition, the activation energy for H₂O-assisted SiO bond cleavage is found to be virtually independent of the number of water molecules in the vicinity of the reaction site. The estimated reaction energy for this process, including the zero-point vibrational contribution, is in the range of −5 to 19 kJ mol⁻¹. © 2017 Wiley Periodicals, Inc.

  2. Delineating high-density areas in spatial Poisson fields from strip-transect sampling using indicator geostatistics: application to unexploded ordnance removal.

    PubMed

    Saito, Hirotaka; McKenna, Sean A

    2007-07-01

    An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.
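
    A minimal sketch of the thresholding idea described above (not the authors' code): the exceedance threshold is taken from the cdf of an assumed background homogeneous Poisson field, and transect counts above it flag candidate high-density segments. The background rate and quantile below are invented.

```python
from scipy.stats import poisson

lam = 2.0        # assumed background anomaly rate per transect segment
quantile = 0.95  # assumed cdf level separating background from "high density"

# Smallest count k whose background Poisson cdf reaches the quantile.
threshold = poisson.ppf(quantile, lam)

counts = [1, 3, 6, 0, 8]  # synthetic anomaly counts along transect segments
high_density = [bool(c > threshold) for c in counts]
print(threshold, high_density)  # 5.0 [False, False, True, False, True]
```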

  3. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the size of the derivative relative to the quantity itself.
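
    As a one-dimensional analogue of the global Chebyshev pressure-field fit described above, the sketch below fits a smooth synthetic profile using NumPy's Chebyshev routines. The profile function and polynomial degree are invented for illustration.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

x = np.linspace(-1.0, 1.0, 41)         # chordwise sample stations (synthetic)
pressure = 1.0 - x**2 + 0.3 * x**3     # smooth synthetic pressure profile

coeffs = cheb.chebfit(x, pressure, deg=6)  # global Chebyshev least-squares fit
fitted = cheb.chebval(x, coeffs)

max_err = np.max(np.abs(fitted - pressure))
print(max_err)  # a cubic is captured essentially exactly by a degree-6 fit
```

    For a smooth field the global fit is near-exact, mirroring the abstract's observation that trouble arises only with sharp local variations, which motivated the panel-based local interpolation schemes.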

  4. The refined physical properties of transiting exoplanetary system WASP-11/HAT-P-10

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiao-bin; Gu, Sheng-hong; Wang, Yi-bo

    2014-04-01

    The transiting exoplanetary system WASP-11/HAT-P-10 was observed using the CCD camera at Yunnan Observatories, China from 2008 to 2011, and four new transit light curves were obtained. Combined with published radial velocity measurements, the new transit light curves are analyzed along with available photometric data from the literature using the Markov Chain Monte Carlo technique, and the refined physical parameters of the system are derived, which are compatible with the results of the two discovery groups. The planet mass is Mp = 0.526 ± 0.019 MJ, which is the same as West et al.'s value, and, more accurately, the planet radius Rp = 0.999 (−0.018, +0.029) RJ is identical to the value of Bakos et al. The new result confirms that the planet orbit is circular. By collecting 19 available mid-transit epochs with higher precision, we make an orbital period analysis for WASP-11b/HAT-P-10b, and derive a new value for its orbital period, P = 3.72247669 days. Through an (O – C) study based on these mid-transit epochs, no obvious transit timing variation signal can be found for this system during 2008-2012.

  5. Measurement of polyurethane foam - air partition coefficients for semivolatile organic compounds as a function of temperature: Application to passive air sampler monitoring.

    PubMed

    Francisco, Ana Paula; Harner, Tom; Eng, Anita

    2017-05-01

    Polyurethane foam–air partition coefficients (K_PUF-air) for 9 polycyclic aromatic hydrocarbons (PAHs), 10 alkyl-substituted PAHs, 4 organochlorine pesticides (OCPs) and dibenzothiophene were measured as a function of temperature over the range 5 °C-35 °C, using a generator column approach. Enthalpies of PUF-to-air transfer (ΔH_PUF-air, kJ/mol) were determined from the slopes of log K_PUF-air versus 1000/T (K), and have an average value of 81.2 ± 7.03 kJ/mol. The log K_PUF-air values at 22 °C ranged from 4.99 to 7.25. A relationship for log K_PUF-air versus log K_OA was shown to agree with a previous relationship based only on polychlorinated biphenyls (PCBs) and derived from long-term indoor uptake experiments. The results also confirm that the existing K_OA-based model for predicting log K_PUF-air values is accurate. This new information is important in the derivation of uptake profiles and effective air sampling volumes for PUF disk samplers, so that results can be reported in units of concentration in air. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
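
    The enthalpy-from-slope relation used above (a van 't Hoff-type analysis of log10 K_PUF-air versus 1000/T) can be checked numerically. The slope below is back-calculated to match the reported average ΔH and is not a measured value; the sign convention depends on how the partitioning direction is defined.

```python
import math

R = 8.314      # gas constant, J/(mol K)
slope = 4.242  # assumed slope of log10(K_PUF-air) vs 1000/T, for illustration

# dH (kJ/mol) = slope * 1000 * ln(10) * R, converted from J/mol to kJ/mol
dH = slope * 1000.0 * math.log(10.0) * R / 1000.0
print(round(dH, 1))  # ≈ 81.2 kJ/mol, the average value reported above
```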

  6. EXPRESS: Accurate Measurement of the Optical Constants n and k for a Series of 57 Inorganic and Organic Liquids for Optical Modeling and Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, Tanya L.; Tonkyn, Russell G.; Danby, Tyler O.

    For optical modeling and other purposes, we have created a library of 57 liquids for which we have measured the complex optical constants n and k. These liquids vary widely in chemical structure, optical band strength, volatility, and viscosity. By obtaining the optical constants one can in principle model most optical phenomena in media and at interfaces, including reflection, refraction, and dispersion. Based on the original methods of J. E. Bertie et al. [1], we have developed improved protocols using multiple path lengths to determine the optical constants n and k for dozens of liquids, including inorganic, organic, and organophosphorus compounds. Detailed descriptions of the measurement and data reduction protocols are discussed; agreement of the derived n and k values with literature values is presented. We also present results applying the n/k values to an optical modeling scenario in which the derived data are tested for models of 1 µm and 100 µm layers of DMMP (dimethyl methyl phosphonate) on both metal (aluminum) and dielectric (soda lime glass) substrates, showing substantial differences between the reflected signal from highly reflective substrates and that from less-reflective substrates.

  7. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Simultaneous head tissue conductivity and EEG source location estimation

    PubMed Central

    Acar, Can E.; Makeig, Scott

    2015-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm2-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm2-scale accurate 3-D functional cortical imaging modality. PMID:26302675

  9. Discharge estimation for the Upper Brahmaputra River in the Tibetan Plateau using multi-source remote sensing data

    NASA Astrophysics Data System (ADS)

    Huang, Q.; Long, D.; Du, M.; Hong, Y.

    2017-12-01

    River discharge is among the most important hydrological variables, linking drinking water supply, irrigation, and flood forecasting. Despite its importance, gauging stations are extremely limited across most alpine regions, such as the Tibetan Plateau (TP), known as Asia's water tower. Use of remote sensing combined with partial in situ discharge measurements is a promising way of retrieving river discharge over ungauged or poorly gauged basins. Successful discharge estimation depends largely on accurate water width (area) and water level, but it is challenging to obtain these variables for alpine regions from a single satellite platform due to narrow river channels, complex terrain, and limited observations. Here, we used high-spatial-resolution images from the Landsat series to derive water area, and satellite altimetry (Jason-2) to derive water level, for the Upper Brahmaputra River (UBR) in the TP, where the river is narrow (less than 400 m in most reaches). We performed waveform retracking using a 50% Threshold and Ice-1 Combined (TIC) algorithm developed in this study to obtain accurate water level measurements. Discharge was estimated well using derived rating formulas, including a power function between water level and discharge and another between water area and discharge, suitable for the triangular cross-section around the Nuxia gauging station in the UBR. Results showed that the power function using Jason-2-derived water levels after waveform retracking performed best, with an overall NSE value of 0.92. The proposed approach for remotely sensed river discharge is effective in the UBR and possibly in other alpine rivers globally.
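
    A rating curve of the power-function form mentioned above, Q = a·h^b, can be fitted by ordinary least squares in log-log space. The sketch below uses synthetic, noise-free data (invented coefficients, not the study's rating curve), so the generating coefficients are recovered essentially exactly.

```python
import math

h = [1.0, 1.5, 2.0, 2.5, 3.0]         # water level (m), synthetic
Q = [50.0 * x**1.8 for x in h]        # discharge generated with a=50, b=1.8

# log Q = log a + b * log h, so a straight-line fit recovers a and b.
lx = [math.log(v) for v in h]
ly = [math.log(v) for v in Q]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx)**2 for x in lx)
a = math.exp(my - b * mx)
print(round(a, 6), round(b, 6))  # ≈ 50.0 1.8 on noise-free data
```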

  10. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    USGS Publications Warehouse

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment when compared with the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer’s accuracy of 93% and a user’s accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands with R-square values over 0.7 and field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.

  11. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment when compared with the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer's accuracy of 93% and a user's accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands with R-square values over 0.7 and field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.

  12. Nature of Driving Force for Protein Folding: A Result From Analyzing the Statistical Potential

    NASA Astrophysics Data System (ADS)

    Li, Hao; Tang, Chao; Wingreen, Ned S.

    1997-07-01

    In a statistical approach to protein structure analysis, Miyazawa and Jernigan derived a 20×20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the Miyazawa-Jernigan matrix can be accurately reconstructed from its first two principal component vectors as M_ij = C0 + C1(q_i + q_j) + C2 q_i q_j, with constant C's and 20 q values associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
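
    The separable form quoted above is easy to verify numerically: a matrix built as M_ij = C0 + C1(q_i + q_j) + C2·q_i·q_j is symmetric and, apart from the constant offset, has rank at most two. The constants and q values below are invented, not the Miyazawa-Jernigan fit parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.uniform(-1.0, 1.0, size=20)   # one illustrative q value per amino acid
C0, C1, C2 = -2.0, 1.5, 0.8           # hypothetical constants

# Build the full 20x20 matrix from the separable form.
M = C0 + C1 * (q[:, None] + q[None, :]) + C2 * np.outer(q, q)

assert M.shape == (20, 20)
assert np.allclose(M, M.T)                 # symmetric by construction
assert np.linalg.matrix_rank(M - C0) <= 2  # spanned by q and the ones vector
print("rank-2 reconstruction verified")
```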

  13. Geometry of an outcrop-scale duplex in Devonian flysch, Maine

    USGS Publications Warehouse

    Bradley, D.C.; Bradley, L.M.

    1994-01-01

    We describe an outcrop-scale duplex consisting of 211 exposed repetitions of a single bed. The duplex marks an early Acadian (Middle Devonian) oblique thrust zone in the Lower Devonian flysch of northern Maine. Detailed mapping at a scale of 1:8 has enabled us to accurately measure parameters such as horse length and thickness, ramp angles, and displacements; we compare these and derived values with those in published descriptions of duplexes and with theoretical models. Shortening estimates based on line balancing are consistently smaller than those from two methods of area balancing, suggesting that layer-parallel shortening preceded thrusting.

  14. Modeling and control parameters for GMAW, short-circuiting transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, G.E.; DeLapp, D.R.; Barnett, R.J.

    1996-12-31

    Digital signal processing was used to analyze the electrical arc signals of the gas metal arc welding process with short-circuiting transfer. Among the features extracted were arc voltage and current (both average and peak values), short-circuiting frequency, arc period, shorting period, and the ratio of the arcing to shorting period. Additionally, a Joule heating model was derived which accurately predicted the melt-back distance during each short. The short-circuiting frequency, the ratio of the arcing period to the shorting period, and the melt-back distance were found to be good indicators for monitoring and control of stable arc conditions.

  15. Numerical Solution of the Electron Transport Equation in the Upper Atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Mark Christopher; Holmes, Mark; Sailor, William C

    A new approach for solving the electron transport equation in the upper atmosphere is derived. The problem is a very stiff boundary value problem, and to obtain an accurate numerical solution, matrix factorizations are used to decouple the fast and slow modes. A stable finite difference method is applied to each mode. This solver is applied to a simplified problem for which an exact solution exists, using various versions of the boundary conditions that might arise in a natural auroral display. The numerical and exact solutions are found to agree with each other to at least two significant digits.

  16. Computational active site analysis of molecular pathways to improve functional classification of enzymes.

    PubMed

    Ozyurt, A Sinem; Selby, Thomas L

    2008-07-01

    This study describes a method to computationally assess the function of homologous enzymes through small molecule binding interaction energy. Three experimentally determined X-ray structures and four enzyme models from ornithine cyclo-deaminase, alanine dehydrogenase, and mu-crystallin were used in combination with nine small molecules to derive a function score (FS) for each enzyme-model combination. While energy values varied for a single molecule-enzyme combination due to differences in the active sites, we observe that the binding energies for the entire pathway were proportional for each set of small molecules investigated. This proportionality of energies for a reaction pathway appears to be dependent on the amino acids in the active site and their direct interactions with the small molecules, which allows a function score (FS) to be calculated to assess the specificity of each enzyme. Potential of mean force (PMF) calculations were used to obtain the energies, and the resulting FS values demonstrate that a measurement of function may be obtained using differences between these PMF values. Additionally, limitations of this method are discussed based on: (a) larger substrates with significant conformational flexibility; (b) low homology enzymes; and (c) open active sites. This method should be useful in accurately predicting specificity for single enzymes that have multiple steps in their reactions and in high throughput computational methods to accurately annotate uncharacterized proteins based on active site interaction analysis. 2008 Wiley-Liss, Inc.

  17. Dose equivalent rate constants and barrier transmission data for nuclear medicine facility dose calculations and shielding design.

    PubMed

    Kusano, Maggie; Caldwell, Curtis B

    2014-07-01

    A primary goal of nuclear medicine facility design is to keep public and worker radiation doses As Low As Reasonably Achievable (ALARA). To estimate dose and shielding requirements, one needs to know both the dose equivalent rate constants for soft tissue and barrier transmission factors (TFs) for all radionuclides of interest. Dose equivalent rate constants are most commonly calculated using published air kerma or exposure rate constants, while transmission factors are most commonly calculated using published tenth-value layers (TVLs). Values can be calculated more accurately using the radionuclide's photon emission spectrum and the physical properties of lead, concrete, and/or tissue at these energies. These calculations may be non-trivial due to the polyenergetic nature of the radionuclides used in nuclear medicine. In this paper, the effects of dose equivalent rate constant and transmission factor on nuclear medicine dose and shielding calculations are investigated, and new values based on up-to-date nuclear data and thresholds specific to nuclear medicine are proposed. To facilitate practical use, transmission curves were fitted to the three-parameter Archer equation. Finally, the results of this work were applied to the design of a sample nuclear medicine facility and compared to doses calculated using common methods to investigate the effects of these values on dose estimates and shielding decisions. Dose equivalent rate constants generally agreed well with those derived from the literature with the exception of those from NCRP 124. Depending on the situation, Archer fit TFs could be significantly more accurate than TVL-based TFs. These results were reflected in the sample shielding problem, with unshielded dose estimates agreeing well, with the exception of those based on NCRP 124, and Archer fit TFs providing a more accurate alternative to TVL TFs and a simpler alternative to full spectral-based calculations. The data provided by this paper should assist in improving the accuracy and tractability of dose and shielding calculations for nuclear medicine facility design.
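
    One common statement of the three-parameter Archer equation for broad-beam transmission through a barrier of thickness x is B(x) = [(1 + β/α)·exp(αγx) − β/α]^(−1/γ). The sketch below evaluates this form; the α, β, γ values are invented, not the fitted constants from the paper.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Three-parameter Archer broad-beam transmission at barrier thickness x."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

# Sanity checks on the functional form (hypothetical parameters):
assert archer_transmission(0.0, 1.5, 0.4, 0.9) == 1.0  # no barrier -> T = 1
assert archer_transmission(2.0, 1.5, 0.4, 0.9) < archer_transmission(1.0, 1.5, 0.4, 0.9)
print("Archer form behaves as expected")
```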

  18. Determination of a Degradation Constant for CYP3A4 by Direct Suppression of mRNA in a Novel Human Hepatocyte Model, HepatoPac.

    PubMed

    Ramsden, Diane; Zhou, Jin; Tweedie, Donald J

    2015-09-01

    Accurate determination of rates of de novo synthesis and degradation of cytochrome P450s (P450s) has been challenging. There is a high degree of variability in the multiple published values of turnover for specific P450s that is likely exacerbated by differences in methodologies. For CYP3A4, reported half-life values range from 10 to 140 hours. An accurate value for kdeg has been identified as a major limitation for prediction of drug interactions involving mechanism-based inhibition and/or induction. Estimation of P450 half-life from in vitro test systems, such as human hepatocytes, is complicated by differential decreased enzyme function over culture time, attenuation of the impact of enzyme loss through inclusion of glucocorticoids in media, and viability limitations over long-term culture times. HepatoPac overcomes some of these challenges by providing extended stability of enzymes (2.5 weeks in our hands). As such it is a unique tool for studying rates of enzyme degradation achieved through modulation of enzyme levels. CYP3A4 mRNA levels were rapidly depleted by >90% using either small interfering RNA or addition of interleukin-6, which allowed an estimation of the degradation rate constant for CYP3A protein over an incubation time of 96 hours. The degradation rate constant of 0.0240 ± 0.005 hour⁻¹ was reproducible in hepatocytes from five different human donors. These donors also reflected the overall population with respect to CYP3A5 genotype. This methodology can be applied to additional enzymes and may provide a more accurate in vitro derived kdeg value for predicting clinical drug-drug interaction outcomes. Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.
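
    As a quick consistency check on the rate constant reported above: a first-order kdeg of 0.0240 hour⁻¹ implies a CYP3A protein half-life of about 29 hours (t½ = ln 2 / kdeg), which sits within the 10-140 hour range of published values quoted in the abstract.

```python
import math

kdeg = 0.0240                 # degradation rate constant, 1/hour (from abstract)
t_half = math.log(2) / kdeg   # first-order half-life in hours
print(round(t_half, 1))       # ≈ 28.9 hours
```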

  19. Comprehensive Peptide Ion Structure Studies Using Ion Mobility Techniques: Part 1. An Advanced Protocol for Molecular Dynamics Simulations and Collision Cross-Section Calculation.

    PubMed

    Ghassabi Kondalaji, Samaneh; Khakinejad, Mahdiar; Tafreshian, Amirmahdi; J Valentine, Stephen

    2017-05-01

    Collision cross-section (CCS) measurements with a linear drift tube have been utilized to study the gas-phase conformers of a model peptide (acetyl-PAAAAKAAAAKAAAAKAAAAK). Extensive molecular dynamics (MD) simulations have been conducted to derive an advanced protocol for the generation of a comprehensive pool of in-silico structures; both higher energy and more thermodynamically stable structures are included to provide an unbiased sampling of conformational space. MD simulations at 300 K are applied to the in-silico structures to more accurately describe the gas-phase transport properties of the ion conformers including their dynamics. Different methods used previously for trajectory method (TM) CCS calculation employing the Mobcal software [1] are evaluated. A new method for accurate CCS calculation is proposed based on clustering and data mining techniques. CCS values are calculated for all in-silico structures, and those with matching CCS values are chosen as candidate structures. With this approach, more than 300 candidate structures with significant structural variation are produced; although no final gas-phase structure is proposed here, in a second installment of this work, gas-phase hydrogen deuterium exchange data will be utilized as a second criterion to select among these structures as well as to propose relative populations for these ion conformers. Here the need to increase conformer diversity and accurate CCS calculation is demonstrated and the advanced methods are discussed.

  20. Comprehensive Peptide Ion Structure Studies Using Ion Mobility Techniques: Part 1. An Advanced Protocol for Molecular Dynamics Simulations and Collision Cross-Section Calculation

    NASA Astrophysics Data System (ADS)

    Ghassabi Kondalaji, Samaneh; Khakinejad, Mahdiar; Tafreshian, Amirmahdi; J. Valentine, Stephen

    2017-05-01

    Collision cross-section (CCS) measurements with a linear drift tube have been utilized to study the gas-phase conformers of a model peptide (acetyl-PAAAAKAAAAKAAAAKAAAAK). Extensive molecular dynamics (MD) simulations have been conducted to derive an advanced protocol for the generation of a comprehensive pool of in-silico structures; both higher energy and more thermodynamically stable structures are included to provide an unbiased sampling of conformational space. MD simulations at 300 K are applied to the in-silico structures to more accurately describe the gas-phase transport properties of the ion conformers, including their dynamics. Different methods used previously for trajectory method (TM) CCS calculation employing the Mobcal software [1] are evaluated. A new method for accurate CCS calculation is proposed based on clustering and data mining techniques. CCS values are calculated for all in-silico structures, and those with matching CCS values are chosen as candidate structures. With this approach, more than 300 candidate structures with significant structural variation are produced; although no final gas-phase structure is proposed here, in a second installment of this work, gas-phase hydrogen/deuterium exchange data will be utilized as a second criterion to select among these structures as well as to propose relative populations for these ion conformers. Here, the need for increased conformer diversity and accurate CCS calculation is demonstrated, and the advanced methods are discussed.
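
The candidate-selection step described above reduces to keeping the in-silico conformers whose computed CCS agrees with the drift-tube measurement within an experimental tolerance. A schematic sketch (the conformer names and the 1% tolerance are illustrative assumptions, not values from the paper):

```python
def select_candidates(computed_ccs: dict[str, float],
                      measured_ccs: float,
                      rel_tol: float = 0.01) -> list[str]:
    """Keep conformers whose trajectory-method CCS is within rel_tol of experiment."""
    return [name for name, ccs in computed_ccs.items()
            if abs(ccs - measured_ccs) / measured_ccs <= rel_tol]

# hypothetical conformer pool with computed CCS values (A^2)
pool = {"helix_a": 245.1, "globule_b": 232.8, "partial_helix_c": 246.9}
candidates = select_candidates(pool, measured_ccs=246.0)  # ["helix_a", "partial_helix_c"]
```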

  1. Multivariable extrapolation of grand canonical free energy landscapes

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-12-01

    We derive an approach for extrapolating the free energy landscape of multicomponent systems in the grand canonical ensemble, obtained from flat-histogram Monte Carlo simulations, from one set of temperature and chemical potentials to another. This is accomplished by expanding the landscape in a Taylor series at each value of the order parameter which defines its macrostate phase space. The coefficients in each Taylor polynomial are known exactly from fluctuation formulas, which may be computed by measuring the appropriate moments of extensive variables that fluctuate in this ensemble. Here we derive the expressions necessary to define these coefficients up to arbitrary order. In principle, this enables a single flat-histogram simulation to provide complete thermodynamic information over a broad range of temperatures and chemical potentials. Using this, we also show how to combine a small number of simulations, each performed at different conditions, in a thermodynamically consistent fashion to accurately compute properties at arbitrary temperatures and chemical potentials. This method may significantly increase the computational efficiency of biased grand canonical Monte Carlo simulations, especially for multicomponent mixtures. Although approximate, this approach is amenable to high-throughput and data-intensive investigations where it is preferable to have a large quantity of reasonably accurate simulation data, rather than a smaller amount with a higher accuracy.
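
At each value of the order parameter, the extrapolation described above is an ordinary Taylor expansion of the log macrostate probability in the field variable (e.g. inverse temperature β), with coefficients supplied by measured fluctuation moments rather than by refitting. A generic per-macrostate evaluation sketch (the coefficient values below are placeholders, not simulation output):

```python
def taylor_extrapolate(f0: float, derivs: list[float], dbeta: float) -> float:
    """Evaluate f(beta0 + dbeta) from f(beta0) and its beta-derivatives.

    In the grand canonical setting, f is ln Pi(N), the log probability of
    macrostate N, and derivs[k-1] is the k-th derivative obtained from
    fluctuation formulas measured at beta0.
    """
    result, term = f0, 1.0
    for k, d in enumerate(derivs, start=1):
        term *= dbeta / k          # accumulates dbeta**k / k!
        result += d * term
    return result

# second-order extrapolation with placeholder derivative values
lnPi_new = taylor_extrapolate(-12.4, derivs=[3.1, -0.8], dbeta=0.05)
```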

  2. Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution

    NASA Astrophysics Data System (ADS)

    Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.

    2017-12-01

    The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimates and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate, and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations or offsets of the CIP (Celestial Pole Offsets, CPO) can only be obtained by the VLBI technique. An accuracy of the order of 0.1 milliseconds of arc (mas) allows comparison of the observed nutation with theoretical prediction models for a rigid Earth and constrains geophysical parameters describing the Earth's interior. In this study, we empirically evaluate the consistency, systematics and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and of the amplitudes of the principal nutation terms, attempting to empirically improve the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction of almost 15 microarcseconds (μas).

  3. Application of retention modelling to the simulation of separation of organic anions in suppressed ion chromatography.

    PubMed

    Zakaria, Philip; Dicinoski, Greg W; Ng, Boon Khing; Shellie, Robert A; Hanna-Brown, Melissa; Haddad, Paul R

    2009-09-18

    The ion-exchange separation of organic anions of varying molecular mass has been demonstrated using ion chromatography with isocratic, gradient and multi-step eluent profiles on commercially available columns with UV detection. A retention model derived previously for inorganic ions and based solely on electrostatic interactions between the analytes and the stationary phase was applied. This model was found to accurately describe the observed elution of all the anions under isocratic, gradient and multi-step eluent conditions. Hydrophobic interactions, although likely to be present to varying degrees, did not limit the applicability of the ion-exchange retention model. Various instrumental configurations were investigated to overcome problems associated with the use of organic modifiers in the eluent which caused compatibility issues with the electrolytically derived, and subsequently suppressed, eluent. The preferred configuration allowed the organic modifier stream to bypass the eluent generator, followed by subsequent mixing before entering the injection valve and column. Accurate elution prediction was achieved even when using 5-step eluent profiles with errors in retention time generally being less than 1% relative standard deviation (RSD) and all being less than 5% RSD. Peak widths for linear gradient separations were also modelled and showed good agreement with experimentally determined values.
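
For isocratic elution, a purely electrostatic ion-exchange model like the one referred to above is commonly written in the dominant-equilibrium form log10(k) = a − (x/y)·log10([E]), with x and y the analyte and eluent charges. A hedged sketch of retention-time prediction under that textbook form (the constant a and the charges below are illustrative, not the paper's fitted parameters):

```python
import math

def retention_time(t0_min: float, a: float, analyte_charge: int,
                   eluent_charge: int, eluent_conc_mM: float) -> float:
    """Isocratic ion-exchange model: log10(k) = a - (x/y)*log10([E]);
    retention time follows from t_R = t0 * (1 + k)."""
    log_k = a - (analyte_charge / eluent_charge) * math.log10(eluent_conc_mM)
    return t0_min * (1.0 + 10.0 ** log_k)

# hypothetical divalent analyte eluted by a monovalent eluent at 20 mM
t_r = retention_time(t0_min=2.0, a=3.0, analyte_charge=2,
                     eluent_charge=1, eluent_conc_mM=20.0)  # ~7.0 min
```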

  4. WR 20a Is an Eclipsing Binary: Accurate Determination of Parameters for an Extremely Massive Wolf-Rayet System

    NASA Astrophysics Data System (ADS)

    Bonanos, A. Z.; Stanek, K. Z.; Udalski, A.; Wyrzykowski, L.; Żebruń, K.; Kubiak, M.; Szymański, M. K.; Szewczyk, O.; Pietrzyński, G.; Soszyński, I.

    2004-08-01

    We present a high-precision I-band light curve for the Wolf-Rayet binary WR 20a, obtained as a subproject of the Optical Gravitational Lensing Experiment. Rauw et al. have recently presented spectroscopy for this system, strongly suggesting extremely large minimum masses of 70.7 ± 4.0 and 68.8 ± 3.8 M☉ for the component stars of the system, with the exact values depending strongly on the period of the system. We detect deep eclipses of about 0.4 mag in the light curve of WR 20a, confirming and refining the suspected period of P = 3.686 days and deriving an inclination angle of i = 74.5° ± 2.0°. Using these photometric data and the radial velocity data of Rauw et al., we derive the masses for the two components of WR 20a to be 83.0 ± 5.0 and 82.0 ± 5.0 M☉. Therefore, WR 20a is confirmed to consist of two extremely massive stars and to be the most massive binary known with an accurate mass determination. Based on observations obtained with the 1.3 m Warsaw telescope at Las Campanas Observatory, which is operated by the Carnegie Institution of Washington.
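
Spectroscopy alone yields only minimum masses M·sin³i; the eclipse-derived inclination converts these to true masses via M = M_min / sin³i. A hedged illustration of that conversion (the published masses also fold in the re-fitted orbital solution, so this simple scaling of the quoted minimum mass does not reproduce them exactly):

```python
import math

def true_mass(m_min_solar: float, inclination_deg: float) -> float:
    """Convert a spectroscopic minimum mass M*sin^3(i) to a true mass."""
    return m_min_solar / math.sin(math.radians(inclination_deg)) ** 3

m1 = true_mass(70.7, 74.5)  # ~79 solar masses from the raw minimum mass alone
```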

  5. Subject-specific bone attenuation correction for brain PET/MR: can ZTE-MRI substitute CT scan accurately?

    PubMed

    Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude

    2017-09-21

    In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.

  6. Subject-specific bone attenuation correction for brain PET/MR: can ZTE-MRI substitute CT scan accurately?

    NASA Astrophysics Data System (ADS)

    Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude

    2017-10-01

    In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
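
The continuous AC map construction reduces to a per-voxel rule: fixed values for air and soft tissue, and a linear ZTE-to-HU mapping inside the bone mask. A schematic sketch (the slope/intercept and tissue values are hypothetical placeholders, not the coefficients fitted from the 16-patient database):

```python
AIR, SOFT_TISSUE, BONE = 0, 1, 2  # labels from the ZTE segmentation

def pseudo_ct_voxel(zte_norm: float, label: int,
                    slope: float = -2000.0, intercept: float = 2000.0) -> float:
    """Continuous pseudo-CT value (HU) for one voxel: fixed values for air and
    soft tissue, linear mapping of the histogram-normalized ZTE intensity in bone."""
    if label == AIR:
        return -1000.0                        # HU of air
    if label == SOFT_TISSUE:
        return 0.0                            # HU of water-equivalent soft tissue
    return slope * zte_norm + intercept       # denser bone -> lower ZTE signal

hu_map = [pseudo_ct_voxel(z, lbl)
          for z, lbl in [(0.2, BONE), (0.9, SOFT_TISSUE), (0.5, AIR)]]  # ~[1600, 0, -1000]
```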

  7. Isotopic ecology of coyotes from scat and road kill carcasses: A complementary approach to feeding experiments

    PubMed Central

    Reid, Rachel E B; Koch, Paul L

    2017-01-01

    Scat is frequently used to study animal diets because it is easy to find and collect, but one concern is that gross fecal analysis (GFA) techniques exaggerate the importance of small-bodied prey to mammalian mesopredator diets. To capitalize on the benefits of scat, we suggest the analysis of scat carbon and nitrogen isotope values (δ13C and δ15N). This technique offers researchers a non-invasive method to gather short-term dietary information. We conducted three interrelated studies to validate the use of isotopic values from coyote scat: 1) we determined tissue-to-tissue apparent C and N isotope enrichment factors (ε13* and ε15*) for coyotes from road kill animals (n = 4); 2) we derived diet-to-scat isotope discrimination factors for coyotes; and 3) we used field collected coyote scats (n = 12) to compare estimates of coyote dietary proportions from stable isotope mixing models with estimates from two GFA techniques. Scat consistently had the lowest δ13C and δ15N values among the tissues sampled. We derived a diet-to-scat Δ13C value of -1.5‰ ± 1.6‰ and Δ15N value of 2.3‰ ± 1.3‰ for coyotes. Coyote scat δ13C and δ15N values adjusted for discrimination consistently plot within the isotopic mixing space created by known dietary items. In comparison with GFA results, we found that mixing model estimates of coyote dietary proportions de-emphasize the importance of small-bodied prey. Coyote scat δ13C and δ15N values therefore offer a relatively quick and non-invasive way to gain accurate dietary information. PMID:28369133

  8. Isotopic ecology of coyotes from scat and road kill carcasses: A complementary approach to feeding experiments.

    PubMed

    Reid, Rachel E B; Koch, Paul L

    2017-01-01

    Scat is frequently used to study animal diets because it is easy to find and collect, but one concern is that gross fecal analysis (GFA) techniques exaggerate the importance of small-bodied prey to mammalian mesopredator diets. To capitalize on the benefits of scat, we suggest the analysis of scat carbon and nitrogen isotope values (δ13C and δ15N). This technique offers researchers a non-invasive method to gather short-term dietary information. We conducted three interrelated studies to validate the use of isotopic values from coyote scat: 1) we determined tissue-to-tissue apparent C and N isotope enrichment factors (ε13* and ε15*) for coyotes from road kill animals (n = 4); 2) we derived diet-to-scat isotope discrimination factors for coyotes; and 3) we used field collected coyote scats (n = 12) to compare estimates of coyote dietary proportions from stable isotope mixing models with estimates from two GFA techniques. Scat consistently had the lowest δ13C and δ15N values among the tissues sampled. We derived a diet-to-scat Δ13C value of -1.5‰ ± 1.6‰ and Δ15N value of 2.3‰ ± 1.3‰ for coyotes. Coyote scat δ13C and δ15N values adjusted for discrimination consistently plot within the isotopic mixing space created by known dietary items. In comparison with GFA results, we found that mixing model estimates of coyote dietary proportions de-emphasize the importance of small-bodied prey. Coyote scat δ13C and δ15N values therefore offer a relatively quick and non-invasive way to gain accurate dietary information.
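
The core of the mixing-model comparison is simple: scat isotope values are shifted by the diet-to-scat discrimination factor and then unmixed against dietary endmembers. A minimal two-source, one-isotope sketch (the endmember values are hypothetical; published tools such as MixSIAR do this in a Bayesian framework using both isotopes):

```python
def diet_fraction_source_a(delta_scat: float, discrimination: float,
                           delta_a: float, delta_b: float) -> float:
    """Two-endmember linear mixing: fraction of source A in the diet,
    after correcting the scat d13C by the diet-to-scat discrimination."""
    delta_diet = delta_scat - discrimination   # e.g. d13C discrimination = -1.5 permil
    return (delta_diet - delta_b) / (delta_a - delta_b)

# hypothetical endmembers: C3-feeding deer (-21.0) vs. corn-fed rodents (-13.0)
f_deer = diet_fraction_source_a(delta_scat=-20.5, discrimination=-1.5,
                                delta_a=-21.0, delta_b=-13.0)  # 0.75
```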

  9. Accurate measurements and temperature dependence of the water vapor self-continuum absorption in the 2.1 μm atmospheric window

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ventrillard, I.; Romanini, D.; Mondelain, D.

    In spite of its importance for the evaluation of the Earth radiative budget, and thus for climate change, very few measurements of the water vapor continuum are available in the near infrared atmospheric windows, especially at temperature conditions relevant for our atmosphere. In addition, as a result of the difficulty of measuring weak broadband absorption signals, the few available measurements show large disagreements. We report here accurate measurements of the water vapor self-continuum absorption in the 2.1 μm window by Optical Feedback Cavity Enhanced Absorption Spectroscopy (OF-CEAS) for two spectral points located at the low-energy edge and at the center of the 2.1 μm transparency window, at 4302 and 4723 cm⁻¹, respectively. Self-continuum cross sections, C_S, were retrieved with a few % relative uncertainty from the quadratic dependence of the spectrum baseline level measured as a function of water vapor pressure, between 0 and 16 Torr. At 296 K, the C_S value at 4302 cm⁻¹ is found 40% higher than predicted by the MT-CKD V2.5 model, while at 4723 cm⁻¹, our value is 5 times larger than the MT-CKD value. On the other hand, these OF-CEAS C_S values are significantly smaller than recent measurements by Fourier transform spectroscopy at room temperature. The temperature dependence of the self-continuum cross sections was also investigated for temperatures between 296 K and 323 K (23-50 °C). The derived temperature variation is found to be similar to that derived from previous Fourier transform spectrometer (FTS) measurements performed at higher temperatures, between 350 K and 472 K. The whole set of measurements spanning the 296-472 K temperature range follows a simple exponential law in 1/T with a slope close to the dissociation energy of the water dimer, D₀ ≈ 1100 cm⁻¹.
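
An exponential-in-1/T law with a slope expressed in cm⁻¹ can be written C_S(T) = C_S(T_ref)·exp[c₂·D₀·(1/T − 1/T_ref)], where c₂ = hc/k_B ≈ 1.4388 cm·K is the second radiation constant; with D₀ ≈ 1100 cm⁻¹ the continuum weakens by roughly a factor of 1.6 between 296 K and 323 K. A sketch of this functional form only (no fitted amplitudes from the paper):

```python
import math

C2_CM_K = 1.4388  # second radiation constant hc/kB, in cm*K

def self_continuum(T_kelvin: float, cs_ref: float, T_ref: float = 296.0,
                   d0_cm: float = 1100.0) -> float:
    """Exponential-in-1/T model for the self-continuum cross section, with the
    1/T slope set by the water-dimer dissociation energy D0 (in cm^-1)."""
    return cs_ref * math.exp(C2_CM_K * d0_cm * (1.0 / T_kelvin - 1.0 / T_ref))

ratio = self_continuum(296.0, 1.0) / self_continuum(323.0, 1.0)  # ~1.56: weaker at higher T
```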

  10. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    PubMed

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user-selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64-slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. 
In addition, to assess the accuracy of each method in estimating patient organ doses, Monte Carlo simulations were performed by creating voxelized models of each patient, identifying key organs and incorporating tube current values into the simulations to estimate dose to the lungs and breasts (females only) for chest scans and the liver, kidney, and spleen for abdomen/pelvis scans. Organ doses from simulations using the actual tube current values were compared to those using each of the estimated tube current values (actual-topo and sim-topo). When compared to the actual tube current values, the average error for tube current values estimated from the actual topogram (actual-topo) and simulated topogram (sim-topo) was 3.9% and 5.8% respectively. For Monte Carlo simulations of chest CT exams using the actual tube current values and estimated tube current values (based on the actual-topo and sim-topo methods), the average differences for lung and breast doses ranged from 3.4% to 6.6%. For abdomen/pelvis exams, the average differences for liver, kidney, and spleen doses ranged from 4.2% to 5.3%. Strong agreement between organ doses estimated using actual and estimated tube current values provides validation of both methods for estimating tube current values based on data provided in the topogram or simulated from image data. © 2017 American Association of Physicists in Medicine.
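
The per-method error figures quoted above are averages of view-by-view percentage differences between estimated and actual tube current. A minimal sketch of that comparison metric (the mA values below are hypothetical):

```python
def mean_abs_percent_error(actual: list[float], estimated: list[float]) -> float:
    """Average of |estimated - actual| / actual over projection views, in percent."""
    errs = [abs(e - a) / a for a, e in zip(actual, estimated)]
    return 100.0 * sum(errs) / len(errs)

# hypothetical per-view tube current values (mA): actual vs. estimated
mape = mean_abs_percent_error([200.0, 180.0, 220.0], [208.0, 171.0, 220.0])  # 3.0 (%)
```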

  11. SeaWiFS technical report series. Volume 26: Results of the SeaWiFS Data Analysis Round-Robin, July 1994 (DARR-1994)

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Siegel, David A.; Obrien, Margaret C.; Sorensen, Jen C.; Konnoff, Daniel A.; Brody, Eric A.; Mueller, James L.; Davis, Curtiss O.; Rhea, W. Joseph

    1995-01-01

    The accurate determination of upper ocean apparent optical properties (AOP's) is essential for the vicarious calibration of the sea-viewing wide field-of-view sensor (SeaWiFS) instrument and the validation of the derived data products. To evaluate the role that data analysis methods have upon values of derived AOP's, the first Data Analysis Round-Robin (DARR-94) workshop was sponsored by the SeaWiFS Project during 21-23 July, 1994. The focus of this intercomparison study was the estimation of the downwelling irradiance spectrum just beneath the sea surface, E_d(0⁻, λ); the upwelling nadir radiance just beneath the sea surface, L_u(0⁻, λ); and the vertical profile of the diffuse attenuation coefficient spectrum, K_d(z, λ). In the results reported here, different methodologies from four research groups were applied to an identical set of 10 spectroradiometry casts in order to evaluate the degree to which data analysis methods influence AOP estimation, and whether any general improvements can be made. The overall results of DARR-94 are presented in Chapter 1 and the individual methods of the four groups are presented in Chapters 2-5. The DARR-94 results do not show a clear winner among data analysis methods evaluated. It is apparent, however, that some degree of outlier rejection is required in order to accurately estimate L_u(0⁻, λ) or E_d(0⁻, λ). Furthermore, the calculation, evaluation and exploitation of confidence intervals for the AOP determinations needs to be explored. That is, the SeaWiFS calibration and validation problem should be recast in statistical terms where the in situ AOP values are statistical estimates with known confidence intervals.
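
Of the AOPs listed above, the diffuse attenuation coefficient is typically recovered as the negative slope of ln E_d against depth over a local window, since E_d(z) = E_d(0⁻)·exp(−K_d·z). A minimal least-squares sketch (the profile values below are synthetic, generated from an assumed K_d, not DARR-94 data):

```python
import math

def diffuse_attenuation(depths_m: list[float], ed: list[float]) -> float:
    """K_d (1/m) from the least-squares slope of ln(E_d) versus depth."""
    n = len(depths_m)
    y = [math.log(e) for e in ed]
    zbar, ybar = sum(depths_m) / n, sum(y) / n
    slope = (sum((z - zbar) * (v - ybar) for z, v in zip(depths_m, y))
             / sum((z - zbar) ** 2 for z in depths_m))
    return -slope

# synthetic profile decaying with K_d = 0.12 m^-1
depths = [1.0, 3.0, 5.0, 7.0]
kd = diffuse_attenuation(depths, [math.exp(-0.12 * z) for z in depths])  # 0.12
```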

  12. Complete fourier direct magnetic resonance imaging (CFD-MRI) for diffusion MRI

    PubMed Central

    Özcan, Alpay

    2013-01-01

    The foundation for an accurate and unifying Fourier-based theory of diffusion weighted magnetic resonance imaging (DW–MRI) is constructed by carefully re-examining the first principles of DW–MRI signal formation and deriving its mathematical model from scratch. The derivations are specifically obtained for DW–MRI signal by including all of its elements (e.g., imaging gradients) using complex values. Particle methods are utilized in contrast to conventional partial differential equations approach. The signal is shown to be the Fourier transform of the joint distribution of number of the magnetic moments (at a given location at the initial time) and magnetic moment displacement integrals. In effect, the k-space is augmented by three more dimensions, corresponding to the frequency variables dual to displacement integral vectors. The joint distribution function is recovered by applying the Fourier transform to the complete high-dimensional data set. In the process, to obtain a physically meaningful real valued distribution function, phase corrections are applied for the re-establishment of Hermitian symmetry in the signal. Consequently, the method is fully unconstrained and directly presents the distribution of displacement integrals without any assumptions such as symmetry or Markovian property. The joint distribution function is visualized with isosurfaces, which describe the displacement integrals, overlaid on the distribution map of the number of magnetic moments with low mobility. The model provides an accurate description of the molecular motion measurements via DW–MRI. The improvement of the characterization of tissue microstructure leads to a better localization, detection and assessment of biological properties such as white matter integrity. The results are demonstrated on the experimental data obtained from an ex vivo baboon brain. PMID:23596401

  13. Activity concentration measurements using a conjugate gradient (Siemens xSPECT) reconstruction algorithm in SPECT/CT.

    PubMed

    Armstrong, Ian S; Hoffmann, Sandra A

    2016-11-01

    Quantitative single photon emission computed tomography (SPECT) shows potential in a number of clinical applications, and several vendors now provide software and hardware solutions that allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm 'xSPECT' from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8 : 1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by manufacturer presets available with the algorithm. The accuracy of activity concentration measurements was assessed. A dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values whereas positive bias was measured in the four largest NEMA spheres. Nonmonotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can easily be measured on images reconstructed with the xSPECT algorithm without a CCF. However, the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately.
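
The manual SUV conversion mentioned at the end needs only the decay-corrected injected activity at scan time. A hedged sketch using the body-weight SUV convention (the 6.01 h half-life shown is for a Tc-99m tracer and would be changed for other radionuclides; this is not the vendor's implementation):

```python
import math

def suv_bw(voxel_bq_per_ml: float, injected_bq: float, weight_kg: float,
           delay_h: float, half_life_h: float = 6.01) -> float:
    """Body-weight SUV: tissue concentration divided by (decay-corrected
    injected activity / body weight), treating 1 kg of tissue as 1000 mL."""
    activity_at_scan = injected_bq * math.exp(-math.log(2) * delay_h / half_life_h)
    return voxel_bq_per_ml / (activity_at_scan / (weight_kg * 1000.0))

suv = suv_bw(voxel_bq_per_ml=5000.0, injected_bq=700e6,
             weight_kg=70.0, delay_h=0.0)  # 0.5 with no decay delay
```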

  14. Quantitative Skeletal Muscle MRI: Part 1, Derived T2 Fat Map in Differentiation Between Boys With Duchenne Muscular Dystrophy and Healthy Boys.

    PubMed

    Johnston, Jennifer H; Kim, Hee Kyung; Merrow, Arnold C; Laor, Tal; Serai, Suraj; Horn, Paul S; Kim, Dong Hoon; Wong, Brenda L

    2015-08-01

    The purpose of this study was to validate derived T2 maps as an objective measure of muscular fat for discrimination between boys with Duchenne muscular dystrophy (DMD) and healthy boys. Forty-two boys with DMD (mean age, 9.9 years) and 31 healthy boys (mean age, 11.4 years) were included in the study. Age, body mass index, and clinical function scale grade were evaluated. T1-weighted MR images and T2 maps with and without fat suppression were obtained. Fatty infiltration was graded 0-4 on T1-weighted images, and derived T2 fat values (difference between mean T2 values from T2 maps with and without fat suppression) of the gluteus maximus and vastus lateralis muscles were calculated. Group comparisons were performed. The upper limit of the 95% reference interval of T2 fat values from the control group was applied. There was no significant difference in age or body mass index between groups. All healthy boys and 19 boys (45.2%) with DMD had a normal clinical function scale grade. Grade 1 fatty infiltration was seen in 90.3% (gluteus maximus) and 71.0% (vastus lateralis) of healthy boys versus 33.3% (gluteus maximus) and 52.4% (vastus lateralis) of boys with DMD. T2 fat values of boys with DMD were significantly longer than in the control group (p < 0.001). Using a 95% reference interval for healthy boys for the gluteus maximus (28.3 milliseconds) allowed complete separation from boys with DMD (100% sensitivity, 100% specificity), whereas the values for the vastus lateralis (7.28 milliseconds) resulted in 83.3% sensitivity and 100% specificity. Measurement of muscular fat with T2 maps is accurate for differentiating boys with DMD from healthy boys.
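
The discrimination rule above is a simple threshold on the derived T2 fat value, the difference between muscle T2 measured without and with fat suppression, using the 95% reference-interval upper limit from the healthy controls. A minimal sketch (the example T2 inputs are hypothetical):

```python
def derived_t2_fat(t2_no_fatsat_ms: float, t2_fatsat_ms: float) -> float:
    """Derived T2 fat value: T2 without fat suppression minus T2 with it."""
    return t2_no_fatsat_ms - t2_fatsat_ms

def flag_dmd_gluteus(t2_no_fatsat_ms: float, t2_fatsat_ms: float,
                     cutoff_ms: float = 28.3) -> bool:
    """Exceeding the healthy-control 95% upper limit for the gluteus maximus
    (28.3 ms) gave complete separation of the DMD group in this study."""
    return derived_t2_fat(t2_no_fatsat_ms, t2_fatsat_ms) > cutoff_ms

flag_dmd_gluteus(65.0, 30.0)  # True: fat value of 35.0 ms exceeds the 28.3 ms cutoff
```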

  15. An accurate computational method for the diffusion regime verification

    NASA Astrophysics Data System (ADS)

    Zhokh, Alexey A.; Strizhak, Peter E.

    2018-04-01

    The diffusion regime (sub-diffusive, standard, or super-diffusive) is defined by the order of the derivative in the corresponding transport equation. We develop an accurate computational method for the direct estimation of the diffusion regime. The method is based on the derivative order estimation using the asymptotic analytic solutions of the diffusion equation with the integer order and the time-fractional derivatives. The robustness and the computational cheapness of the proposed method are verified using the experimental methane and methyl alcohol transport kinetics through the catalyst pellet.
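
Although the paper's method works on asymptotic analytic solutions of the integer- and fractional-order diffusion equations, the regimes it distinguishes can be illustrated by the scaling exponent α in MSD ∝ t^α (α < 1 sub-diffusive, α = 1 standard, α > 1 super-diffusive). A generic log-log slope estimate, not the authors' algorithm, on synthetic sub-diffusive data:

```python
import math

def diffusion_exponent(times: list[float], msd: list[float]) -> float:
    """Fit alpha in MSD ~ t**alpha by least squares in log-log coordinates."""
    x = [math.log(t) for t in times]
    y = [math.log(m) for m in msd]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
            / sum((a - xbar) ** 2 for a in x))

# synthetic sub-diffusive data, MSD = t**0.5
ts = [1.0, 2.0, 4.0, 8.0]
alpha = diffusion_exponent(ts, [t ** 0.5 for t in ts])  # 0.5 -> sub-diffusive regime
```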

  16. Quantitative imaging of peripheral trabecular bone microarchitecture using MDCT.

    PubMed

    Chen, Cheng; Zhang, Xiaoliu; Guo, Junfeng; Jin, Dakai; Letuchy, Elena M; Burns, Trudy L; Levy, Steven M; Hoffman, Eric A; Saha, Punam K

    2018-01-01

    Osteoporosis associated with reduced bone mineral density (BMD) and microarchitectural changes puts patients at an elevated risk of fracture. Modern multidetector row CT (MDCT) technology, producing high spatial resolution at increasingly lower radiation dose, is emerging as a viable modality for trabecular bone (Tb) imaging. Wide variation in CT scanners raises concerns of data uniformity in multisite and longitudinal studies. A comprehensive cadaveric study was performed to evaluate MDCT-derived Tb microarchitectural measures. A human pilot study was performed comparing continuity of Tb measures estimated from two MDCT scanners with significantly different image resolution features. Micro-CT imaging of cadaveric ankle specimens (n = 25) was used to examine the validity of MDCT-derived Tb microarchitectural measures. Repeat-scan reproducibility of MDCT-based Tb measures and their ability to predict mechanical properties were examined. To assess multiscanner data continuity of Tb measures, the distal tibias of 20 volunteers (age: 26.2 ± 4.5 y; 10 female) were scanned using the Siemens SOMATOM Definition Flash and the higher resolution Siemens SOMATOM Force scanners with an average 45-day time gap between scans. The correlation of Tb measures derived from the two scanners over 30% and 60% peel regions at the 4% to 8% of distal tibia was analyzed. MDCT-based Tb measures characterizing bone network area density, plate-rod microarchitecture, and transverse trabeculae showed good correlations (r ∈ [0.85, 0.92]) with the gold standard micro-CT-derived values of matching Tb measures. However, other MDCT-derived Tb measures characterizing trabecular thickness and separation, erosion index, and structure model index produced weak correlations (r < 0.8) with their micro-CT-derived values. Most MDCT Tb measures were found repeatable (ICC ∈ [0.94, 0.98]). 
The Tb plate-width measure showed a strong correlation (r = 0.89) with experimental yield stress, while the transverse trabecular measure produced the highest correlation (r = 0.81) with Young's modulus. The data continuity experiment showed that, despite significant differences in image resolution between two scanners (10% MTF along xy-plane and z-direction - Flash: 16.2 and 17.9 lp/cm; Force: 24.8 and 21.0 lp/cm), most Tb measures had high Pearson correlations (r > 0.95) between values estimated from the two scanners. Relatively lower correlation coefficients were observed for the bone network area density (r = 0.91) and Tb separation (r = 0.93) measures. Most MDCT-derived Tb microarchitectural measures are reproducible and their values derived from two scanners strongly correlate with each other as well as with bone strength. This study has highlighted those MDCT-derived measures which show the greatest promise for characterization of bone network area density, plate-rod and transverse trabecular distributions with a good correlation (r ≥ 0.85) compared with their micro-CT-derived values. At the same time, other measures representing trabecular thickness and separation, erosion index, and structure model index produced weak correlations (r < 0.8) with their micro-CT-derived values, failing to accurately portray the projected trabecular microarchitectural features. Strong correlations of Tb measures estimated from two scanners suggest that image data from different scanners can be used successfully in multisite and longitudinal studies with linear calibration required for some measures. In summary, modern MDCT scanners are suitable for effective quantitative imaging of peripheral Tb microarchitecture if care is taken to focus on appropriate quantitative metrics. © 2017 American Association of Physicists in Medicine.

  17. Development and validation of a novel predictive scoring model for microvascular invasion in patients with hepatocellular carcinoma.

    PubMed

    Zhao, Hui; Hua, Ye; Dai, Tu; He, Jian; Tang, Min; Fu, Xu; Mao, Liang; Jin, Huihan; Qiu, Yudong

    2017-03-01

    Microvascular invasion (MVI) in patients with hepatocellular carcinoma (HCC) cannot be accurately predicted preoperatively. This study aimed to establish a predictive scoring model of MVI in solitary HCC patients without macroscopic vascular invasion. A total of 309 consecutive HCC patients who underwent curative hepatectomy were divided into a derivation cohort (n = 206) and a validation cohort (n = 103). A predictive scoring model of MVI was established from the significant predictors in the derivation cohort based on multivariate logistic regression analysis, and its performance was evaluated in both cohorts. Preoperative imaging features on contrast-enhanced CT (CECT), such as intratumoral arteries, non-nodular type of HCC, and absence of a radiological tumor capsule, were independent predictors of MVI. The predictive scoring model was established according to the β coefficients of the three predictors. The area under the receiver operating characteristic curve (AUROC) of the predictive scoring model was 0.872 (95% CI, 0.817-0.928) in the derivation cohort and 0.856 (95% CI, 0.771-0.940) in the validation cohort. The positive and negative predictive values were 76.5% and 88.0% in the derivation cohort and 74.4% and 88.3% in the validation cohort. The performance of the model in AUROC was similar between patients with tumor size ≤ 5 cm and > 5 cm (P = 0.910). The predictive scoring model based on intratumoral arteries, non-nodular type of HCC, and absence of a radiological tumor capsule on preoperative CECT is of great value for predicting MVI regardless of tumor size. Copyright © 2017 Elsevier B.V. All rights reserved.
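
    A score built from rounded β coefficients can be applied as a simple weighted sum over the binary imaging findings. A minimal sketch, noting that the integer weights below are hypothetical placeholders, not the weights reported in the paper:

```python
# Hypothetical integer weights, e.g. obtained by rounding logistic-regression
# beta coefficients; the paper's actual weights are not reproduced here.
WEIGHTS = {
    "intratumoral_arteries": 2,
    "non_nodular_type": 2,
    "absent_tumor_capsule": 1,
}

def mvi_score(findings: dict) -> int:
    """Sum the weight of every CECT predictor flagged as present."""
    return sum(w for name, w in WEIGHTS.items() if findings.get(name))

# a patient with two of the three findings present
score = mvi_score({"intratumoral_arteries": True, "absent_tumor_capsule": True})
```

    In practice the total score would then be compared against a cut-off chosen from the derivation-cohort ROC curve.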

  18. Use of a Principal Components Analysis for the Generation of Daily Time Series.

    NASA Astrophysics Data System (ADS)

    Dreveton, Christine; Guillou, Yann

    2004-07-01

    A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges for a generator used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent the present climate correctly. The results show that the generator correctly represents the interannual variance of the observed climate; this is the main result of the work, because a key shortcoming of other generators is their inability to represent the observed interannual climate variance accurately, which is not acceptable for an application to weather derivatives. The generator was also tested for calculating conditional probabilities: for example, knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
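
    The core scheme described above, decorrelate with a principal components analysis, draw each component score independently, then rotate back, can be sketched as follows. This is an illustrative reconstruction under stated assumptions (Gaussian scores with the observed component variances), not the authors' implementation:

```python
import numpy as np

def generate_day(X: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Given observed daily anomalies X (days x variables), fit a PCA and
    synthesize one new day: draw each principal-component score
    independently, then rotate back to the original variables."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)  # PCA axes of the anomaly covariance
    # independent random draw per component, scaled by its variance
    scores = rng.normal(0.0, np.sqrt(np.clip(evals, 0.0, None)))
    return mu + evecs @ scores
```

    Because the component scores are drawn independently, the synthesized days reproduce the observed covariance structure by construction.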


  19. Two Dimensional Symmetric Correlation Functions of the S Operator and Two Dimensional Fourier Transforms: Considering the Line Coupling for P and R Lines of Linear Molecules

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Boulet, C.; Tipping, R. H.

    2014-01-01

    The refinement of the Robert-Bonamy (RB) formalism by considering the line coupling for isotropic Raman Q lines of linear molecules developed in our previous study [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)] has been extended to infrared P and R lines. In these calculations, the main task is to derive diagonal and off-diagonal matrix elements of the Liouville operator iS1 - S2 introduced in the formalism. When one considers the line coupling for isotropic Raman Q lines, whose initial and final rotational quantum numbers are identical, the derivation of off-diagonal elements requires no correlation functions of the Ŝ operator, or Fourier transforms thereof, beyond those used in deriving the diagonal elements. In contrast, the derivations for infrared P and R lines are more difficult because they require many new correlation functions and their Fourier transforms. By introducing two-dimensional correlation functions labeled by two tensor ranks, and making variable changes so that they become even functions, the derivations require only their two-dimensional Fourier transforms evaluated at two modulation frequencies characterizing the averaged energy gap and the frequency detuning between the two coupled transitions. With the coordinate representation, it is easy to accurately derive these two-dimensional correlation functions. Meanwhile, by using the sampling theory one is able to effectively evaluate their two-dimensional Fourier transforms. Thus, the obstacles to considering the line coupling for P and R lines have been overcome. Numerical calculations have been carried out for the half-widths of both the isotropic Raman Q lines and the infrared P and R lines of C2H2 broadened by N2. In comparison with values derived from the RB formalism, the new calculated values are significantly reduced and closer to measurements.

  20. Two dimensional symmetric correlation functions of the S-circumflex operator and two dimensional Fourier transforms: Considering the line coupling for P and R lines of linear molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Q.; Boulet, C.; Tipping, R. H.

    The refinement of the Robert-Bonamy (RB) formalism by considering the line coupling for isotropic Raman Q lines of linear molecules developed in our previous study [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)] has been extended to infrared P and R lines. In these calculations, the main task is to derive diagonal and off-diagonal matrix elements of the Liouville operator iS1 − S2 introduced in the formalism. When one considers the line coupling for isotropic Raman Q lines, whose initial and final rotational quantum numbers are identical, the derivation of off-diagonal elements does not require correlation functions of the Ŝ operator, or Fourier transforms thereof, beyond those used in deriving the diagonal elements. In contrast, the derivations for infrared P and R lines become more difficult because they require many new correlation functions and their Fourier transforms. By introducing two-dimensional correlation functions labeled by two tensor ranks, and making variable changes so that they become even functions, the derivations require only their two-dimensional Fourier transforms evaluated at two modulation frequencies characterizing the averaged energy gap and the frequency detuning between the two coupled transitions. With the coordinate representation, it is easy to accurately derive these two-dimensional correlation functions. Meanwhile, by using the sampling theory one is able to effectively evaluate their two-dimensional Fourier transforms. Thus, the obstacles to considering the line coupling for P and R lines have been overcome. Numerical calculations have been carried out for the half-widths of both the isotropic Raman Q lines and the infrared P and R lines of C2H2 broadened by N2. In comparison with values derived from the RB formalism, the new calculated values are significantly reduced and closer to measurements.

  1. Sound transmission loss of composite sandwich panels

    NASA Astrophysics Data System (ADS)

    Zhou, Ran

    Light composite sandwich panels are increasingly used in automobiles, ships and aircraft, because of the advantages they offer of high strength-to-weight ratios. However, the acoustical properties of these light and stiff structures can be less desirable than those of equivalent metal panels. These undesirable properties can lead to high interior noise levels. A number of researchers have studied the acoustical properties of honeycomb and foam sandwich panels. Not much work, however, has been carried out on foam-filled honeycomb sandwich panels. In this dissertation, governing equations for the forced vibration of asymmetric sandwich panels are developed. An analytical expression for modal densities of symmetric sandwich panels is derived from a sixth-order governing equation. A boundary element analysis model for the sound transmission loss of symmetric sandwich panels is proposed. Measurements of the modal density, total loss factor, radiation loss factor, and sound transmission loss of foam-filled honeycomb sandwich panels with different configurations and thicknesses are presented. Comparisons between the predicted sound transmission loss values obtained from wave impedance analysis, statistical energy analysis, boundary element analysis, and experimental values are presented. The wave impedance analysis model provides accurate predictions of sound transmission loss for the thin foam-filled honeycomb sandwich panels at frequencies above their first resonance frequencies. The predictions from the statistical energy analysis model are in better agreement with the experimental transmission loss values of the sandwich panels when the measured radiation loss factor values near coincidence are used instead of the theoretical values for single-layer panels. 
The proposed boundary element analysis model provides more accurate predictions of sound transmission loss for the thick foam-filled honeycomb sandwich panels than either the wave impedance analysis model or the statistical energy analysis model.

  2. Multipole correction of atomic monopole models of molecular charge distribution. I. Peptides

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Keller, D. A.; Ornstein, R. L.; Rein, R.

    1993-01-01

    The defects in atomic monopole models of molecular charge distribution have been analyzed for several model-blocked peptides and compared with accurate quantum chemical values. The results indicate that the angular characteristics of the molecular electrostatic potential around functional groups capable of forming hydrogen bonds can be considerably distorted within various models relying upon isotropic atomic charges only. It is shown that these defects can be corrected by augmenting the atomic point charge models with cumulative atomic multipole moments (CAMMs). Alternatively, sets of off-center atomic point charges could be automatically derived from the respective multipoles, providing approximately equivalent corrections. For the first time, correlated atomic multipoles have been calculated for N-acetyl, N'-methylamide-blocked derivatives of glycine, alanine, cysteine, threonine, leucine, lysine, and serine using the MP2 method. The role of correlation effects in the peptide molecular charge distribution is discussed.

  3. On the degrees of freedom of reduced-rank estimators in multivariate regression

    PubMed Central

    Mukherjee, A.; Chen, K.; Wang, N.; Zhu, J.

    2015-01-01

    We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example. PMID:26702155

  4. A relativistic coupled-cluster interaction potential and rovibrational constants for the xenon dimer

    NASA Astrophysics Data System (ADS)

    Jerabek, Paul; Smits, Odile; Pahl, Elke; Schwerdtfeger, Peter

    2018-01-01

    An accurate potential energy curve has been derived for the xenon dimer using state-of-the-art relativistic coupled-cluster theory up to quadruple excitations accounting for both basis set superposition and incompleteness errors. The data obtained is fitted to a computationally efficient extended Lennard-Jones potential form and to a modified Tang-Toennies potential function treating the short- and long-range part separately. The vibrational spectrum of Xe2 obtained from a numerical solution of the rovibrational Schrödinger equation and subsequently derived spectroscopic constants are in excellent agreement with experimental values. We further present solid-state calculations for xenon using a static many-body expansion up to fourth-order in the xenon interaction potential including dynamic effects within the Einstein approximation. Again we find very good agreement with the experimental (face-centred cubic) lattice constant and cohesive energy.

  5. Simple formula for the surface area of the body and a simple model for anthropometry.

    PubMed

    Reading, Bruce D; Freeman, Brian

    2005-03-01

    The body surface area (BSA) of any adult, when derived from the arithmetic mean of the different values calculated from four independent accepted formulae, can be expressed accurately in Systeme International d'Unites (SI) units by the simple equation BSA = (1/6)(WH)^0.5, where W is body weight in kg, H is body height in m, and BSA is in m². This formula, which is derived in part by modeling the body as a simple solid of revolution or a prolate spheroid (i.e., a stretched ellipsoid of revolution), gives students, teachers, and clinicians a simple rule for the rapid estimation of surface area using rational units. The formula was tested independently for human subjects by using it to predict body volume and then comparing this prediction against the actual volume measured by Archimedes' principle. Copyright 2005 Wiley-Liss, Inc.
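
    The quoted rule is simple enough to compute directly; a minimal sketch in the SI units stated in the abstract:

```python
import math

def bsa(weight_kg: float, height_m: float) -> float:
    """Body surface area via BSA = (1/6) * sqrt(W * H),
    with W in kg, H in m, and the result in m^2."""
    return math.sqrt(weight_kg * height_m) / 6.0

# e.g. a 70 kg, 1.70 m adult -> about 1.82 m^2
area = bsa(70.0, 1.70)
```

    Note that many older formulas (e.g. Du Bois) expect height in cm; this rule uses metres, which is part of its appeal as a rational-units mnemonic.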

  6. Ellipsoidal corrections for geoid undulation computations using gravity anomalies in a cap

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1981-01-01

    Ellipsoidal correction terms have been derived for geoid undulation computations when the Stokes equation using gravity anomalies in a cap is combined with potential coefficient information. The correction terms are long wavelength and depend on the cap size in which its gravity anomalies are given. Using the regular Stokes equation, the maximum correction for a cap size of 20 deg is -33 cm, which reduces to -27 cm when the Stokes function is modified by subtracting the value of the Stokes function at the cap radius. Ellipsoidal correction terms were also derived for the well-known Marsh/Chang geoids. When no gravity was used, the correction could reach 101 cm, while for a cap size of 20 deg the maximum correction was -45 cm. Global correction maps are given for a number of different cases. For work requiring accurate geoid computations these correction terms should be applied.

  7. Photosynthesis, Earth System Models and the Arctic

    NASA Astrophysics Data System (ADS)

    Rogers, A.; Sloan, V. L.; Xu, C.; Wullschleger, S. D.

    2013-12-01

    The primary goal of Earth System Models (ESMs) is to improve understanding and projection of future global change. To do this they must accurately represent the huge carbon fluxes associated with the terrestrial carbon cycle. Photosynthetic CO2 uptake is the largest of these fluxes and is well described by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. Most ESMs use a derivation of the FvCB model to calculate gross primary productivity (GPP). One of the key parameters required by the FvCB model is an estimate of the maximum rate of carboxylation by the enzyme Rubisco (Vc,max). In ESMs the parameter Vc,max is usually fixed for a given plant functional type (PFT). Although Arctic GPP is a small flux relative to global GPP, its uncertainty is large. Only four ESMs currently have an explicit Arctic PFT, and the data used to derive Vc,max for the Arctic PFT in these models rely on small data sets and unjustified assumptions. As part of a multidisciplinary project to improve the representation of the Arctic in ESMs (Next Generation Ecosystem Experiments - Arctic), we examined the derivation of Vc,max in current Arctic PFTs and estimated Vc,max for 12 species representing both dominant vegetation and key PFTs growing on the Barrow Environmental Observatory, Barrow, AK. The values of Vc,max currently used to represent Arctic PFTs in ESMs are 70% lower than the values we measured in these species. Separate measurements of CO2 assimilation (A) made at ambient conditions were compared with A modeled using the Vc,max values we measured in Barrow and those used by the ESMs. The A modeled with the Vc,max values used by the ESMs was 80% lower than the observed A; when our measured Vc,max values were used, modeled A was within 5% of observed A. Examination of the derivation of Vc,max in ESMs showed that the relatively low Vc,max values result from underestimating both the leaf N content and the investment of that N in Rubisco. 
Here we have identified possible improvements to the derivation of Vc,max in ESMs and provided new physiological characterization of Arctic species that is mechanistically consistent with observed leaf level CO2 uptake. These data suggest that the Arctic tundra has a much greater capacity for CO2 uptake than is currently represented in ESMs. Our parameterization can be used in future model projections to improve representation of the Arctic landscape in ESMs.
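
    To illustrate why Vc,max propagates so directly into modeled CO2 uptake, here is a sketch of the Rubisco-limited assimilation rate from the FvCB model. The kinetic constants below are commonly tabulated 25 °C values assumed for illustration, not values from this study:

```python
def rubisco_limited_a(ci, vcmax, rd=1.0,
                      gamma_star=42.75,  # CO2 compensation point (umol/mol), assumed
                      kc=404.9,          # Michaelis constant for CO2 (umol/mol), assumed
                      ko=278.4,          # Michaelis constant for O2 (mmol/mol), assumed
                      o=210.0):          # O2 mole fraction (mmol/mol)
    """Net Rubisco-limited assimilation (umol m-2 s-1) in the FvCB model:
    A = Vcmax * (Ci - Gamma*) / (Ci + Kc * (1 + O/Ko)) - Rd."""
    return vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko)) - rd
```

    At fixed intercellular CO2, gross assimilation scales linearly with Vc,max in this limit, so a large underestimate of Vc,max translates almost one-for-one into underestimated GPP.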

  8. Optimal Waist-to-Height Ratio Values for Cardiometabolic Risk Screening in an Ethnically Diverse Sample of South African Urban and Rural School Boys and Girls

    PubMed Central

    Matsha, Tandi E.; Kengne, Andre-Pascal; Yako, Yandiswa Y.; Hon, Gloudina M.; Hassan, Mogamat S.; Erasmus, Rajiv T.

    2013-01-01

    Background The proposed waist-to-height ratio (WHtR) cut-off of 0.5 is less optimal for cardiometabolic risk screening in children in many settings. The purpose of this study was to determine the optimal WHtR for children from South Africa, and to investigate variations in the achieved value by gender, ethnicity and residence. Methods Metabolic syndrome (MetS) components were measured in 1272 randomly selected learners, aged 10-16 years, comprising 446 black Africans, 696 of mixed ancestry and 130 Caucasians. The Youden's index and the closest-top-left (CTL) point approaches were used to derive WHtR cut-offs for diagnosing any two MetS components, excluding the waist circumference. Results The two approaches yielded a similar cut-off in girls, 0.465 (sensitivity 50.0, specificity 69.5), but two different values in boys, 0.455 (42.9, 88.4) and 0.425 (60.3, 67.7), based on the Youden's index and the CTL point, respectively. Furthermore, the derived WHtR cut-off values differed substantially among the regions and ethnic groups investigated; the highest cut-offs were observed in semi-rural and white children, respectively: Youden's index 0.505 (31.6, 87.1) and CTL point 0.475 (44.4, 75.9). Conclusion The WHtR cut-off of 0.5 is less accurate for screening cardiovascular risk in South African children. The optimal value in this setting is likely gender- and ethnicity-specific and sensitive to urbanization. PMID:23967160
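
    The two cutoff-selection rules named above are easy to state in code. A sketch operating on precomputed ROC coordinates (the arrays in the usage example are toy values, not the study's data):

```python
import numpy as np

def select_cutoffs(thresholds, sens, spec):
    """Return (a) the cutoff maximizing Youden's J = sens + spec - 1 and
    (b) the cutoff minimizing the Euclidean distance to the top-left
    ROC corner (false-positive rate 0, sensitivity 1)."""
    thresholds, sens, spec = map(np.asarray, (thresholds, sens, spec))
    j = sens + spec - 1.0
    d = np.hypot(1.0 - sens, 1.0 - spec)
    return thresholds[np.argmax(j)], thresholds[np.argmin(d)]

# toy ROC points: the two criteria can disagree, as they did for boys
youden_cut, ctl_cut = select_cutoffs(
    [0.40, 0.45, 0.50], [0.95, 0.6, 0.5], [0.3, 0.7, 0.95])
```

    Youden's J weights sensitivity and specificity equally, while the CTL point penalizes large imbalances between them, which is why the two rules can land on different cutoffs for the same ROC curve.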

  9. Quantitative determination of ambroxol in tablets by derivative UV spectrophotometric method and HPLC.

    PubMed

    Dinçer, Zafer; Basan, Hasan; Göger, Nilgün Günden

    2003-04-01

    A derivative UV spectrophotometric method for the determination of ambroxol in tablets was developed. Determination of ambroxol in tablets was carried out using a first-order derivative UV spectrophotometric method at 255 nm (n = 5). Standards for the calibration graph, ranging from 5.0 to 35.0 microg/ml, were prepared from a stock solution. The proposed method was accurate, with a recovery of 98.6 +/- 0.4%, and precise, with a coefficient of variation (CV) of 1.22. These results were compared with those obtained by two reference methods: a zero-order UV spectrophotometric method and a reversed-phase high-performance liquid chromatography (HPLC) method. A reversed-phase C(18) column with an aqueous phosphate (0.01 M)-acetonitrile-glacial acetic acid (59:40:1, v/v/v) (pH 3.12) mobile phase was used, and the UV detector was set to 252 nm. Calibration solutions used in HPLC ranged from 5.0 to 20.0 microg/ml. Results obtained by the derivative UV spectrophotometric method were comparable to those obtained by the reference methods as far as the ANOVA test, F(calculated) = 0.762 and F(theoretical) = 3.89, was concerned. Copyright 2003 Elsevier Science B.V.

  10. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation of our analytical PCA approach. Unlike the prevailing representative-type approaches in the literature, which use only centers, vertices, etc., our approach is able to use all of the variance information in the original data. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  11. Design optimization of an axial-field eddy-current magnetic coupling based on magneto-thermal analytical model

    NASA Astrophysics Data System (ADS)

    Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine

    2018-03-01

    This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and a population-based optimization algorithm. The main objective of this paper is to determine the best design in terms of magnet volume in order to transmit a torque between two movers, while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed. Nevertheless, in order to solve more realistic problems and thus take into account the thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.

  12. A comparison of airborne wake vortex detection measurements with values predicted from potential theory

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    1991-01-01

    An analysis of flight measurements made near a wake vortex was conducted to explore the feasibility of providing a pilot with useful wake avoidance information. The measurements were made with relatively low cost flow and motion sensors on a light airplane flying near the wake vortex of a turboprop airplane weighing approximately 90000 lbs. Algorithms were developed which removed the response of the airplane to control inputs from the total airplane response and produced parameters which were due solely to the flow field of the vortex. These parameters were compared with values predicted by potential theory. The results indicated that the presence of the vortex could be detected by a combination of parameters derived from the simple sensors. However, the location and strength of the vortex cannot be determined without additional and more accurate sensors.

  13. Accurate derivation of heart rate variability signal for detection of sleep disordered breathing in children.

    PubMed

    Chatlapalli, S; Nazeran, H; Melarkod, V; Krishnam, R; Estrada, E; Pamula, Y; Cabrera, S

    2004-01-01

    The electrocardiogram (ECG) signal is used extensively as a low-cost diagnostic tool to provide information concerning the heart's state of health. Accurate determination of the QRS complex, in particular reliable detection of the R wave peak, is essential in computer-based ECG analysis. ECG data from Physionet's Sleep-Apnea database were used to develop, test, and validate a robust heart rate variability (HRV) signal derivation algorithm. The HRV signal was derived from pre-processed ECG signals by developing an enhanced Hilbert transform (EHT) algorithm with built-in missing-beat detection capability for reliable QRS detection. The performance of the EHT algorithm was then compared against that of a popular Hilbert transform-based (HT) QRS detection algorithm. Autoregressive (AR) modeling of the HRV power spectrum for both EHT- and HT-derived HRV signals was performed, and different parameters from their power spectra as well as approximate entropy were derived for comparison. Poincaré plots were then used as a visualization tool to highlight the detection of the missing beats in the EHT method. After validation of the EHT algorithm on ECG data from Physionet, the algorithm was further tested and validated on a dataset obtained from children undergoing polysomnography for detection of sleep disordered breathing (SDB). Sensitive measures of accurate HRV signals were then derived to be used in detecting and diagnosing SDB in children. All signal processing algorithms were implemented in MATLAB. We present a description of the EHT algorithm and analyze pilot data for eight children undergoing nocturnal polysomnography. The pilot data demonstrated that the EHT method provides an accurate way of deriving the HRV signal and plays an important role in the extraction of reliable measures to distinguish between periods of normal breathing and SDB in children.
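
    A minimal sketch of Hilbert-envelope R-peak detection for deriving an RR interval series, the raw material of an HRV signal. The threshold and refractory window below are illustrative choices, not the paper's EHT algorithm:

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def rr_intervals(ecg: np.ndarray, fs: float) -> np.ndarray:
    """Differentiate to emphasize QRS slopes, take the Hilbert envelope,
    and pick peaks; successive peak spacings give RR intervals in seconds."""
    envelope = np.abs(hilbert(np.diff(ecg)))
    peaks, _ = find_peaks(envelope,
                          height=0.5 * envelope.max(),  # illustrative threshold
                          distance=int(0.3 * fs))       # ~300 ms refractory period
    return np.diff(peaks) / fs
```

    A missing-beat check, as in the EHT method, could then flag any RR interval far longer than its neighbours before spectral analysis.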

  14. Three-dimensional surface deformation derived from airborne interferometric UAVSAR: Application to the Slumgullion Landslide

    USGS Publications Warehouse

    Delbridge, Brent G.; Burgmann, Roland; Fielding, Eric; Hensley, Scott; Schulz, William

    2016-01-01

    In order to provide surface geodetic measurements with “landslide-wide” spatial coverage, we develop and validate a method for the characterization of 3-D surface deformation using the unique capabilities of the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) airborne repeat-pass radar interferometry system. We apply our method at the well-studied Slumgullion Landslide, which is 3.9 km long and moves persistently at rates up to ∼2 cm/day. A comparison with concurrent GPS measurements validates this method and shows that it provides reliable and accurate 3-D surface deformation measurements. The UAVSAR-derived vector velocity field measurements accurately capture the sharp boundaries defining previously identified kinematic units and geomorphic domains within the landslide. We acquired data across the landslide during spring and summer and identify that the landslide moves more slowly during summer except at its head, presumably in response to spatiotemporal variations in snowmelt infiltration. In order to constrain the mechanics controlling landslide motion from surface velocity measurements, we present an inversion framework for the extraction of slide thickness and basal geometry from dense 3-D surface velocity fields. We find that the average depth of the Slumgullion Landslide is 7.5 m, several meters less than previous depth estimates. We show that by considering a viscoplastic rheology, we can derive tighter theoretical bounds on the rheological parameter relating mean horizontal flow rate to surface velocity. Using inclinometer data for slow-moving, clay-rich landslides across the globe, we find a consistent value for the rheological parameter of 0.85 ± 0.08.

  15. Thawing quintessence with a nearly flat potential

    NASA Astrophysics Data System (ADS)

    Scherrer, Robert J.; Sen, A. A.

    2008-04-01

    The thawing quintessence model with a nearly flat potential provides a natural mechanism to produce an equation of state parameter, w, close to -1 today. We examine the behavior of such models for the case in which the potential satisfies the slow-roll conditions [(1/V)(dV/dϕ)]² ≪ 1 and (1/V)(d²V/dϕ²) ≪ 1, and we derive the analog of the slow-roll approximation for the case in which both matter and a scalar field contribute to the density. We show that in this limit, all such models converge to a unique relation between 1+w, Ωϕ, and the initial value of (1/V)(dV/dϕ). We derive this relation and use it to determine the corresponding expression for w(a), which depends only on the present-day values of w and Ωϕ. For a variety of potentials, our limiting expression for w(a) is typically accurate to within δw ≲ 0.005 for w < -0.9. For redshift z ≲ 1, w(a) is well fit by the Chevallier-Polarski-Linder parametrization, in which w(a) is a linear function of a.
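
    For reference, the Chevallier-Polarski-Linder form that the abstract says fits w(a) at z ≲ 1 is linear in the scale factor a; a one-line sketch:

```python
def w_cpl(a: float, w0: float, wa: float) -> float:
    """CPL parametrization: w(a) = w0 + wa * (1 - a), so w(a=1) = w0 today."""
    return w0 + wa * (1.0 - a)
```

    The pair (w0, wa) is the standard two-parameter summary used to compare dark-energy models against observations.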

  16. Turbulent eddy diffusion models in exposure assessment - Determination of the eddy diffusion coefficient.

    PubMed

    Shao, Yuan; Ramachandran, Sandhya; Arnold, Susan; Ramachandran, Gurumurthy

    2017-03-01

    The use of the turbulent eddy diffusion model and its variants in exposure assessment is limited by the lack of knowledge regarding the isotropic eddy diffusion coefficient, DT. Some studies have suggested a possible relationship between DT and the air changes per hour (ACH) through a room. The main goal of this study was to accurately estimate DT for a range of ACH values by minimizing the difference between the concentrations measured and those predicted by the eddy diffusion model. We constructed an experimental chamber with a spatial concentration gradient away from the contaminant source, and conducted 27 three-hour experiments using toluene and acetone under different air flow conditions (0.43-2.89 ACH). An eddy diffusion model accounting for the chamber boundary, general ventilation, and advection was developed. A mathematical expression for the slope based on the geometrical parameters of the ventilation system was also derived. There is a strong linear relationship between DT and ACH, providing a surrogate parameter for estimating DT in real-life settings. For the first time, a mathematical expression for the relationship between DT and ACH has been derived that also corrects for non-ideal conditions, and the calculated value of the slope between these two parameters is very close to the experimentally determined value. The values of DT obtained from the experiments are generally consistent with values reported in the literature. They are also independent of the averaging time of measurements, allowing for comparison of values obtained from different measurement settings. These findings make the use of turbulent eddy diffusion models for exposure assessment in workplace/indoor environments more practical.
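    The reported linear relationship between DT and ACH can be illustrated with a least-squares fit; the (ACH, DT) pairs and the resulting slope/intercept below are synthetic placeholders, not the study's measurements:

```python
# Least-squares fit of a linear relationship D_T = m * ACH + b, the form
# reported in the abstract. The data points are synthetic, built to lie
# exactly on a known line so the fit recovers m and b.
import numpy as np

ach = np.array([0.43, 0.96, 1.5, 2.1, 2.89])   # air changes per hour
d_t = 0.005 * ach + 0.001                      # eddy diffusion coeff., illustrative units

m, b = np.polyfit(ach, d_t, 1)                 # slope and intercept of best-fit line
print(m, b)
```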

  17. Reference standards for body fat measures using GE dual energy x-ray absorptiometry in Caucasian adults.

    PubMed

    Imboden, Mary T; Welch, Whitney A; Swartz, Ann M; Montoye, Alexander H K; Finch, Holmes W; Harber, Matthew P; Kaminsky, Leonard A

    2017-01-01

    Dual energy x-ray absorptiometry (DXA) is an established technique for the measurement of body composition. Reference values for these variables, particularly those related to fat mass, are necessary for interpretation and for accurate classification of those at risk for obesity-related health complications and in need of lifestyle modifications (diet, physical activity, etc.). Currently, no reference values are available for GE-Healthcare DXA systems, and it is known that whole-body and regional fat mass measures differ by DXA manufacturer. Our objective was to develop reference values by age and sex for DXA-derived fat mass measurements obtained with GE-Healthcare systems. A de-identified sample of 3,327 participants (2,076 women, 1,251 men) was obtained from Ball State University's Clinical Exercise Physiology Laboratory and the University of Wisconsin-Milwaukee's Physical Activity & Health Research Laboratory. All scans were completed using a GE Lunar Prodigy or iDXA, and the data reported included percent body fat (%BF), fat mass index (FMI), and ratios of android-to-gynoid (A/G), trunk/limb, and trunk/leg fat measurements. Percentiles were calculated, and a factorial ANOVA was used to determine differences in the mean values of each variable by age and sex. Normative reference values for fat mass variables obtained from GE-Healthcare DXA systems are presented as percentiles for both women and men in 10-year age groups. Women had higher (p<0.01) mean %BF and FMI than men, whereas men had higher (p<0.01) mean ratios of A/G, trunk/limb, and trunk/leg fat measurements than women. These reference values provide clinicians and researchers with a resource for the interpretation of DXA-derived fat mass measurements specific to GE-Healthcare DXA systems.

  18. Accurate spectroscopic redshift of the multiply lensed quasar PSOJ0147 from the Pan-STARRS survey

    NASA Astrophysics Data System (ADS)

    Lee, C.-H.

    2017-09-01

    Context. The gravitational lensing time delay method provides a one-step determination of the Hubble constant (H0) with an uncertainty level on par with the cosmic distance ladder method. However, to further investigate the nature of dark energy, an H0 estimate at the 1% level is greatly needed. This requires dozens of strongly lensed quasars that are yet to be delivered by ongoing and forthcoming all-sky surveys. Aims: In this work we aim to determine the spectroscopic redshift of PSOJ0147, the first strongly lensed quasar candidate found in the Pan-STARRS survey. The main goal of our work is to derive an accurate redshift estimate of the background quasar for cosmography. Methods: To obtain timely spectroscopic follow-up, we took advantage of the fast-track service programme carried out by the Nordic Optical Telescope. Using a grism covering 3200-9600 Å, we identified prominent emission line features, such as Lyα, N V, O I, C II, Si IV, C IV, and [C III], in the spectra of the background quasar of the PSOJ0147 lens system. This enables us to accurately determine the redshift of the background quasar. Results: The spectrum of the background quasar exhibits prominent absorption features bluewards of the strong emission lines, such as Lyα, N V, and C IV. These blue absorption lines indicate that the background source is a broad absorption line (BAL) quasar. Unfortunately, the BAL features hamper an accurate determination of the redshift using the above-mentioned strong emission lines. Nevertheless, we are able to determine a redshift of 2.341 ± 0.001 from three of the four lensed quasar images with the clean forbidden line [C III]. In addition, we also derive a maximum outflow velocity of 9800 km s-1 from the broad absorption features bluewards of the C IV emission line. This value of the maximum outflow velocity is in good agreement with other BAL quasars.
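    As a minimal sketch of the line-based redshift determination, assuming the standard rest wavelength of ~1908.73 Å for the semi-forbidden [C III] line (the exact value the authors used is not stated here):

```python
# Redshift from a single identified emission line: z = lam_obs / lam_rest - 1.
# Rest wavelength of the semi-forbidden C III] line, ~1908.73 Angstrom, is a
# standard literature value assumed here for illustration.

CIII_REST = 1908.73  # Angstrom

def redshift(lam_obs, lam_rest=CIII_REST):
    return lam_obs / lam_rest - 1.0

# At z = 2.341 the line lands near 6377 Angstrom, well inside the
# 3200-9600 Angstrom grism coverage quoted in the abstract.
print(round(redshift(6377.07), 3))
```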

  19. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is a conceptually simple, yet practically complicated, task. Presently, distances to nearby galaxies are known only to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it because of its morphology, its non-uniform reddening, and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, using ~100 RR Lyrae stars found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique to the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields.
Finally, I present photometry for the Wolf-Rayet binary WR 20a, which confirmed that the system consists of two extremely massive stars and refined the values of their masses. It is the most massive binary known with an accurate mass determination.

  20. Comprehensive theoretical study towards the accurate proton affinity values of naturally occurring amino acids

    NASA Astrophysics Data System (ADS)

    Dinadayalane, T. C.; Sastry, G. Narahari; Leszczynski, Jerzy

    Systematic quantum chemical studies using the Hartree-Fock (HF) and second-order Møller-Plesset (MP2) methods, and the B3LYP functional, with a range of basis sets were employed to evaluate the proton affinity values of all naturally occurring amino acids. B3LYP and MP2 in conjunction with the 6-311+G(d,p) basis set provide proton affinity values that are in very good agreement with the experimental results, with an average deviation of approximately 1 kcal/mol. The number and the relative strength of intramolecular hydrogen bonds play a key role in the proton affinities of amino acids. The computational exploration of the conformers reveals that the global minimum conformations of the neutral and protonated amino acids differ in eight cases. The present study reveals that B3LYP/6-311+G(d,p) is a very good choice of technique to evaluate the proton affinities of amino acids and the compounds derived from them reliably and economically.

  1. Nuclear relaxation and vibrational contributions to the static electrical properties of polyatomic molecules: beyond the Hartree-Fock approximation

    NASA Astrophysics Data System (ADS)

    Luis, Josep M.; Martí, Josep; Duran, Miquel; Andrés, José L.

    1997-04-01

    Electronic and nuclear contributions to the static molecular electrical properties, along with the Stark tuning rate (δνE) and the infrared cross-section changes (δSE), have been calculated at the SCF level and at different correlated levels of theory, using a TZ2P basis set and finite field techniques. Nuclear contributions to these molecular properties have also been calculated using a recent analytical approach that allows both checking the accuracy of the finite field values and evaluating the importance of higher-order derivatives. The HF, CO, H2O, H2CO, and CH4 molecules have been studied and the results compared with experimental data when available. The paper shows that nuclear relaxation and vibrational contributions must be included in order to obtain accurate values of the static electrical properties. Two different, combined approaches are proposed to predict experimental values of the electrical properties to an error smaller than 5%.
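    The finite field technique mentioned above can be illustrated with central differences on a model energy function; the quadratic E(F) and the "true" property values below are stand-in assumptions, not an SCF calculation:

```python
# Finite-field extraction of electrical properties: with E(F) the energy of a
# molecule in a uniform field F, the dipole moment and polarizability follow
# from mu = -dE/dF and alpha = -d2E/dF2, approximated by central differences.

MU0, ALPHA = 0.7, 5.2   # assumed "true" values (arbitrary atomic-unit-like numbers)

def energy(f):
    """Model field-dependent energy: exact for a linear-response system."""
    return -MU0 * f - 0.5 * ALPHA * f ** 2

def finite_field(e, f=1e-3):
    mu = -(e(f) - e(-f)) / (2.0 * f)                  # central first derivative
    alpha = -(e(f) - 2.0 * e(0.0) + e(-f)) / f ** 2   # central second derivative
    return mu, alpha

mu, alpha = finite_field(energy)
print(mu, alpha)
```

    For a quadratic model energy the central differences are exact up to rounding, which is why checking finite-field values against an analytical approach, as the paper does, is a meaningful consistency test.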

  2. An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring

    NASA Astrophysics Data System (ADS)

    Chen, R.; Sun, Y. Y.; Lei, Y.

    2017-12-01

    With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have successfully adopted this mature technology, among them environmental monitoring. One difficult task is acquiring the accurate position of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with traditional methods is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle, and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate position for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves 1 to 2 times better efficiency with no loss of accuracy compared with traditional ones. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates depends strongly on the initial values, making it unable to converge quickly or to reach a stable state. In contrast, the APCP method can deal with quite complex UAS conditions during monitoring, because it represents points in space with angles, including the condition in which sequential images focusing on one object have zero parallax angle. In brief, this paper presents the parameterization of 3D feature points based on APCP, and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze convergence and the dependence on initial values through mathematical formulas. Finally, this paper conducts experiments using real aviation data and shows that the new model can effectively resolve bottlenecks of the classical method to a certain degree; that is, it provides a new idea and solution for faster and more efficient environmental monitoring.
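    A minimal sketch of the parallax-angle idea behind an angle-based parametrization, assuming a simple two-camera geometry (the function and numbers are illustrative, not the paper's full APCP formulation):

```python
# The parallax angle of a scene point with respect to two camera centres is
# the angle at the point between the two viewing rays. Angle-based point
# parametrizations store this angle (with elevation and azimuth) instead of
# raw XYZ, which keeps distant, near-zero-parallax points well behaved.
import math

def parallax_angle(point, cam_a, cam_b):
    """Angle (radians) subtended at `point` by the two camera centres."""
    va = [a - p for a, p in zip(cam_a, point)]
    vb = [b - p for b, p in zip(cam_b, point)]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# A nearby point subtends a large angle; a very distant point approaches the
# zero-parallax condition that the angle representation handles gracefully.
near = parallax_angle((0, 0, 1), (-1, 0, 0), (1, 0, 0))
far = parallax_angle((0, 0, 1000.0), (-1, 0, 0), (1, 0, 0))
print(near, far)
```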

  3. The Behavior of IAPWS-95 from 250 to 300 K and Pressures up to 400 MPa: Evaluation Based on Recently Derived Property Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Wolfgang, E-mail: Wagner@thermo.rub.de; Thol, Monika

    2015-12-15

    Over the past several years, considerable scientific and technical interest has been focused on accurate thermodynamic properties of fluid water covering part of the subcooled (metastable) region and the stable liquid from the melting line up to about 300 K and pressures up to several hundred MPa. Between 2000 and 2010, experimental density data were published whose accuracy was not completely clear. The scientific standard equation of state for fluid water, the IAPWS-95 formulation, was developed on the basis of experimental data for thermodynamic properties that were available by 1995. In this work, it is examined how IAPWS-95 behaves with respect to the experimental data published after 1995. This investigation is carried out for temperatures from 250 to 300 K and pressures up to 400 MPa. The starting point is the assessment of the current data situation. This was mainly performed on the basis of data for the density, expansivity, compressibility, and isobaric heat capacity, which were derived in 2015 from very accurate speed-of-sound data. Apart from experimental data and these derived data, property values calculated from the recently published equation of state for this region of Holten et al. (2014) were also used. As a result, the unclear data situation could be clarified, and uncertainty values could be estimated for the investigated properties. In the region described above, detailed comparisons show that IAPWS-95 is able to represent the latest experimental data for the density, expansivity, compressibility, speed of sound, and isobaric heat capacity to within the uncertainties given in the release on IAPWS-95. Since the release does not contain uncertainty estimates for expansivities and compressibilities, the statement relates to the error propagation of the given uncertainty in density.
Due to the lack of experimental data for the isobaric heat capacity for pressures above 100 MPa, no uncertainty estimates are given in the release for this pressure range. Results of the investigation of IAPWS-95 concerning its behavior with regard to the isobaric heat capacity in the high-pressure low-temperature region are also presented. Comparisons with very accurate speed-of-sound data published in 2012 showed that the uncertainty estimates of IAPWS-95 in speed of sound could be decreased for temperatures from 283 to 473 K and pressures up to 400 MPa.

  4. Nature of Driving Force for Protein Folding-- A Result From Analyzing the Statistical Potential

    NASA Astrophysics Data System (ADS)

    Li, Hao; Tang, Chao; Wingreen, Ned S.

    1998-03-01

    In a statistical approach to protein structure analysis, Miyazawa and Jernigan (MJ) derived a 20×20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the MJ matrix can be accurately reconstructed from its first two principal component vectors as M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j, with constant C's and 20 q values associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
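    The rank-2 structure implied by this formula can be checked numerically: any matrix of the form M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j is reconstructed exactly from its two dominant eigenpairs. The constants and q values below are arbitrary illustrations, not the MJ fit:

```python
# A matrix built as C0 + C1*(q_i + q_j) + C2*q_i*q_j lies in the span of the
# outer products of the all-ones vector and q, so its rank is at most 2 and
# two principal components reconstruct it exactly.
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=20)                 # illustrative "hydrophobicity" values
C0, C1, C2 = -2.0, 1.3, 0.8            # illustrative constants
ones = np.ones(20)
M = (C0 * np.outer(ones, ones)
     + C1 * (np.outer(q, ones) + np.outer(ones, q))
     + C2 * np.outer(q, q))

w, v = np.linalg.eigh(M)                # eigendecomposition of symmetric M
top2 = np.argsort(np.abs(w))[-2:]       # two dominant eigenpairs
M2 = sum(w[i] * np.outer(v[:, i], v[:, i]) for i in top2)

print(np.allclose(M, M2))   # True: two principal components suffice
```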

  5. Refraction effects of atmosphere on geodetic measurements to celestial bodies

    NASA Technical Reports Server (NTRS)

    Joshi, C. S.

    1973-01-01

    The problem of obtaining accurate values of refraction corrections for geodetic measurements of celestial bodies is considered. The basic principles of optics governing the phenomenon of refraction are defined, and differential equations are derived for the refraction corrections. The corrections fall into two main categories: (1) refraction effects due to change in the direction of propagation, and (2) refraction effects mainly due to change in the velocity of propagation. The various assumptions made by earlier investigators are reviewed along with the basic principles of improved models designed by investigators of the twentieth century. The accuracy problem for various quantities is discussed, and the conclusions and recommendations are summarized.

  6. A Field Assessment of a Prototype Meter for Measuring the Wet-Bulb Globe-Thermometer Index

    PubMed Central

    Walters, J. D.

    1968-01-01

    A prototype electronic instrument for the direct measurement of the wet-bulb globe-thermometer index is described. An assessment is made of its accuracy, as compared with W.B.G.T. indices calculated from conventional thermometric data, and a comparison is made between W.B.G.T. values read from the meter and effective or corrected effective temperatures derived from separate thermometric and air velocity recording instruments in the same climates. The instrument proved to be reliable and accurate over a wide range of climates and is a useful self-contained device for use in habitability surveys and similar investigations. PMID:5663429
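    For reference, the W.B.G.T. index that such a meter measures directly is conventionally computed from its component temperatures with the standard (ISO 7243-style) weightings; the temperature values below are illustrative, and the abstract does not state which variant the prototype implements:

```python
# Wet-bulb globe-thermometer (WBGT) index from natural wet-bulb (t_nwb),
# globe (t_g), and dry-bulb (t_db) temperatures in degrees Celsius, using
# the conventional ISO 7243 weightings.

def wbgt_outdoor(t_nwb, t_g, t_db):
    """WBGT with solar load: 0.7*t_nwb + 0.2*t_g + 0.1*t_db."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb, t_g):
    """WBGT without solar load: 0.7*t_nwb + 0.3*t_g."""
    return 0.7 * t_nwb + 0.3 * t_g

print(wbgt_outdoor(25.0, 40.0, 30.0))   # 28.5
```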

  7. Measurement of Surface Interfacial Tension as a Function of Temperature Using Pendant Drop Images

    NASA Astrophysics Data System (ADS)

    Yakhshi-Tafti, Ehsan; Kumar, Ranganathan; Cho, Hyoung J.

    2011-10-01

    Accurate and reliable measurements of surface tension at the interface of immiscible phases are crucial to understanding the various physico-chemical reactions taking place between them. Based on the pendant drop method, an optical (graphical)-numerical procedure was developed to determine surface tension and its dependence on the surrounding temperature. For modeling and experimental verification, chemically inert and thermally stable perfluorocarbon (PFC) oil and water were used. Starting with a geometrical force balance, governing equations were derived to provide non-dimensional parameters which were then used to extract values for surface tension. A comparative study verified the accuracy and reliability of the proposed method.

  8. Application of the superposition principle to solar-cell analysis

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.; Fossum, J. G.; Burgess, E. L.

    1979-01-01

    The superposition principle of differential-equation theory - which applies if and only if the relevant boundary-value problems are linear - is used to derive the widely used shifting approximation that the current-voltage characteristic of an illuminated solar cell is the dark current-voltage characteristic shifted by the short-circuit photocurrent. Analytical methods are presented to treat cases where shifting is not strictly valid. Well-defined conditions necessary for superposition to apply are established. For high injection in the base region, the method of analysis accurately yields the dependence of the open-circuit voltage on the short-circuit current (or the illumination level).
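    The shifting approximation described here can be sketched with an ideal-diode model; the values of I0, Isc, and the thermal voltage below are illustrative assumptions, not the paper's data:

```python
# Shifting approximation: the illuminated I-V curve equals the dark diode
# characteristic shifted down by the short-circuit photocurrent,
#   I(V) = I0 * (exp(V / Vt) - 1) - Isc.
import math

def illuminated_current(v, i0=1e-12, isc=0.035, vt=0.02585):
    """Terminal current (A) of an ideal illuminated cell at voltage v (V)."""
    return i0 * (math.exp(v / vt) - 1.0) - isc

# The open-circuit voltage follows from I(Voc) = 0:
#   Voc = Vt * ln(Isc / I0 + 1),
# which is the open-circuit-voltage / short-circuit-current dependence
# the superposition analysis makes explicit.
voc = 0.02585 * math.log(0.035 / 1e-12 + 1.0)
print(voc, illuminated_current(voc))
```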

  9. Cloaking of arbitrarily shaped objects with homogeneous coatings

    NASA Astrophysics Data System (ADS)

    Forestiere, Carlo; Dal Negro, Luca; Miano, Giovanni

    2014-05-01

    We present a theory for the cloaking of arbitrarily shaped objects and demonstrate electromagnetic scattering cancellation through designed homogeneous coatings. First, in the small-particle limit, we expand the dipole moment of a coated object in terms of its resonant modes. By zeroing the numerator of the resulting rational function, we accurately predict the permittivity values of the coating layer that abates the total scattered power. Then, we extend the applicability of the method beyond the small-particle limit, deriving the radiation corrections of the scattering-cancellation permittivity within a perturbation approach. Our method permits the design of invisibility cloaks for irregularly shaped devices such as complex sensors and detectors.

  10. IMPROVED Ti II log(gf) VALUES AND ABUNDANCE DETERMINATIONS IN THE PHOTOSPHERES OF THE SUN AND METAL-POOR STAR HD 84937

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, M. P.; Lawler, J. E.; Sneden, C.

    2013-10-01

    Atomic transition probability measurements for 364 lines of Ti II in the UV through near-IR are reported. Branching fractions from data recorded using a Fourier transform spectrometer (FTS) and a new echelle spectrometer are combined with published radiative lifetimes to determine these transition probabilities. The new results are in generally good agreement with previously reported FTS measurements. Use of the new echelle spectrometer, independent radiometric calibration methods, and independent data analysis routines enables a reduction of systematic errors and overall improvement in transition probability accuracy over previous measurements. The new Ti II data are applied to high-resolution visible and UV spectra of the Sun and metal-poor star HD 84937 to derive new, more accurate Ti abundances. Lines covering a range of wavelength and excitation potential are used to search for non-LTE effects. The Ti abundances derived using Ti II for these two stars match those derived using Ti I and support the relative Ti/Fe abundance ratio versus metallicity seen in previous studies.

  11. Improved Model for Predicting the Free Energy Contribution of Dinucleotide Bulges to RNA Duplex Stability.

    PubMed

    Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M

    2015-09-01

    Predicting the secondary structure of RNA is an intermediate in predicting RNA three-dimensional structure. Commonly, determining RNA secondary structure from sequence uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict free energy contributions of dinucleotide bulges. To determine if a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested energy contributions of dinucleotide bulges were sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
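    The sequence-dependent model can be written directly from the penalties quoted in the abstract; treating the A-U penalty and G-U bonus as applying once per adjacent closing pair is our reading of the abstract, and the helper below is a sketch, not the published algorithm:

```python
# Free-energy penalty (kcal/mol) for a dinucleotide bulge under the
# sequence-dependent model quoted in the abstract. Bulge nucleotides are
# given 5'->3'; closing_pairs lists the base pairs adjacent to the bulge.

PURINES = {"A", "G"}

def bulge_dg(n5, n3, closing_pairs=()):
    if n5 in PURINES and n3 in PURINES:
        dg = 3.06                      # two purines
    elif n5 not in PURINES and n3 not in PURINES:
        dg = 2.93                      # two pyrimidines
    elif n5 in PURINES:
        dg = 2.71                      # 5'-purine-pyrimidine-3'
    else:
        dg = 2.41                      # 5'-pyrimidine-purine-3'
    for pair in closing_pairs:
        if set(pair) == {"A", "U"}:
            dg += 0.45                 # A-U adjacent-pair penalty
        elif set(pair) == {"G", "U"}:
            dg -= 0.28                 # G-U adjacent-pair bonus
    return round(dg, 2)

print(bulge_dg("A", "G"))                          # 3.06
print(bulge_dg("C", "A", closing_pairs=("AU",)))   # 2.41 + 0.45 = 2.86
```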

  12. QSAR models for thiophene and imidazopyridine derivatives inhibitors of the Polo-Like Kinase 1.

    PubMed

    Comelli, Nieves C; Duchowicz, Pablo R; Castro, Eduardo A

    2014-10-01

    The inhibitory activity of 103 thiophene and 33 imidazopyridine derivatives against Polo-Like Kinase 1 (PLK1), expressed as pIC50 (-logIC50), was predicted by QSAR modeling. Multivariate linear regression (MLR) was employed to model the relationship between 0D and 3D molecular descriptors and the biological activities of the molecules, using the replacement method (MR) as the variable selection tool. The 136 compounds were separated into several training and test sets. Two splitting approaches, distribution of biological data and structural diversity, and the statistical experimental design procedure D-optimal distance were applied to the dataset. The significance of the training set models was confirmed by statistically higher values of the internal leave-one-out cross-validated coefficient of determination (Q2) and the external predictive coefficient of determination for the test set (Rtest2). The model developed from a training set obtained with the D-optimal distance protocol, using the 3D descriptor space along with activity values, separated chemical features that allowed high and low pIC50 values to be distinguished reasonably well. We then verified that this model was sufficient to reliably and accurately predict the activity of external diverse structures. The model's robustness was characterized by means of standard procedures, and its applicability domain (AD) was analyzed by the leverage method. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data

    NASA Astrophysics Data System (ADS)

    MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.

    2015-12-01

    Predicting the fate of ecosystem carbon (C) stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e. photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude, when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is inaccurate parameter values resulting from incomplete model calibration. Solar Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g. Kuppel et al., 2014). The parameters are constrained per Plant Functional Type, as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, carbonyl sulphide (OCS), following the approach of Launois et al. (ACPD, in review).
We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.

  14. A Highly Accurate Technique for the Treatment of Flow Equations at the Polar Axis in Cylindrical Coordinates using Series Expansions. Appendix A

    NASA Technical Reports Server (NTRS)

    Constantinescu, George S.; Lele, S. K.

    2001-01-01

    Numerical methods for solving the flow equations in cylindrical or spherical coordinates should be able to capture the behavior of the exact solution near the regions where the particular form of the governing equations is singular. In this work we focus on the treatment of these numerical singularities for finite-difference methods by reinterpreting the regularity conditions developed in the context of pseudo-spectral methods. A generally applicable numerical method for treating the singularities present at the polar axis, when nonaxisymmetric flows are solved in cylindrical coordinates using highly accurate finite-difference schemes (e.g., Padé schemes) on non-staggered grids, is presented. Governing equations for the flow at the polar axis are derived using series expansions near r=0. The only information needed to calculate the coefficients in these equations is the values of the flow variables and their radial derivatives at the previous iteration (or time) level. These derivatives, which are multi-valued at the polar axis, are calculated without reducing the accuracy of the numerical method by using a mapping of the flow domain from (0,R)*(0,2pi) to (-R,R)*(0,pi), where R is the radius of the computational domain. This allows the radial derivatives to be evaluated using high-order differencing schemes (e.g., compact schemes) at points located on the polar axis. The proposed technique is illustrated by results from simulations of laminar forced jets and turbulent compressible jets using large eddy simulation (LES) methods. 
In terms of the general robustness of the numerical method and the smoothness of the solution close to the polar axis, the present results compare very favorably to similar calculations in which the equations are solved in Cartesian coordinates at the polar axis, or in which the singularity is removed by employing a staggered mesh in the radial direction without a mesh point at r=0, following the method proposed recently by Mohseni and Colonius (1). Extension of the method described here to incompressible flows, or to any other set of equations solved on a non-staggered mesh in cylindrical or spherical coordinates with finite-difference schemes of various levels of accuracy, is immediate.

  15. Validation of a novel protocol for calculating estimated energy requirements and average daily physical activity ratio for the US population: 2005-2006.

    PubMed

    Archer, Edward; Hand, Gregory A; Hébert, James R; Lau, Erica Y; Wang, Xuewen; Shook, Robin P; Fayad, Raja; Lavie, Carl J; Blair, Steven N

    2013-12-01

    To validate the PAR protocol, a novel method for calculating population-level estimated energy requirements (EERs) and average physical activity ratio (APAR), in a nationally representative sample of US adults. Estimates of EER and APAR values were calculated via a factorial equation from a nationally representative sample of 2597 adults aged 20 to 74 years (US National Health and Nutrition Examination Survey; data collected between January 1, 2005, and December 31, 2006). Validation of the PAR protocol-derived EER (EER(PAR)) values was performed via comparison with values from the Institute of Medicine EER equations (EER(IOM)). The correlation between EER(PAR) and EER(IOM) was high (0.98; P<.001). The difference between EER(PAR) and EER(IOM) values ranged from 40 kcal/d (1.2% higher than EER(IOM)) in obese (body mass index [BMI] ≥30) men to 148 kcal/d (5.7% higher) in obese women. The 2005-2006 EERs for the US population were 2940 kcal/d for men and 2275 kcal/d for women, and ranged from 3230 kcal/d in obese (BMI ≥30) men to 2026 kcal/d in normal weight (BMI <25) women. There were significant inverse relationships between APAR and both obesity and age. For men and women, the APAR values were 1.53 and 1.52, respectively. Obese men and women had lower APAR values than normal weight individuals (P=.023 and P=.015, respectively) [corrected], and younger individuals had higher APAR values than older individuals (P<.001). The PAR protocol is an accurate method for deriving nationally representative estimates of EER and APAR values. These descriptive data provide novel quantitative baseline values for future investigations into associations of physical activity and health. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
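    A factorial calculation of APAR and EER can be sketched as a time-weighted average of activity-specific PAR values; the 24-hour activity diary and BMR below are illustrative assumptions, not the PAR protocol or NHANES data:

```python
# Factorial sketch: the average physical activity ratio (APAR) is the
# time-weighted mean of activity-specific PAR values over 24 hours, and the
# estimated energy requirement is EER = BMR * APAR.

def apar(diary):
    """diary: list of (hours, PAR) tuples whose hours sum to 24."""
    total_hours = sum(h for h, _ in diary)
    assert abs(total_hours - 24.0) < 1e-9, "diary must cover 24 h"
    return sum(h * par for h, par in diary) / 24.0

def eer(bmr_kcal, diary):
    """Estimated energy requirement (kcal/d) from basal metabolic rate."""
    return bmr_kcal * apar(diary)

# Illustrative day: sleep, sitting, light activity, moderate activity.
day = [(8, 1.0), (8, 1.5), (6, 1.8), (2, 3.0)]
print(round(apar(day), 3), round(eer(1900, day)))
```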

  16. SU-C-BRA-02: Gradient Based Method of Target Delineation On PET/MR Image of Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dance, M; Chera, B; Falchook, A

    2015-06-15

    Purpose: Validate the consistency of a gradient-based segmentation tool to facilitate accurate delineation of PET/CT-based GTVs in head and neck cancers by comparing against hybrid PET/MR-derived GTV contours. Materials and Methods: A total of 18 head and neck target volumes (10 primary and 8 nodal) were retrospectively contoured using a gradient-based segmentation tool by two observers. Each observer independently contoured each target five times. Inter-observer variability was evaluated via absolute percent differences. Intra-observer variability was examined by percentage uncertainty. All target volumes were also contoured using the SUV percent threshold method. The thresholds were adjusted case by case so that the derived volume matched the gradient-based volume. Dice similarity coefficients (DSC) were calculated to determine overlap of PET/CT GTVs and PET/MR GTVs. Results: The Levene’s test showed there was no statistically significant difference of the variances between the observers’ gradient-derived contours. However, the absolute difference between the observers’ volumes was 10.83%, with a range from 0.39% up to 42.89%. PET-avid regions with qualitatively non-uniform shapes and intensity levels had a higher absolute percent difference, near 25%, while regions with uniform shapes and intensity levels had an absolute percent difference of 2% between observers. The average percentage uncertainty between observers was 4.83% and 7%. As the volume of the gradient-derived contours increased, the SUV threshold percent needed to match the volume decreased. Dice coefficients showed good agreement of the PET/CT and PET/MR GTVs, with an average DSC value across all volumes of 0.69. Conclusion: Gradient-based segmentation of PET volume showed good consistency in general but can vary considerably for non-uniform target shapes and intensity levels. PET/CT-derived GTV contours stemming from the gradient-based tool show good agreement with the anatomically and metabolically more accurate PET/MR-derived GTV contours, but tumor delineation accuracy can be further improved with the use of PET/MR.
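The overlap statistic used above, the Dice similarity coefficient, is DSC = 2|A∩B|/(|A|+|B|). A minimal sketch on made-up toy voxel sets (illustrative only, not study data):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel index sets: 2|A&B| / (|A|+|B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty contours overlap perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

gtv_ct = {(0, 0), (0, 1), (1, 0), (1, 1)}   # toy PET/CT-derived voxel set
gtv_mr = {(0, 1), (1, 0), (1, 1), (2, 1)}   # toy PET/MR-derived voxel set
print(dice(gtv_ct, gtv_mr))  # 3 shared voxels of 4+4 -> 0.75
```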

  17. Braking Index of Isolated Pulsars

    NASA Astrophysics Data System (ADS)

    Hamil, Oliver; Stone, Jirina; Urbanec, Martin; Urbancova, Gabriela

    2015-04-01

    Isolated pulsars are rotating neutron stars with accurately measured angular velocities Ω, and their time derivatives, which show unambiguously that the pulsars are slowing down. The exact mechanism of the spin-down is a question of debate in detail, but the commonly accepted view is that it arises through emission of magnetic dipole radiation (MDR). The energy loss by a rotating pulsar is proportional to a model dependent power of Ω. This relation leads to the power law Ω̇ = -KΩⁿ, where n is called the braking index, equal to the ratio ΩΩ̈/Ω̇². The simple MDR model predicts the value of n = 3, but observations of isolated pulsars provide rather precise values of n, individually accurate to a few percent or better, in the range 1 < n < 2.8, which is consistently less than the predictions of the MDR model. In this work, we study the dynamical limits of the MDR model as a function of angular velocity. The effects of variation in the rest mass, the moment of inertia, and the dependence on a realistic Equation of State of the rotating star are considered. Furthermore, we introduce a simulated superfluid effect by which the angular momentum of the core is eliminated from the calculation.
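The braking-index relation can be checked numerically: for a pure power law Ω̇ = -KΩⁿ, differentiating gives Ω̈ = -nKΩⁿ⁻¹Ω̇, so the ratio ΩΩ̈/Ω̇² recovers n exactly. A sketch with illustrative (not observed) values:

```python
def braking_index(omega, omega_dot, omega_ddot):
    """n = Omega * Omega_ddot / Omega_dot**2, implied by Omega_dot = -K Omega**n."""
    return omega * omega_ddot / omega_dot**2

# Illustrative magnetic-dipole case: n = 3 with arbitrary K and Omega.
K, n, omega = 1e-15, 3.0, 70.0
omega_dot = -K * omega**n                       # spin-down rate from the power law
omega_ddot = -n * K * omega**(n - 1) * omega_dot  # its time derivative (chain rule)
print(braking_index(omega, omega_dot, omega_ddot))  # recovers 3.0
```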

  18. Developing Electronic Health Record Algorithms That Accurately Identify Patients With Systemic Lupus Erythematosus.

    PubMed

    Barnado, April; Casey, Carolyn; Carroll, Robert J; Wheless, Lee; Denny, Joshua C; Crofford, Leslie J

    2017-05-01

    To study systemic lupus erythematosus (SLE) in the electronic health record (EHR), we must accurately identify patients with SLE. Our objective was to develop and validate novel EHR algorithms that use International Classification of Diseases, Ninth Revision (ICD-9), Clinical Modification codes, laboratory testing, and medications to identify SLE patients. We used Vanderbilt's Synthetic Derivative, a de-identified version of the EHR, with 2.5 million subjects. We selected all individuals with at least 1 SLE ICD-9 code (710.0), yielding 5,959 individuals. To create a training set, 200 subjects were randomly selected for chart review. A subject was defined as a case if diagnosed with SLE by a rheumatologist, nephrologist, or dermatologist. Positive predictive values (PPVs) and sensitivity were calculated for combinations of code counts of the SLE ICD-9 code, a positive antinuclear antibody (ANA), ever use of medications, and a keyword of "lupus" in the problem list. The algorithms with the highest PPV were each internally validated using a random set of 100 individuals from the remaining 5,759 subjects. The algorithm with the highest PPV at 95% in the training set and 91% in the validation set was 3 or more counts of the SLE ICD-9 code, ANA positive (≥1:40), and ever use of both disease-modifying antirheumatic drugs and steroids, while excluding individuals with systemic sclerosis and dermatomyositis ICD-9 codes. We developed and validated the first EHR algorithm that incorporates laboratory values and medications with the SLE ICD-9 code to identify patients with SLE accurately. © 2016, American College of Rheumatology.
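The best-performing algorithm reduces to a short boolean rule. A hedged sketch of that rule as described in the abstract (the field names and titer encoding are assumptions, not the study's actual data model):

```python
def flags_sle(icd9_sle_count, ana_titer, ever_dmard, ever_steroid,
              has_ssc_code, has_dm_code):
    """Sketch of the highest-PPV rule from the abstract: >=3 counts of SLE
    ICD-9 code 710.0, ANA positive at >=1:40, ever-use of both DMARDs and
    steroids, excluding systemic sclerosis / dermatomyositis ICD-9 codes.
    ana_titer is the denominator of the titer (e.g. 40 for 1:40)."""
    if has_ssc_code or has_dm_code:
        return False
    return icd9_sle_count >= 3 and ana_titer >= 40 and ever_dmard and ever_steroid

print(flags_sle(5, 160, True, True, False, False))  # meets all criteria
print(flags_sle(5, 160, True, True, True, False))   # excluded by SSc code
```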

  19. Improving xylem hydraulic conductivity measurements by correcting the error caused by passive water uptake.

    PubMed

    Torres-Ruiz, José M; Sperry, John S; Fernández, José E

    2012-10-01

    Xylem hydraulic conductivity (K) is typically defined as K = F/(P/L), where F is the flow rate through a xylem segment associated with an applied pressure gradient (P/L) along the segment. This definition assumes a linear flow-pressure relationship with a flow intercept (F(0)) of zero. While linearity is typically the case, there is often a non-zero F(0) that persists in the absence of leaks or evaporation and is caused by passive uptake of water by the sample. In this study, we determined the consequences of failing to account for non-zero F(0) for both K measurements and the use of K to estimate the vulnerability to xylem cavitation. We generated vulnerability curves for olive root samples (Olea europaea) by the centrifuge technique, measuring a maximally accurate reference K(ref) as the slope of a four-point F vs P/L relationship. The K(ref) was compared with three more rapid ways of estimating K. When F(0) was assumed to be zero, K was significantly under-estimated (average of -81.4 ± 4.7%), especially when K(ref) was low. Vulnerability curves derived from these under-estimated K values overestimated the vulnerability to cavitation. When non-zero F(0) was taken into account, whether it was measured or estimated, more accurate K values (relative to K(ref)) were obtained, and vulnerability curves indicated greater resistance to cavitation. We recommend accounting for non-zero F(0) for obtaining accurate estimates of K and cavitation resistance in hydraulic studies. Copyright © Physiologia Plantarum 2012.
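The correction can be illustrated with synthetic data: fitting the slope of the full F vs P/L relation recovers K, while a single-point K = F/(P/L) that assumes F(0) = 0 does not. A minimal sketch; the numbers are invented, and the sign of F(0) is chosen so the toy example reproduces the under-estimation reported:

```python
def slope(xs, ys):
    """Least-squares slope of ys vs xs (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic four-point F vs P/L data with a non-zero intercept F0.
K_true, F0 = 2.0, -0.5           # illustrative units
grads = [0.5, 1.0, 1.5, 2.0]     # applied pressure gradients P/L
flows = [K_true * g + F0 for g in grads]

K_ref = slope(grads, flows)      # reference K: slope of the full F-P/L relation
K_naive = flows[1] / grads[1]    # single-point K assuming F0 = 0
print(K_ref, K_naive)  # the naive estimate is biased low
```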

  20. Pedagogical Reflections by Secondary Science Teachers at Different NOS Implementation Levels

    NASA Astrophysics Data System (ADS)

    Herman, Benjamin C.; Clough, Michael P.; Olson, Joanne K.

    2017-02-01

    This study investigated what 13 secondary science teachers at various nature of science (NOS) instruction implementation levels talked about when they reflected on their teaching. We then determined if differences exist in the quality of those reflections between high, medium, and low NOS implementers. This study sought to answer the following questions: (1) What do teachers talk about when asked general questions about their pedagogy and NOS pedagogy and (2) what qualitative differences, if any, exist within variables across teachers of varying NOS implementation levels? Evidence derived from these teachers' reflections indicated that self-efficacy and perceptions of general importance for NOS instruction were poor indicators of NOS implementation. However, several factors were associated with the extent that these teachers implemented NOS instruction, including the utility value they hold for NOS teaching, considerations of how people learn, understanding of NOS pedagogy, and their ability to accurately and deeply self-reflect about teaching. Notably, those teachers who effectively implemented the NOS at higher levels value NOS instruction for reasons that transcend immediate instructional objectives. That is, they value teaching NOS for achieving compelling ends realized long after formal schooling (e.g., lifelong socioscientific decision-making for civic reasons), and they deeply reflect about how to teach NOS by drawing from research about how people learn. Low NOS implementers' simplistic notions and reflections about teaching and learning appeared to be impeding factors to accurate and consistent NOS implementation. This study has implications for science teacher education efforts that promote NOS instruction.

  1. Higher-dimensional Wannier functions of multiparameter Hamiltonians

    NASA Astrophysics Data System (ADS)

    Hanke, Jan-Philipp; Freimuth, Frank; Blügel, Stefan; Mokrousov, Yuriy

    2015-05-01

    When using Wannier functions to study the electronic structure of multiparameter Hamiltonians H(k, λ) carrying a dependence on crystal momentum k and an additional periodic parameter λ, one usually constructs several sets of Wannier functions for a set of values of λ. We present the concept of higher-dimensional Wannier functions (HDWFs), which provide a minimal and accurate description of the electronic structure of multiparameter Hamiltonians based on a single set of HDWFs. The obstacle of nonorthogonality of Bloch functions at different λ is overcome by introducing an auxiliary real space, which is reciprocal to the parameter λ. We derive a generalized interpolation scheme and emphasize the essential conceptual and computational simplifications in using the formalism, for instance, in the evaluation of linear response coefficients. We further implement the necessary machinery to construct HDWFs from ab initio calculations within the full-potential linearized augmented plane-wave method (FLAPW). We apply our implementation to accurately interpolate the Hamiltonian of a one-dimensional magnetic chain of Mn atoms in two important cases of λ: (i) the spin-spiral vector q and (ii) the direction of the ferromagnetic magnetization m̂. Using the generalized interpolation of the energy, we extract the corresponding values of magnetocrystalline anisotropy energy, Heisenberg exchange constants, and spin stiffness, which compare very well with the values obtained from direct first-principles calculations. For toy models we demonstrate that the method of HDWFs can also be used in applications such as the virtual crystal approximation, ferroelectric polarization, and spin torques.

  2. Estimation of snow in extratropical cyclones from multiple frequency airborne radar observations. An Expectation-Maximization approach

    NASA Astrophysics Data System (ADS)

    Grecu, M.; Tian, L.; Heymsfield, G. M.

    2017-12-01

    A major challenge in deriving accurate estimates of physical properties of falling snow particles from single frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-frequency-ratio (DFR) space. However, the derivation of accurate snow estimates from triple frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation-Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that snowfall estimates above the freezing level in ETCs that are consistent with both the triple frequency radar observations and independent rainfall estimates below the freezing level may be derived using the EM methodology formulated in the study.
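The E/M alternation itself is generic. The toy below fits a two-component Gaussian mixture in pure Python purely to illustrate the E-step/M-step pattern; it is not the paper's snow retrieval, whose E step is a least-squares reflectivity fit and whose M step optimizes density-size assumptions:

```python
import math

def em_two_gaussians(data, mu=(0.0, 1.0), iters=50, sigma=1.0):
    """Generic EM illustration: fit the means of a two-component Gaussian
    mixture with equal weights and fixed sigma."""
    mu1, mu2 = mu
    for _ in range(iters):
        # E step: responsibility of component 1 for each data point
        r = []
        for x in data:
            p1 = math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            p2 = math.exp(-0.5 * ((x - mu2) / sigma) ** 2)
            r.append(p1 / (p1 + p2))
        # M step: update each mean as a responsibility-weighted average
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / sum(1 - ri for ri in r)
    return mu1, mu2

data = [-0.3, -0.1, 0.0, 0.2, 4.8, 5.0, 5.1, 5.3]  # two well-separated clusters
print(em_two_gaussians(data))  # means close to the cluster centers 0 and 5
```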

  3. Rayleigh Scattering in Planetary Atmospheres: Corrected Tables Through Accurate Computation of X and Y Functions

    NASA Astrophysics Data System (ADS)

    Natraj, Vijay; Li, King-Fai; Yung, Yuk L.

    2009-02-01

    Tables that have been used as a reference for nearly 50 years for the intensity and polarization of reflected and transmitted light in Rayleigh scattering atmospheres have been found to be inaccurate, even to four decimal places. We convert the integral equations describing the X and Y functions into a pair of coupled integro-differential equations that can be efficiently solved numerically. Special care has been taken in evaluating Cauchy principal value integrals and their derivatives that appear in the solution of the Rayleigh scattering problem. The new approach gives results accurate to eight decimal places for the entire range of tabulation (optical thicknesses 0.02-1.0, surface reflectances 0-0.8, solar and viewing zenith angles 0°-88.85°, and relative azimuth angles 0°-180°), including the most difficult case of direct transmission in the direction of the sun. Revised tables have been created and stored electronically for easy reference by the planetary science and astrophysics community.

  4. Using iRT, a normalized retention time for more targeted measurement of peptides

    PubMed Central

    Escher, Claudia; Reiter, Lukas; MacLean, Brendan; Ossola, Reto; Herzog, Franz; Chilton, John; MacCoss, Michael J.; Rinner, Oliver

    2014-01-01

    Multiple reaction monitoring (MRM) has recently become the method of choice for targeted quantitative measurement of proteins using mass spectrometry. The method, however, is limited in the number of peptides that can be measured in one run. This number can be markedly increased by scheduling the acquisition if the accurate retention time (RT) of each peptide is known. Here we present iRT, an empirically derived dimensionless peptide-specific value that allows for highly accurate RT prediction. The iRT of a peptide is a fixed number relative to a standard set of reference iRT-peptides that can be transferred across laboratories and chromatographic systems. We show that iRT facilitates the setup of multiplexed experiments with acquisition windows more than 4 times smaller compared to in silico RT predictions resulting in improved quantification accuracy. iRTs can be determined by any laboratory and shared transparently. The iRT concept has been implemented in Skyline, the most widely used software for MRM experiments. PMID:22577012
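The iRT mapping is a per-run linear calibration against the reference peptides: fit observed RT against known iRT, then apply the line to every other peptide. A sketch with hypothetical reference values (the real iRT kit values differ):

```python
def fit_linear(xs, ys):
    """Least-squares line ys ~ a*xs + b (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical reference peptides: assigned iRT values and the retention
# times observed for them on this particular LC setup (minutes).
ref_irt = [0.0, 25.0, 50.0, 75.0, 100.0]
ref_rt = [5.1, 11.0, 16.9, 22.8, 28.7]

a, b = fit_linear(ref_rt, ref_irt)

def to_irt(rt):
    """Map an observed retention time onto the dimensionless iRT scale."""
    return a * rt + b

print(round(to_irt(16.9), 1))  # the midpoint peptide maps back to iRT 50
```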

  5. A pilot study to explore the feasibility of using the Clinical Care Classification System for developing a reliable costing method for nursing services.

    PubMed

    Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K

    2013-01-01

    While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting on the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches were explored: the Relative Value Unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method and may support a more consistent and reliable approach for costing nursing services.
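The simple cost-to-time method amounts to multiplying documented intervention time by an hourly nursing rate. A minimal sketch, in which the CCC-style codes and the rate are hypothetical:

```python
def nursing_cost(interventions, hourly_cost):
    """Simple cost-to-time sketch: cost = total documented minutes * hourly rate.
    interventions: list of (coded nursing action, minutes) pairs; the codes
    below are hypothetical CCC-style labels, not real catalog entries."""
    total_minutes = sum(minutes for _, minutes in interventions)
    return total_minutes / 60.0 * hourly_cost

shift = [("A01.1 assessment", 15), ("G47.0 medication", 10), ("N58.2 wound care", 20)]
print(nursing_cost(shift, 48.0))  # 45 documented minutes at $48/h -> 36.0
```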

  6. Comparative Validation of the Determination of Sofosbuvir in Pharmaceuticals by Several Inexpensive Ecofriendly Chromatographic, Electrophoretic, and Spectrophotometric Methods.

    PubMed

    El-Yazbi, Amira F

    2017-07-01

    Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection with enhanced antiviral potency compared with earlier analogs. Nevertheless, current editions of the pharmacopeias still do not present any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures that were suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.

  7. Shuttle orbiter boundary layer transition at flight and wind tunnel conditions

    NASA Technical Reports Server (NTRS)

    Goodrich, W. D.; Derry, S. M.; Bertin, J. J.

    1983-01-01

    Hypersonic boundary layer transition data obtained on the windward centerline of the Shuttle orbiter during entry for the first five flights are presented and analyzed. Because the orbiter surface is composed of a large number of thermal protection tiles, the transition data include the effects of distributed roughness arising from tile misalignment and gaps. These data are used as a benchmark for assessing and improving the accuracy of boundary layer transition predictions based on correlations of wind tunnel data taken on both aerodynamically rough and smooth orbiter surfaces. By comparing these two databases, the relative importance of tunnel free stream noise and surface roughness on orbiter boundary layer transition correlation parameters can be assessed. This assessment indicates that accurate predictions of transition times can be made for the orbiter at hypersonic flight conditions by using roughness dominated wind tunnel data. Specifically, times of transition onset and completion are accurately predicted using a correlation based on critical and effective values of a roughness Reynolds number previously derived from wind tunnel data.

  8. Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations

    NASA Astrophysics Data System (ADS)

    Mansfield, Christopher M.; Shoemaker, Christine A.

    1999-05-01

    This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.

  9. Home blood pressure measurement as a screening tool for hypertension in a web-based worksite health promotion programme.

    PubMed

    Niessen, Maurice A J; van der Hoeven, Niels V; van den Born, Bert-Jan H; van Kalken, Coen K; Kraaijenhagen, Roderik A

    2014-10-01

    Guidelines on home blood pressure measurement (HBPM) recommend taking at least 12 measurements. For screening purposes, however, it is preferred to reduce this number. We therefore derived and validated cut-off values to determine hypertension status after the first duplicate reading of a HBPM series in a web-based worksite health promotion programme. Nine hundred forty-five employees were included in the derivation and 528 in the validation cohort, which was divided into a normal (n = 297) and increased cardiometabolic risk subgroup (n = 231), and a subgroup with a history of hypertension (n = 98). Six duplicate home measurements were collected during three consecutive days. Systolic and diastolic readings at the first duplicate measurement were used as predictors for hypertension in a multivariate logistic model. Cut-off values were determined using receiver operating characteristics analysis. Upper (≥ 150 or ≥ 95 mmHg) and lower limit (<135 and <80 mmHg) cut-off values were derived to confirm or reject presence of hypertension after one duplicate reading. The area under the curve was 0.94 (standard error 0.01, 95% confidence interval 0.93-0.95). In 62.5% of participants, hypertension status was determined, with 1.1% false positives and 4.7% false negatives. Performance was similar in participants with high and low cardiometabolic risk, but worse in participants with a history of hypertension (10.4% false negatives). One duplicate home reading is sufficient to accurately assess hypertension status in 62.5% of participants, leaving 37.5% in which the whole HBPM series needs to be completed. HBPM can thus be reliably used as screening tool for hypertension in a working population. © The Author 2013. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
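The derived cut-offs define a three-way triage rule after the first duplicate reading. A direct sketch of the decision logic described in the abstract:

```python
def hbpm_triage(sys_mmhg, dia_mmhg):
    """Triage after the first duplicate home reading, using the abstract's
    cut-offs: confirm hypertension at >=150 or >=95 mmHg, rule it out below
    135 and 80 mmHg, otherwise complete the full HBPM series."""
    if sys_mmhg >= 150 or dia_mmhg >= 95:
        return "hypertensive"
    if sys_mmhg < 135 and dia_mmhg < 80:
        return "normotensive"
    return "complete full series"

print(hbpm_triage(152, 88))  # above the upper systolic limit
print(hbpm_triage(120, 75))  # below both lower limits
print(hbpm_triage(140, 85))  # indeterminate: finish all 12 measurements
```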

  10. Poly(aspartic acid) with adjustable pH-dependent solubility.

    PubMed

    Németh, Csaba; Gyarmati, Benjámin; Abdullin, Timur; László, Krisztina; Szilágyi, András

    2017-02-01

    Poly(aspartic acid) (PASP) derivatives with adjustable pH-dependent solubility were synthesized and characterized to establish the relationship between their structure and solubility in order to predict their applicability as a basic material for enteric coatings. Polysuccinimide, the precursor of PASP, was modified with short chain alkylamines, and the residual succinimide rings were subsequently opened to prepare the corresponding PASP derivatives. Study of the effect of the type and concentration of the side groups on the pH-dependent solubility of PASP showed that solubility can be adjusted by proper selection of the chemical structure. The Henderson-Hasselbalch (HH) and the extended HH equations were used to describe the pH-dependent solubility of the polymers quantitatively. The estimate provided by the HH equation is poor, but an accurate description of the pH-dependent solubility can be found with the extended HH equation. The dissolution rate of a polymer film prepared from a selected PASP derivative was determined by fluorescence marking. The film dissolved rapidly when the pH was increased above its pKa. Cellular viability tests show that PASP derivatives are non-toxic to a human cell line. These polymers are thus of great interest as starting materials for enteric coatings. Poly(amino acid) type biocompatible polymers were synthesized for future use as pharmaceutical film coatings. To this end, we tailored the pH-dependent solubility of poly(aspartic acid) (PASP). It was found that both the solubility and the pKa values of the modified PASP depended strongly on composition. Fluorescent marking was used to characterize the dissolution of a chosen PASP derivative. In acidic media only a negligible amount of the polymer dissolved, but dissolution was very fast and complete at the pH values that prevail in the small intestine.
As a consequence, enteric coatings based on such PASP derivatives may be used for drug delivery in the gastrointestinal tract. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
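The classical and extended Henderson-Hasselbalch solubility forms can be written as one function. This is a sketch assuming the common parameterization S = S0·(1 + 10^(n·(pH − pKa))) for a weak acid, where n = 1 recovers the classical HH form; the paper's exact fitted form and constants are not given in the abstract:

```python
def hh_solubility(pH, pKa, s0, n=1.0):
    """Henderson-Hasselbalch-type solubility of a weak acid:
    S = S0 * (1 + 10**(n * (pH - pKa))).
    n = 1 is the classical HH form; n != 1 is a slope-adjusted
    ('extended') variant often fitted to polyelectrolytes."""
    return s0 * (1.0 + 10.0 ** (n * (pH - pKa)))

# Illustrative: sparingly soluble below pKa, dissolving rapidly above it.
print(round(hh_solubility(4.0, 5.0, 0.1), 3))  # one unit below pKa
print(round(hh_solubility(6.5, 5.0, 0.1), 3))  # well above pKa
```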

  11. Deriving stellar parameters with the SME software package

    NASA Astrophysics Data System (ADS)

    Piskunov, N.

    2017-09-01

    Photometry and spectroscopy are complementary tools for deriving accurate stellar parameters. Here I present one of the popular packages for stellar spectroscopy called SME with the emphasis on the latest developments and error assessment for the derived parameters.

  12. Refining Ovarian Cancer Test accuracy Scores (ROCkeTS): protocol for a prospective longitudinal test accuracy study to validate new risk scores in women with symptoms of suspected ovarian cancer.

    PubMed

    Sundar, Sudha; Rick, Caroline; Dowling, Francis; Au, Pui; Snell, Kym; Rai, Nirmala; Champaneria, Rita; Stobart, Hilary; Neal, Richard; Davenport, Clare; Mallett, Susan; Sutton, Andrew; Kehoe, Sean; Timmerman, Dirk; Bourne, Tom; Van Calster, Ben; Gentry-Maharaj, Aleksandra; Menon, Usha; Deeks, Jon

    2016-08-09

    Ovarian cancer (OC) is associated with non-specific symptoms such as bloating, making accurate diagnosis challenging: only 1 in 3 women with OC presents through primary care referral. National Institute for Health and Care Excellence guidelines recommend sequential testing with CA125 and routine ultrasound in primary care. However, these diagnostic tests have limited sensitivity or specificity. Improving accurate triage in women with vague symptoms is likely to improve mortality by streamlining referral and care pathways. The Refining Ovarian Cancer Test Accuracy Scores (ROCkeTS; HTA 13/13/01) project will derive and validate new tests/risk prediction models that estimate the probability of having OC in women with symptoms. This protocol refers to the prospective study only (phase III). ROCkeTS comprises four parallel phases. The full ROCkeTS protocol can be found at http://www.birmingham.ac.uk/ROCKETS. Phase III is a prospective test accuracy study. The study will recruit 2450 patients from 15 UK sites. Recruited patients complete symptom and anxiety questionnaires, donate a serum sample and undergo ultrasound scored as per International Ovarian Tumour Analysis (IOTA) criteria. Recruitment is at rapid access clinics, emergency departments and elective clinics. Models to be evaluated include those based on ultrasound derived by the IOTA group and novel models derived from analysis of existing data sets. Estimates of sensitivity, specificity, c-statistic (area under receiver operating curve), positive predictive value and negative predictive value of diagnostic tests are evaluated and a calibration plot for models will be presented. ROCkeTS has received ethical approval from the NHS West Midlands REC (14/WM/1241) and is registered on the controlled trials website (ISRCTN17160843) and the National Institute of Health Research Cancer and Reproductive Health portfolios. Published by the BMJ Publishing Group Limited.
For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. The Next Generation of High-Speed Dynamic Stability Wind Tunnel Testing (Invited)

    NASA Technical Reports Server (NTRS)

    Tomek, Deborah M.; Sewall, William G.; Mason, Stan E.; Szchur, Bill W. A.

    2006-01-01

    Throughout industry, accurate measurement and modeling of dynamic derivative data at high-speed conditions has been an ongoing challenge. The expansion of flight envelopes and non-conventional vehicle design has greatly increased the demand for accurate prediction and modeling of vehicle dynamic behavior. With these issues in mind, NASA Langley Research Center (LaRC) embarked on the development and shakedown of a high-speed dynamic stability test technique that addresses the longstanding problem of accurately measuring dynamic derivatives outside the low-speed regime. The new test technique was built upon legacy technology, replacing an antiquated forced oscillation system, and greatly expanding the capabilities beyond classic forced oscillation testing at both low and high speeds. The modern system is capable of providing a snapshot of dynamic behavior over a periodic cycle for varying frequencies, not just a damping derivative term at a single frequency.

  14. Non-equilibrium thermionic electron emission for metals at high temperatures

    NASA Astrophysics Data System (ADS)

    Domenech-Garret, J. L.; Tierno, S. P.; Conde, L.

    2015-08-01

    Stationary thermionic electron emission currents from heated metals are compared against an analytical expression derived using a non-equilibrium quantum kappa energy distribution for the electrons. The latter depends on the parameter κ(T), which decreases with increasing temperature; it can be estimated from raw experimental data and characterizes the departure of the electron energy spectrum from equilibrium Fermi-Dirac statistics. The calculations accurately predict the measured thermionic emission currents for both high and moderate temperature ranges. The Richardson-Dushman law governs electron emission for large values of kappa or, equivalently, moderate metal temperatures. The high energy tail in the electron energy distribution function that develops at higher temperatures or lower kappa values increases the emission currents well over the predictions of the classical expression. This also permits the quantitative estimation of the departure of the metal electrons from the equilibrium Fermi-Dirac statistics.
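For reference, the classical Richardson-Dushman law that the kappa-distribution treatment generalizes is J = A·T²·exp(-W/kT). A sketch with an illustrative tungsten-like work function (the material values here are assumptions, not the paper's data):

```python
import math

def richardson_dushman(T, work_fn_eV, A=1.20173e6):
    """Classical Richardson-Dushman current density J = A * T**2 * exp(-W/kT),
    in A/m^2, with A the Richardson constant (A/m^2/K^2) and W the work function."""
    k_eV = 8.617333e-5  # Boltzmann constant in eV/K
    return A * T**2 * math.exp(-work_fn_eV / (k_eV * T))

# Emission grows extremely steeply with temperature at a fixed work function.
print(richardson_dushman(1500.0, 4.5))
print(richardson_dushman(2500.0, 4.5))
```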

  15. Phase-Coherent Measurement of the Hydrogen 1S-2S Transition Frequency with an Optical Frequency Interval Divider Chain

    NASA Astrophysics Data System (ADS)

    Udem, Th.; Huber, A.; Gross, B.; Reichert, J.; Prevedelli, M.; Weitz, M.; Hänsch, T. W.

    1997-10-01

    We have measured the absolute frequency of the hydrogen 1S-2S two-photon resonance with an accuracy of 3.4 parts in 10^13 by comparing it with the 28th harmonic of a methane-stabilized 3.39 μm He-Ne laser. A frequency mismatch of 2.1 THz at the 7th harmonic is bridged with a phase-locked chain of five optical frequency interval dividers. From the measured frequency f1S-2S = 2 466 061 413 187.34(84) kHz and published data of other authors we derive precise new values of the Rydberg constant, R∞ = 10 973 731.568 639(91) m^-1, and of the Lamb shift of the 1S ground state, L1S = 8172.876(29) MHz. These are now the most accurate values available.
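
    An optical frequency interval divider stage phase-locks a third laser to the midpoint (f1 + f2)/2 of two input frequencies, so each stage halves the gap that remains to be bridged. A back-of-envelope sketch of what the five-stage chain accomplishes:

```python
# Each interval divider stage halves the remaining frequency gap,
# so n cascaded stages divide it by 2**n.
gap_hz = 2.1e12          # mismatch at the 7th harmonic (from the abstract)
stages = 5
residual_hz = gap_hz / 2**stages
residual_ghz = residual_hz / 1e9   # -> 65.625
```

The ~66 GHz residual interval is far easier to measure by other means than the original 2.1 THz gap.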

  16. 3-D surface profilometry based on modulation measurement by applying wavelet transform method

    NASA Astrophysics Data System (ADS)

    Zhong, Min; Chen, Feng; Xiao, Chao; Wei, Yongchao

    2017-01-01

    A new analysis of 3-D surface profilometry based on the modulation measurement technique, applying the Wavelet Transform method, is proposed. As a tool excelling in multi-resolution and localization in the time and frequency domains, the Wavelet Transform method, with its localized time-frequency analysis ability and effective de-noising capacity, can extract the modulation distribution more accurately than the Fourier Transform method. Especially in the analysis of complex objects, more details of the measured object are retained. In this paper, the theoretical derivation of the Wavelet Transform method that obtains the modulation values from a captured fringe pattern is given. Both computer simulation and an elementary experiment are used to show the validity of the proposed method by comparison with the results of the Fourier Transform method. The results show that the Wavelet Transform method performs better than the Fourier Transform method in modulation retrieval.
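
    As a sketch of the idea (not the paper's exact derivation), the modulation at each pixel can be taken from the ridge of a Morlet continuous wavelet transform: with the normalization below, the ridge magnitude approximates the local fringe modulation. The signal and scale range here are synthetic:

```python
import numpy as np

def morlet(t, s, w0=6.0):
    # Complex Morlet wavelet at scale s (admissibility correction negligible for w0 >= 6)
    x = t / s
    return np.exp(1j * w0 * x - x**2 / 2) / np.sqrt(s)

def modulation(signal, scales):
    # CWT by direct convolution; ridge = max |W| over scales, normalized so
    # that for a pure fringe A*cos(w*x) the ridge value approximates A.
    n = len(signal)
    t = np.arange(n) - n // 2
    ridge = np.zeros(n)
    for s in scales:
        kern = np.conj(morlet(t, s))[::-1]
        W = np.convolve(signal, kern, mode="same")
        ridge = np.maximum(ridge, 2.0 * np.abs(W) / np.sqrt(2.0 * np.pi * s))
    return ridge

x = np.arange(512)
fringe = 0.5 * np.cos(2 * np.pi * 0.05 * x)   # true modulation depth 0.5
m = modulation(fringe, np.linspace(12, 30, 40))
# away from the edges, m approximates the true modulation 0.5
```

In a real profilometry pipeline this would be applied row by row to the captured fringe image, and the height map recovered from the modulation distribution.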

  17. Accurate collision-induced line-coupling parameters for the fundamental band of CO in He - Close coupling and coupled states scattering calculations

    NASA Technical Reports Server (NTRS)

    Green, Sheldon; Boissoles, J.; Boulet, C.

    1988-01-01

    The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close coupling, i.e., numerically exact values, were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines to R(10). IOS values are less accurate, but, owing to their simplicity, may nonetheless prove useful as has been recently demonstrated.

  18. Method for hyperspectral imagery exploitation and pixel spectral unmixing

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang (Inventor)

    2003-01-01

    A hybrid approach to efficiently exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve the abundance vector for the first pixel of a hyperspectral image cube. This abundance vector is then used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point speeds up the evolution of the genetic algorithm. After an accurate abundance estimate is obtained, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for that pixel with the robust filter, and again using the genetic algorithm to refine the estimate efficiently. This iteration continues until all pixels in the hyperspectral image cube have been processed.
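
    The patented pipeline alternates a genetic algorithm with a robust (Kalman-type) filter. As a rough, hypothetical illustration of why warm-starting each pixel from its neighbour's solution helps, the sketch below unmixes spatially correlated synthetic pixels by projected gradient descent and counts solver iterations with warm versus cold starts; the solver is a stand-in, not the patent's method:

```python
import numpy as np

rng = np.random.default_rng(0)
bands, members, pixels = 30, 3, 60
E = rng.random((bands, members))              # endmember spectra (columns)

# smoothly varying true abundances, so neighbouring pixels resemble each other
a = np.ones(members) / members
true_a = []
for _ in range(pixels):
    a = np.clip(a + 0.02 * rng.standard_normal(members), 1e-3, None)
    a = a / a.sum()
    true_a.append(a.copy())
true_a = np.array(true_a)
cube = true_a @ E.T + 1e-3 * rng.standard_normal((pixels, bands))

def unmix(x, a0, lr=0.02, tol=1e-8, max_iter=2000):
    """Nonnegative least-squares unmixing by projected gradient descent."""
    a = a0.copy()
    for k in range(max_iter):
        a_new = np.clip(a - lr * (E.T @ (E @ a - x)), 0.0, None)
        if np.linalg.norm(a_new - a) < tol:
            return a_new, k
        a = a_new
    return a, max_iter

cold_iters = warm_iters = 0
prev = np.ones(members) / members
estimates = []
for x in cube:
    _, k_cold = unmix(x, np.ones(members) / members)   # fixed cold start
    est, k_warm = unmix(x, prev)                       # warm start from last pixel
    cold_iters += k_cold
    warm_iters += k_warm
    prev = est
    estimates.append(est)
err = np.abs(np.array(estimates) - true_a).mean()
# warm starts need fewer total iterations when neighbouring pixels are correlated
```

The same economics drive the patent's design: the filter's one-step estimate puts the genetic algorithm close to the solution, so its evolution converges quickly.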

  19. Tracking unaccounted water use in data sparse arid environment

    NASA Astrophysics Data System (ADS)

    Hafeez, M. M.; Edraki, M.; Ullah, M. K.; Chemin, Y.; Sixsmith, J.; Faux, R.

    2009-12-01

    Hydrological knowledge of irrigated farms within the inundation plains of the Murray Darling Basin (MDB) is very limited, and the quality and reliability of the observation network have been declining rapidly over the past decade. This paper focuses on Land Surface Diversions (LSD), which encompass all forms of surface water diversion except the direct extraction of water from rivers, watercourses and lakes by farmers for the purposes of irrigation and stock and domestic supply. Accurate measurement of LSD is very challenging, due to the practical difficulties associated with separating its different components and estimating them accurately for a large catchment. The inadequacy of current methods of measuring and monitoring LSD poses severe limitations on existing and proposed policies for managing such diversions. It is commonly believed that LSD comprises 20-30% of total diversions from river valleys in the MDB, but scientific estimates of LSD do not exist, because such diversions were considered unimportant prior to the onset of the recent drought in Australia. There is a need to develop hydrological water balance models, coupling variables derived from on-ground hydrological measurements with remote sensing techniques, to accurately model LSD. Typically, the hydrological water balance components for farm/catchment scale models include irrigation inflow, outflow, rainfall, runoff, evapotranspiration, soil moisture change and deep percolation. Actual evapotranspiration (ETa) is the largest and single most important component of the hydrological water balance model. Accurate quantification of all components of the hydrological water balance model at farm/catchment scale is of prime importance for estimating the volume of LSD. A hydrological water balance model was developed to calculate LSD at 6 selected pilot farms. 
    The catchment hydrological water balance model is being developed using selected parameters derived from the farm-scale hydrological water balance model. LSD results obtained through the modelling process have been compared with LSD estimates derived from ground observations at the 6 pilot farms. The differences between the values are between 3 and 5 percent of the water inputs, which is within the confidence limits expected from such an analysis. Similarly, the LSD values at the catchment scale have been estimated with high confidence. The hydrological water balance models at farm and catchment scale provide reliable quantification of LSD. Improved LSD estimates can guide water management decisions at farm to catchment scale and could be instrumental in enhancing the integrity of the water allocation process, making it fairer and more equitable across stakeholders.
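
    One simple way to frame the approach: once every other balance component is measured or modelled, the unaccounted diversion falls out as the closure residual. A toy sketch with invented volumes (ML):

```python
# Hypothetical farm-scale water balance: the residual of inputs minus
# accounted outputs is attributed to land surface diversions (LSD).
# All volumes (ML) are invented for illustration.
inflows = {"irrigation_inflow": 1200.0, "rainfall": 300.0}
outflows = {"outflow": 150.0, "evapotranspiration": 900.0,
            "deep_percolation": 120.0, "soil_moisture_change": 80.0}

lsd = sum(inflows.values()) - sum(outflows.values())   # closure residual
lsd_share = lsd / sum(inflows.values())                # fraction of total inputs
```

In practice each term carries measurement error (especially ETa), which is why the paper reports residuals within 3-5 percent of water inputs as acceptable closure.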

  20. Estimation of stature from radiologic anthropometry of the lumbar vertebral dimensions in Chinese.

    PubMed

    Zhang, Kui; Chang, Yun-feng; Fan, Fei; Deng, Zhen-hua

    2015-11-01

    The present study assessed the relationship between radiologic anthropometry of the lumbar vertebral dimensions and stature in Chinese and developed regression formulae to estimate stature from these dimensions. A total of 412 normal, healthy volunteers, comprising 206 males and 206 females, were recruited. Linear regression analyses were performed to assess the correlation between stature and the lengths of various segments of the lumbar vertebral column. Among the regression equations created for a single variable, the predictive value was greatest for the reconstruction of stature from the lumbar segment in both sexes and in subgroup analyses. When individual vertebral bodies were used, the height of the posterior vertebral body of L3 gave the most accurate results for the male group; the height of the central vertebral body of L1 provided the most accurate results for the female group and the female subgroup aged above 45 years; and the height of the central vertebral body of L3 gave the most accurate results for the 20-45-year subgroups of both sexes and the male subgroup aged above 45 years. The height of the anterior vertebral body of L5 gave the least accurate results, except that the height of the anterior vertebral body of L4 was the least accurate for the male subgroup aged above 45 years. As expected, multiple regression equations were more successful than equations derived from a single variable. These observations suggest that lumbar vertebral dimensions are useful for stature estimation in the Chinese population. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
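
    The single-variable equations take the familiar form stature = a × dimension + b. A toy least-squares sketch; the coefficients and data below are synthetic, not the paper's:

```python
import numpy as np

# Fit stature (cm) on a lumbar-segment length (mm) by ordinary least squares.
# Slope 0.75 and intercept 60 are invented "ground truth" for the simulation.
rng = np.random.default_rng(5)
seg = rng.uniform(120.0, 160.0, 200)                    # lumbar segment, mm
stature = 60.0 + 0.75 * seg + rng.normal(0.0, 2.5, 200)  # cm, with noise

slope, intercept = np.polyfit(seg, stature, 1)
pred = intercept + slope * seg
# standard error of estimate, the usual accuracy figure for stature formulae
see = np.sqrt(((stature - pred) ** 2).sum() / (len(seg) - 2))
```

Studies of this kind rank candidate dimensions by their standard error of estimate and correlation, which is how the paper identifies the lumbar segment as the best single predictor.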

  1. Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage under clinical imaging conditions.

    PubMed

    Lukas, Vanessa A; Fishbein, Kenneth W; Reiter, David A; Lin, Ping-Chang; Schneider, Erika; Spencer, Richard G

    2015-07-01

    To evaluate the sensitivity and specificity of classification of pathomimetically degraded bovine nasal cartilage at 3 Tesla and 37°C using univariate MRI measurements of both pure parameter values and intensities of parameter-weighted images. Pre- and posttrypsin degradation values of T1, T2, T2*, magnetization transfer ratio (MTR), and apparent diffusion coefficient (ADC), and corresponding weighted images, were analyzed. Classification based on the Euclidean distance was performed and the quality of classification was assessed through sensitivity, specificity and accuracy (ACC). The classifiers with the highest accuracy values were ADC (ACC = 0.82 ± 0.06), MTR (ACC = 0.78 ± 0.06), T1 (ACC = 0.99 ± 0.01), T2 derived from a three-dimensional (3D) spin-echo sequence (ACC = 0.74 ± 0.05), and T2 derived from a 2D spin-echo sequence (ACC = 0.77 ± 0.06), along with two of the diffusion-weighted signal intensities (b = 333 s/mm^2: ACC = 0.80 ± 0.05; b = 666 s/mm^2: ACC = 0.85 ± 0.04). In particular, T1 values differed substantially between the groups, resulting in atypically high classification accuracy. The second-best classifier, diffusion weighting with b = 666 s/mm^2, as well as all other parameters evaluated, exhibited substantial overlap between pre- and postdegradation groups, resulting in decreased accuracies. Classification according to T1 values showed excellent test characteristics (ACC = 0.99), with several other parameters also showing reasonable performance (ACC > 0.70). Of these, diffusion weighting is particularly promising as a potentially practical clinical modality. As in previous work, we again find that highly statistically significant group mean differences do not necessarily translate into accurate clinical classification rules. © 2014 Wiley Periodicals, Inc.
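
    The classification step can be sketched as a nearest-centroid rule under Euclidean distance, scored by sensitivity, specificity and accuracy. All values below are synthetic, not the study's measurements:

```python
import numpy as np

# Univariate nearest-centroid classification: assign each sample to the
# nearer of the two group means (1-D Euclidean distance). Synthetic
# T1-like values with an assumed post-degradation shift.
rng = np.random.default_rng(3)
pre = rng.normal(1.10, 0.05, 40)     # control group
post = rng.normal(1.35, 0.05, 40)    # degraded group (hypothetical shift)
values = np.concatenate([pre, post])
truth = np.concatenate([np.zeros(40, bool), np.ones(40, bool)])

m_pre, m_post = pre.mean(), post.mean()
pred = np.abs(values - m_post) < np.abs(values - m_pre)   # True -> "degraded"

tp = int((pred & truth).sum());   fn = int((~pred & truth).sum())
tn = int((~pred & ~truth).sum()); fp = int((pred & ~truth).sum())
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / values.size
```

With a large group separation relative to spread (as the study found for T1), this simple rule classifies nearly perfectly; overlapping distributions drive the accuracy toward chance regardless of the group-mean p-value, which is the abstract's closing point.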

  2. Whole-lesion histogram analysis metrics of the apparent diffusion coefficient as a marker of breast lesions characterization at 1.5 T.

    PubMed

    Bougias, H; Ghiatas, A; Priovolos, D; Veliou, K; Christou, A

    2017-05-01

    To retrospectively assess the role of whole-lesion apparent diffusion coefficient (ADC) in the characterization of breast tumors by comparing different histogram metrics. 49 patients with 53 breast lesions underwent magnetic resonance imaging (MRI). ADC histogram parameters, including the mean, mode, 10th/50th/90th percentile, skewness, kurtosis, and entropy ADCs, were derived for the whole-lesion volume in each patient. The Mann-Whitney U-test and the area under the receiver-operating characteristic curve (AUC) were used for statistical analysis. The mean, mode and 10th/50th/90th percentile ADC values were significantly lower in malignant lesions compared with benign ones (all P < 0.0001), while skewness was significantly higher in malignant lesions (P = 0.02). However, no significant difference was found between entropy and kurtosis values in malignant lesions compared with benign ones (P = 0.06 and P = 1.00, respectively). Univariate logistic regression showed that the 10th and 50th percentile ADCs yielded the highest AUCs (0.985; 95% confidence interval [CI]: 0.902, 1.000 and 0.982; 95% CI: 0.896, 1.000, respectively), whereas the kurtosis value yielded the lowest AUC (0.500; 95% CI: 0.355, 0.645), indicating that the 10th and 50th percentile ADC values may be more accurate for lesion discrimination. Whole-lesion ADC histogram analysis could be a helpful index in the characterization and differentiation of benign and malignant breast lesions, with the 10th and 50th percentile ADCs being the most accurate discriminators. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
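
    The histogram metrics themselves are straightforward to compute from a whole-lesion voxel sample. A sketch with synthetic ADC values; note the kurtosis here is excess kurtosis, which may differ from the study's convention:

```python
import numpy as np

# Whole-lesion ADC histogram metrics from a synthetic voxel sample
# (units of 1e-3 mm^2/s; the distribution is invented).
rng = np.random.default_rng(7)
adc = rng.normal(1.0, 0.2, 5000)

p10, p50, p90 = np.percentile(adc, [10, 50, 90])
mean = adc.mean()
m2 = ((adc - mean) ** 2).mean()
skewness = ((adc - mean) ** 3).mean() / m2 ** 1.5
kurtosis = ((adc - mean) ** 4).mean() / m2 ** 2 - 3.0   # excess kurtosis

# Shannon entropy of the binned histogram (64 bins assumed)
counts, _ = np.histogram(adc, bins=64)
p = counts[counts > 0] / counts.sum()
entropy = float(-(p * np.log2(p)).sum())
```

In the study, the low-tail percentiles (10th/50th) discriminated best, consistent with malignancy lowering the bulk of the ADC distribution rather than reshaping its tails.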

  3. A gravimetric method for the measurement of total spontaneous activity in rats.

    PubMed

    Biesiadecki, B J; Brand, P H; Koch, L G; Britton, S L

    1999-10-01

    Currently available methods for the measurement of spontaneous activity of laboratory animals require expensive, specialized equipment and may not be suitable for use in low light conditions with nocturnal species. We developed a gravimetric method that uses common laboratory equipment to quantify the total spontaneous activity of rats and is suitable for use in the dark. The rat in its home cage is placed on a top-loading electronic balance interfaced to a computer. Movements are recorded by the balance as changes in weight and transmitted to the computer at 10 Hz. Data are analyzed on-line to derive the absolute value of the difference in weight between consecutive samples, and the one-second average of the absolute values is calculated. The averages are written to file for off-line analysis and summed over the desired observation period to provide a measure of total spontaneous activity. The results of in vitro experiments demonstrated that: 1) recorded weight changes were not influenced by position of the weight on the bottom of the cage, 2) values recorded from a series of weight changes were not significantly different from the calculated values, 3) the constantly decreasing force exerted by a swinging pendulum placed on the balance was accurately recorded, 4) the measurement of activity was not influenced by the evaporation of a fluid such as urine, and 5) the method can detect differences in the activity of sleeping and waking rats over a 10-min period, as well as during 4-hr intervals recorded during active (night-time) and inactive (daytime) periods. These results demonstrate that this method provides an inexpensive, accurate, and noninvasive way to quantify the spontaneous activity of small animals.
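
    The processing chain described above (absolute sample-to-sample differences at 10 Hz, one-second averages, then a sum over the observation period) can be transcribed directly; the weight trace below is synthetic:

```python
import numpy as np

# Gravimetric activity score: |Δweight| between consecutive 10 Hz samples,
# averaged over each second, summed over the observation period.
rate = 10                          # samples per second
w = np.full(101, 300.0)            # 10 s of balance readings (g), synthetic
w[50:] += 5.0                      # the animal shifts its weight once

diffs = np.abs(np.diff(w))                         # 100 sample-to-sample changes
per_second = diffs.reshape(-1, rate).mean(axis=1)  # one-second averages
total_activity = per_second.sum()                  # summed activity score
```

A motionless animal scores zero regardless of its absolute weight, and a single 5 g shift contributes 5/10 = 0.5 to the total, so the score scales with both the size and the frequency of movements.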

  4. Accuracies of genomic breeding values in American Angus beef cattle using K-means clustering for cross-validation.

    PubMed

    Saatchi, Mahdi; McClure, Mathew C; McKay, Stephanie D; Rolf, Megan M; Kim, JaeWoo; Decker, Jared E; Taxis, Tasia M; Chapple, Richard H; Ramey, Holly R; Northcutt, Sally L; Bauck, Stewart; Woodward, Brent; Dekkers, Jack C M; Fernando, Rohan L; Schnabel, Robert D; Garrick, Dorian J; Taylor, Jeremy F

    2011-11-28

    Genomic selection is a recently developed technology that is beginning to revolutionize animal breeding. The objective of this study was to estimate marker effects to derive prediction equations for direct genomic values for 16 routinely recorded traits of American Angus beef cattle and quantify corresponding accuracies of prediction. Deregressed estimated breeding values were used as observations in a weighted analysis to derive direct genomic values for 3570 sires genotyped using the Illumina BovineSNP50 BeadChip. These bulls were clustered into five groups using K-means clustering on pedigree estimates of additive genetic relationships between animals, with the aim of increasing within-group and decreasing between-group relationships. All five combinations of four groups were used for model training, with cross-validation performed in the group not used in training. Bivariate animal models were used for each trait to estimate the genetic correlation between deregressed estimated breeding values and direct genomic values. Accuracies of direct genomic values ranged from 0.22 to 0.69 for the studied traits, with an average of 0.44. Predictions were more accurate when animals within the validation group were more closely related to animals in the training set. When training and validation sets were formed by random allocation, the accuracies of direct genomic values ranged from 0.38 to 0.85, with an average of 0.65, reflecting the greater relationship between animals in training and validation. The accuracies of direct genomic values obtained from training on older animals and validating in younger animals were intermediate to the accuracies obtained from K-means clustering and random clustering for most traits. The genetic correlation between deregressed estimated breeding values and direct genomic values ranged from 0.15 to 0.80 for the traits studied. 
    These results suggest that genomic estimates of genetic merit can be produced in beef cattle at a young age, but the recurrent inclusion of genotyped sires in retraining analyses will be necessary to routinely deliver the most accurate direct genomic values to the industry.
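
    The fold-construction idea can be illustrated with a toy K-means on the rows of a block-structured relationship matrix. The pedigree below is invented: three unrelated families with within-family additive relationship 0.5 and zero between families:

```python
import numpy as np

def kmeans(X, centers, iters=25):
    # Plain Lloyd's algorithm; centers are seeded explicitly so the toy
    # example is deterministic.
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels

# Additive relationship matrix: 3 families of 10, relationship 0.5 within,
# 0 between, 1 on the diagonal (invented values).
fam, k = 10, 3
A = np.kron(np.eye(k), np.full((fam, fam), 0.5))
np.fill_diagonal(A, 1.0)

# cluster animals by their rows of A, seeding one center per family
labels = kmeans(A, A[[0, fam, 2 * fam]].copy())
```

Each family lands in its own cluster, so leave-one-group-out cross-validation on these folds minimizes relatedness between training and validation animals, which is exactly why the K-means accuracies in the study were lower (more honest) than the random-allocation accuracies.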

  6. The value of innovation under value-based pricing.

    PubMed

    Moreno, Santiago G; Ray, Joshua A

    2016-01-01

    The role of cost-effectiveness analysis (CEA) in incentivizing innovation is controversial. Critics of CEA argue that its use for pricing purposes disregards the 'value of innovation' reflected in new drug development, whereas supporters of CEA highlight that the value of innovation is already accounted for. Our objective in this article is to outline the limitations of the conventional CEA approach, while proposing an alternative method of evaluation that captures the value of innovation more accurately. The adoption of a new drug benefits present and future patients (with cost implications) for as long as the drug is part of clinical practice. Incidence patients and off-patent prices are identified as two key missing features preventing the conventional CEA approach from capturing 1) benefit to future patients and 2) future savings from off-patent prices. The proposed CEA approach incorporates these two features to derive the total lifetime value of an innovative drug (i.e., the value of innovation). The conventional CEA approach tends to underestimate the value of innovative drugs by disregarding the benefit to future patients and savings from off-patent prices. As a result, innovative drugs are underpriced, only allowing manufacturers to capture approximately 15% of the total value of innovation during the patent protection period. In addition to including the incidence population and off-patent price, the alternative approach proposes pricing new drugs by first negotiating the share of value of innovation to be appropriated by the manufacturer (>15%?) and payer (<85%?), in order to then identify the drug price that satisfies this condition. We argue for a modification to the conventional CEA approach that integrates the total lifetime value of innovative drugs into CEA, by taking into account off-patent pricing and future patients. 
The proposed approach derives a price that allows manufacturers to capture an agreed share of this value, thereby incentivizing innovation, while supporting health-care systems to pursue dynamic allocative efficiency. However, the long-term sustainability of health-care systems must be assessed before this proposal is adopted by policy makers.
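
    A back-of-envelope illustration of the share argument, with invented numbers chosen to land near the ~15% figure quoted above:

```python
# Hypothetical lifetime-value split for an innovative drug. Every number
# here is invented for illustration; only the ~15% share echoes the abstract.
patent_years, off_patent_years = 12, 20
patients_per_year = 10_000          # incident patients (assumed constant)
value_per_patient = 30_000.0        # monetized health benefit (assumed)
price_on_patent = 12_000.0          # launch price under conventional CEA (assumed)

lifetime_value = value_per_patient * patients_per_year * (patent_years + off_patent_years)
captured = price_on_patent * patients_per_year * patent_years
share = captured / lifetime_value   # manufacturer's share of total lifetime value
```

Because future patients and the long off-patent tail dominate the denominator, the manufacturer's share stays small; the proposed approach would instead negotiate this share explicitly and solve for the price that delivers it.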

  8. Parameterization of deformed nuclei for Glauber modeling in relativistic heavy ion collisions

    DOE PAGES

    Sorensen, P.; Tang, A. H.; Videbaek, F.; ...

    2015-08-04

    In this study, the density distributions of large nuclei are typically modeled with a Woods-Saxon distribution characterized by a radius R0 and skin depth a. Deformation parameters β are then introduced to describe non-spherical nuclei using an expansion in spherical harmonics, R0(1 + β2 Y20 + β4 Y40). But when a nucleus is non-spherical, the R0 and a inferred from electron scattering experiments that integrate over all nuclear orientations cannot be used directly as the parameters in the Woods-Saxon distribution. In addition, the β2 values typically derived from the reduced electric quadrupole transition probability B(E2)↑ are not directly related to the β2 values used in the spherical harmonic expansion. B(E2)↑ is more accurately related to the intrinsic quadrupole moment Q0 than to β2. One can, however, calculate Q0 for a given β2 and then derive B(E2)↑ from Q0. In this paper we calculate and tabulate the R0, a, and β2 values that, when used in a Woods-Saxon distribution, give results consistent with electron scattering data. We then present calculations of the second and third harmonic participant eccentricity (ε2 and ε3) with the new and old parameters. We demonstrate that ε3 is particularly sensitive to a, and argue that using an incorrect value of a has important implications for the extraction of the viscosity to entropy ratio (η/s) of the QGP created in heavy ion collisions.
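
    The deformed Woods-Saxon form is easy to evaluate directly; a sketch with illustrative parameters, not the paper's tabulated values:

```python
import numpy as np

def deformed_woods_saxon(r, theta, R0=6.8, a=0.55, beta2=0.28, beta4=0.09, rho0=1.0):
    # rho(r, theta) = rho0 / (1 + exp((r - R(theta)) / a)), with the
    # angle-dependent radius R(theta) = R0 * (1 + beta2*Y20 + beta4*Y40).
    # R0, a, beta2, beta4 here are illustrative placeholders.
    c = np.cos(theta)
    Y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * c**2 - 1.0)
    Y40 = 3.0 / (16.0 * np.sqrt(np.pi)) * (35.0 * c**4 - 30.0 * c**2 + 3.0)
    R = R0 * (1.0 + beta2 * Y20 + beta4 * Y40)
    return rho0 / (1.0 + np.exp((r - R) / a))

# with beta2 = beta4 = 0 this reduces to the spherical Woods-Saxon form,
# falling to half the central density at r = R0
```

A Glauber model samples nucleon positions from this density, which is why consistent (R0, a, β2) triples matter for the eccentricities ε2 and ε3.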

  9. Phenological indicators derived with CO2 flux, MODIS image and ground monitor at a temperate mixed forest and an alpine shrub

    NASA Astrophysics Data System (ADS)

    Zhang, Leiming; Cao, Peiyu; Li, Shenggong; Yu, Guirui; Zhang, Junhui; Li, Yingnian

    2016-04-01

    Accurately assessing changes in phenology and their relationship with ecosystem gross primary productivity (GPP) is one of the key issues in the context of global change studies. In this study, an alpine shrubland meadow in Haibei (HBS) on the Qinghai-Tibetan plateau and a broad-leaved Korean pine forest in Changbai Mountain (CBM) in Northeastern China were selected. Based on long-term GPP from eddy flux measurements and the Normalized Difference Vegetation Index (NDVI) from remote sensing, phenological indicators including the start of growing season (SOS), the end of growing season (EOS), and the growing season length (GSL) since 2003 were derived via multiple methods, and the influences of phenology variation on GPP were then explored. Compared with ground phenology observations of dominant plant species, both GPP- and NDVI-derived SOS and EOS exhibited similar interannual trends. GPP-derived SOS was quite close to NDVI-derived SOS, but GPP-derived EOS differed significantly from NDVI-derived EOS, leading to a significant difference between GPP- and NDVI-derived GSL. Relative to SOS, EOS presented larger differences between the extraction methods, indicating large uncertainty in accurately defining EOS. In general, among the methods used, the threshold methods produced the most satisfactory assessment of phenology change. This study highlights that harmonizing flux measurements, remote sensing and ground monitoring is a major challenge that needs further consideration in phenology studies, especially for the accurate extraction of EOS. Key words: phenological variation, carbon flux, vegetation index, vegetation growth, interannual variability
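
    A common threshold method (a 20% amplitude threshold is assumed here; the study compares several extraction methods) defines SOS and EOS as the first and last days the seasonal curve exceeds a fixed fraction of its amplitude:

```python
import numpy as np

# Amplitude-threshold phenology extraction on a toy seasonal GPP curve.
# The 20% threshold and the sinusoidal season are illustrative assumptions.
doy = np.arange(1, 366)                                    # day of year
gpp = 8.0 * np.maximum(0.0, np.sin((doy - 90) / 180.0 * np.pi))

thresh = gpp.min() + 0.2 * (gpp.max() - gpp.min())         # 20% of amplitude
above = np.flatnonzero(gpp > thresh)
sos, eos = int(doy[above[0]]), int(doy[above[-1]])         # start / end of season
gsl = eos - sos                                            # growing season length
```

The same recipe applies to an NDVI time series; the divergence the study reports arises because GPP and NDVI curves decline at different rates in autumn, shifting the EOS crossing point.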

  10. 75 FR 4883 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-29

    ... self-regulatory organization (``SRO'') to immediately list and trade a new derivative securities..., in order for the Commission to maintain an accurate record of all new derivative securities products... Commission, within five business days after the commencement of trading a new derivative securities product...

  11. Systematic considerations for a multicomponent pharmacokinetic study of Epimedii wushanensis herba: From method establishment to pharmacokinetic marker selection.

    PubMed

    Wang, Caihong; Wu, Caisheng; Zhang, Jinlan; Jin, Ying

    2015-04-15

    Prenylflavonoids are major active components of Epimedii wushanensis herba (EWH). The global pharmacokinetics of prenylflavonoids are unclear, as these compounds yield multiple, often unidentified metabolites. This study successfully elucidated the pharmacokinetic profiles of EWH extract and five EWH-derived prenylflavonoid monomers in rats. The study was a comprehensive analysis of metabolic pathways and pharmacokinetic markers. Major plasma compounds identified after oral administration of EWH-derived prototypes or extract included: (1) prenylflavonoid prototypes, (2) deglycosylated products, and (3) glucuronide conjugates. To select appropriate EWH-derived pharmacokinetic markers, a high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was established to simultaneously monitor 14 major compounds in unhydrolyzed plasma and 10 potential pharmacokinetic markers in hydrolyzed plasma. The pharmacokinetic profiles indicated that the glucuronide conjugates of icaritin were the principal circulating metabolites and that total icaritin accounted for ∼99% of prenylflavonoid exposure after administration of EWH-derived materials to rats. To further investigate icaritin as a prospective pharmacokinetic marker, correlation analysis was performed between total icaritin and its glucuronide conjugates, and a strong correlation (r > 0.5) was found, indicating that total icaritin content accurately reflected changes in the exposure levels of the glucuronide conjugates over time. Therefore, icaritin is a sufficient pharmacokinetic marker for evaluating dynamic prenylflavonoid exposure levels. Next, a mathematical model was developed based on the prenylflavonoid content of EWH and the exposure levels in rats, using icaritin as the pharmacokinetic marker. This model accurately predicted exposure levels in vivo, with similar predicted vs. experimental area under the curve, AUC(0-96 h), values for total icaritin (24.1 vs. 32.0 mg/L h). 
Icaritin in hydrolyzed plasma can be used as a pharmacokinetic marker to reflect prenylflavonoid exposure levels, as well as the changes over time of its glucuronide conjugates. Crown Copyright © 2015. Published by Elsevier GmbH. All rights reserved.
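
    The AUC(0-96 h) values above rest on standard non-compartmental calculation; a toy trapezoidal-rule sketch with an invented concentration-time profile:

```python
import numpy as np

# AUC(0-t) by the linear trapezoidal rule from sparse sampling times.
# The concentration-time profile below is invented, not the study's data.
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 24.0])    # h
c = np.array([0.0, 1.2, 1.0, 0.7, 0.3, 0.05])    # mg/L (total icaritin-like)

auc = float(((c[1:] + c[:-1]) / 2.0 * np.diff(t)).sum())   # mg/L * h
```

Comparing such AUCs between hydrolyzed-plasma icaritin and its individual glucuronides is what underpins the marker-selection argument in the abstract.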

  12. Development and evaluation of consensus-based sediment effect concentrations for polychlorinated biphenyls

    USGS Publications Warehouse

    MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.

    2000-01-01

    Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically-based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. 
Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.

  13. Use of electronic health records to ascertain, validate and phenotype acute myocardial infarction: A systematic review and recommendations.

    PubMed

    Rubbo, Bruna; Fitzpatrick, Natalie K; Denaxas, Spiros; Daskalopoulou, Marina; Yu, Ning; Patel, Riyaz S; Hemingway, Harry

    2015-01-01

Electronic health records (EHRs) offer the opportunity to ascertain clinical outcomes at large scale and low cost, thus facilitating cohort studies, quality of care research and clinical trials. For acute myocardial infarction (AMI) the extent to which different EHR sources are accessible and accurate remains uncertain. Using MEDLINE and EMBASE we identified thirty-three studies, reporting a total of 128,658 patients, published between January 2000 and July 2014 that permitted assessment of the validity of AMI diagnosis drawn from EHR sources against a reference such as manual chart review. In contrast to clinical practice, only one study used EHR-derived markers of myocardial necrosis to identify possible AMI cases, none used electrocardiogram findings and one used symptoms in the form of free text combined with coded diagnosis. The remaining studies relied mostly on coded diagnosis. Thirty-one studies reported a positive predictive value (PPV) ≥ 70% between AMI diagnosis from both secondary care and primary care EHRs and the reference. Among fifteen studies reporting EHR-derived AMI phenotypes, three cross-referenced ST-segment elevation AMI diagnosis (PPV range 71-100%), two non-ST-segment elevation AMI (PPV 91.0% and 92.1%), three non-fatal AMI (PPV range 82-92.2%) and six fatal AMI (PPV range 64-91.7%). Clinical coding of EHR-derived AMI diagnosis in primary care and secondary care was found to be accurate in different clinical settings and for different phenotypes. However, markers of myocardial necrosis, ECG and symptoms, the cornerstones of a clinical diagnosis, are underutilised and remain a challenge to retrieve from EHRs. Copyright © 2015. Published by Elsevier Ireland Ltd.

  14. Further developments in orbit ephemeris derived neutral density

    NASA Astrophysics Data System (ADS)

    Locke, Travis

    There are a number of non-conservative forces acting on a satellite in low Earth orbit. The one which is the most dominant and also contains the most uncertainty is atmospheric drag. Atmospheric drag is directly proportional to atmospheric density, and the existing atmospheric density models do not accurately model the variations in atmospheric density. In this research, precision orbit ephemerides (POE) are used as input measurements in an optimal orbit determination scheme in order to estimate corrections to existing atmospheric density models. These estimated corrections improve the estimates of the drag experienced by a satellite and therefore provide an improvement in orbit determination and prediction as well as a better overall understanding of the Earth's upper atmosphere. The optimal orbit determination scheme used in this work includes using POE data as measurements in a sequential filter/smoother process using the Orbit Determination Tool Kit (ODTK) software. The POE derived density estimates are validated by comparing them with the densities derived from accelerometers on board the Challenging Minisatellite Payload (CHAMP) and the Gravity Recovery and Climate Experiment (GRACE). These accelerometer derived density data sets for both CHAMP and GRACE are available from Sean Bruinsma of the Centre National d'Etudes Spatiales (CNES). The trend in the variation of atmospheric density is compared quantitatively by calculating the cross correlation (CC) between the POE derived density values and the accelerometer derived density values while the magnitudes of the two data sets are compared by calculating the root mean square (RMS) values between the two. There are certain high frequency density variations that are observed in the accelerometer derived density data but not in the POE derived density data or any of the baseline density models. 
These high frequency density variations are typically small in magnitude compared to the overall day-night variation. However, during certain time periods, such as when the satellite is near the terminator, the variations are on the same order of magnitude as the diurnal variations. These variations can also be especially prevalent during geomagnetic storms and near the polar cusps. One of the goals of this work is to see what effect these unmodeled high frequency variations have on orbit propagation. In order to see this effect, the orbits of CHAMP and GRACE are propagated during certain time periods using different sources of density data as input measurements (accelerometer, POE, HASDM, and Jacchia 1971). The resulting orbit propagations are all compared to the propagation using the accelerometer derived density data which is used as truth. The RMS and the maximum difference between the different propagations are analyzed in order to see what effect the unmodeled density variations have on orbit propagation. These results are also binned by solar and geomagnetic activity level. The primary input into the orbit determination scheme used to produce the POE derived density estimates is a precision orbit ephemeris file. This file contains position and velocity information for the satellite based on GPS and SLR measurements. The values contained in these files are estimated values and therefore contain some level of error, typically thought to be around the 5-10 cm level. The other primary focus of this work is to evaluate the effect of adding different levels of noise (0.1 m, 0.5 m, 1 m, 10 m, and 100 m) to this raw ephemeris data file before it is input into the orbit determination scheme. The resulting POE derived density estimates for each level of noise are then compared with the accelerometer derived densities by computing the CC and RMS values between the data sets. These results are also binned by solar and geomagnetic activity level.
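The trend-versus-magnitude comparison described above (zero-lag cross correlation for the trend, RMS difference for the magnitude) can be sketched as follows; the density values are illustrative placeholders, not data from CHAMP or GRACE:

```python
import numpy as np

# Hypothetical density time series (kg/m^3): accelerometer-derived ("truth")
# and POE-derived estimates; the numbers are illustrative only.
rho_accel = np.array([3.1e-12, 3.4e-12, 3.9e-12, 3.5e-12, 3.0e-12])
rho_poe   = np.array([3.0e-12, 3.5e-12, 3.8e-12, 3.6e-12, 3.1e-12])

# Zero-lag cross correlation (Pearson coefficient): compares the *trend*
# of the two data sets independent of their absolute magnitudes.
cc = np.corrcoef(rho_accel, rho_poe)[0, 1]

# Root mean square of the differences: compares the *magnitudes*.
rms = np.sqrt(np.mean((rho_poe - rho_accel) ** 2))
```

A high CC with a large RMS would indicate the POE estimates track the variation correctly but are biased in amplitude, which is why the study reports both metrics.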

  15. RNA Thermodynamic Structural Entropy

    PubMed Central

    Garcia-Martin, Juan Antonio; Clote, Peter

    2015-01-01

    Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element, we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy, and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner’99 and Turner’04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. 
Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http://bioinformatics.bc.edu/clotelab/RNAentropy, including source code and ancillary programs. PMID:26555444
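The thermodynamic structural entropy computed here is the Shannon entropy of the Boltzmann ensemble of secondary structures. A minimal sketch of that definition on a toy ensemble of hypothetical structure free energies (RNAentropy itself evaluates this by dynamic programming over the Turner model, without enumerating structures):

```python
import math

# Boltzmann probability of structure s: p(s) = exp(-E(s)/RT) / Z, where
# Z is the partition function; entropy is S = -sum_s p(s) * ln p(s).
RT = 0.6163  # kcal/mol near 37 C (approximate value)

def structural_entropy(energies, rt=RT):
    weights = [math.exp(-e / rt) for e in energies]
    z = sum(weights)                     # partition function
    probs = [w / z for w in weights]
    return -sum(p * math.log(p) for p in probs)

# A uniform ensemble of n equal-energy structures attains the maximum
# entropy ln(n); a strongly dominated ensemble has entropy near zero.
uniform = structural_entropy([1.0, 1.0, 1.0, 1.0])
```

The toy energies here are hypothetical; only the formula is taken from the standard ensemble definition.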

  16. RNA Thermodynamic Structural Entropy.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter

    2015-01-01

    Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element, we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy, and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner'99 and Turner'04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. 
Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http://bioinformatics.bc.edu/clotelab/RNAentropy, including source code and ancillary programs.

  17. Simultaneous Determination of Acetaminophen and Synthetic Color(s) by Derivative Spectroscopy in Syrup Formulations and Validation by HPLC: Exposure Risk of Colors to Children.

    PubMed

    Rastogi, Shanya Das; Dixit, Sumita; Tripathi, Anurag; Das, Mukul

    2015-06-01

Color additives are used as excipients in pediatric syrup formulations; although coloring is not a prerequisite, these formulations are normally colored. An attempt has been made to measure simultaneously the single drug, acetaminophen (AT), along with the colors carmoisine (CA), erythrosine (ET), and sunset yellow FCF (SSY) added to it, by three derivative spectroscopy methods, namely 1st order, ratio, and differential derivative methods. Moreover, an exposure assessment was made for the colors added as excipients, because some colors have been reported to cause allergic reactions and hypersensitivity in children. The present methods provide simple, accurate, and reproducible quantitative determination of the drug AT along with the color in synthetic mixtures and commercial drug formulations without any interference. The limit of detection varied from 0.0001-0.31 μg/ml while the limit of quantification ranged from 0.002-1.04 μg/ml in all three methods. The calibration curves of all three derivative methods exhibited good linear relationships with excellent regression coefficients (0.9986-1.000). Both intra-day and inter-day precisions showed %RSD values less than 2% while the percentage recovery was found to be between 96.8-103.8%. The sensitivity of the proposed methods is almost comparable to HPLC, and thus they can be used for routine simultaneous determination of the drug AT and color in pharmaceutical formulations. The present methods also showed that colors like SSY and ET can saturate more than 50% of the acceptable daily intake (ADI) value, which is alarming and should be considered for regulatory modification to safeguard the health of children.

  18. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Kim, J. G.; Liu, H.

    2007-10-01

    Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it was not well recognized among researchers that small differences in extinction coefficients could cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate errors of haemoglobin derivatives caused by the variation of haemoglobin extinction coefficients. To prove our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. The gas intervention of pure oxygen was given in the solution to examine the oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation has shown that even a small variation (0.01 cm-1 mM-1) in extinction coefficients can produce appreciable relative errors in quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize corresponding animal's haemoglobin extinction coefficients for the animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.
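The quantification step this error analysis targets is the standard two-wavelength Beer-Lambert inversion. A minimal sketch, with illustrative extinction coefficients and absorbance changes rather than the paper's values, showing how a 0.01 cm⁻¹ mM⁻¹ perturbation in one coefficient propagates into the recovered concentrations:

```python
import numpy as np

# Absorbance changes dA at two wavelengths relate to concentration
# changes by dA = L * E @ [d[HbO2], d[Hb]], where E holds the extinction
# coefficients (cm^-1 mM^-1) and L is the effective path length (cm).
# All numbers below are illustrative placeholders.
E = np.array([[1.05, 0.30],    # wavelength 1: eps_HbO2, eps_Hb
              [0.78, 1.10]])   # wavelength 2
L = 1.0
dA = np.array([0.12, 0.20])

conc = np.linalg.solve(L * E, dA)          # [d[HbO2], d[Hb]]

# Perturb a single extinction coefficient by 0.01 cm^-1 mM^-1 and
# re-solve: even this small variation shifts the recovered values.
E_pert = E.copy()
E_pert[0, 0] += 0.01
conc_pert = np.linalg.solve(L * E_pert, dA)
rel_err = np.abs(conc_pert - conc) / np.abs(conc)
```

Since d[Hbtotal] = d[HbO2] + d[Hb], errors in the two components can partially cancel, consistent with the observation that the total is not always the worst-affected quantity.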

  19. Rapid Identification of Flavonoid Constituents Directly from PTP1B Inhibitive Extract of Raspberry (Rubus idaeus L.) Leaves by HPLC–ESI–QTOF–MS-MS

    PubMed Central

    Li, Zhuan-Hong; Guo, Han; Xu, Wen-Bin; Ge, Juan; Li, Xin; Alimu, Mireguli; He, Da-Jun

    2016-01-01

Many potential health benefits of raspberry (Rubus idaeus L.) leaves have been attributed to polyphenolic compounds, especially flavonoids. In this study, the methanol extract of R. idaeus leaves showed significant protein tyrosine phosphatase-1B (PTP1B) inhibitory activity with an IC50 value of 3.41 ± 0.01 µg mL−1. Meanwhile, a rapid and reliable method, employing high-performance liquid chromatography coupled with electrospray ionization quadrupole time-of-flight tandem mass spectrometry, was established for structure identification of flavonoids from the PTP1B inhibitive extract of R. idaeus leaves using accurate mass measurement and characteristic fragmentation patterns. A total of 16 flavonoids, including 4 quercetin derivatives, 2 luteolin derivatives, 8 kaempferol derivatives and 2 isorhamnetin derivatives, were identified. Compounds 3 and 4, Compounds 6 and 7, and Compounds 15 and 16 were isomers with different aglycones and different saccharides. Compounds 8, 9 and 10 were isomers with the same aglycone and the same saccharide but different substituent positions. Compounds 11 and 12 were isomers with the same aglycone but different saccharides. Compounds 2, 8, 9 and 10 possessed the same substituent saccharide of glycuronic acid. Most of them were reported in R. idaeus for the first time. PMID:26896347

  20. A one-step method for modelling longitudinal data with differential equations.

    PubMed

    Hu, Yueqin; Treinen, Raymond

    2018-04-06

    Differential equation models are frequently used to describe non-linear trajectories of longitudinal data. This study proposes a new approach to estimate the parameters in differential equation models. Instead of estimating derivatives from the observed data first and then fitting a differential equation to the derivatives, our new approach directly fits the analytic solution of a differential equation to the observed data, and therefore simplifies the procedure and avoids bias from derivative estimations. A simulation study indicates that the analytic solutions of differential equations (ASDE) approach obtains unbiased estimates of parameters and their standard errors. Compared with other approaches that estimate derivatives first, ASDE has smaller standard error, larger statistical power and accurate Type I error. Although ASDE obtains biased estimation when the system has sudden phase change, the bias is not serious and a solution is also provided to solve the phase problem. The ASDE method is illustrated and applied to a two-week study on consumers' shopping behaviour after a sale promotion, and to a set of public data tracking participants' grammatical facial expression in sign language. R codes for ASDE, recommendations for sample size and starting values are provided. Limitations and several possible expansions of ASDE are also discussed. © 2018 The British Psychological Society.
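The core idea of ASDE, fitting the analytic solution directly rather than regressing on numerically estimated derivatives, can be illustrated on the simplest case, dx/dt = -kx with solution x(t) = x0·exp(-kt). The data below are simulated, not from the study, and the fitting routine is a generic least-squares solver rather than the authors' R code:

```python
import numpy as np
from scipy.optimize import curve_fit

# Analytic solution of dx/dt = -k*x; fitting this to the raw
# observations avoids the bias introduced by derivative estimation.
def solution(t, x0, k):
    return x0 * np.exp(-k * t)

# Simulate noisy longitudinal observations with true x0=2.0, k=0.7.
t = np.linspace(0, 5, 50)
rng = np.random.default_rng(0)
x_obs = solution(t, 2.0, 0.7) + rng.normal(0, 0.01, t.size)

# One-step estimation: least-squares fit of the solution to the data.
(x0_hat, k_hat), _ = curve_fit(solution, t, x_obs, p0=(1.0, 1.0))
```

For systems whose analytic solution is unavailable in closed form, the same one-step idea applies with a numerical ODE solver inside the objective function.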

  1. Baseline and target values for regional and point PV power forecasts: Toward improved solar forecasting

    DOE PAGES

    Zhang, Jie; Hodge, Bri -Mathias; Lu, Siyuan; ...

    2015-11-10

Accurate solar photovoltaic (PV) power forecasting allows utilities to reliably utilize solar resources on their systems. However, to truly measure the improvements that any new solar forecasting methods provide, it is important to develop a methodology for determining baseline and target values for the accuracy of solar forecasting at different spatial and temporal scales. This paper aims at developing a framework to derive baseline and target values for a suite of generally applicable, value-based, and custom-designed solar forecasting metrics. The work was informed by close collaboration with utility and independent system operator partners. The baseline values are established based on state-of-the-art numerical weather prediction models and persistence models in combination with a radiative transfer model. The target values are determined based on the reduction in the amount of reserves that must be held to accommodate the uncertainty of PV power output. The proposed reserve-based methodology is a reasonable and practical approach that can be used to assess the economic benefits gained from improvements in accuracy of solar forecasting. Lastly, the financial baseline and targets can be translated back to forecasting accuracy metrics and requirements, which will guide research on solar forecasting improvements toward the areas that are most beneficial to power systems operations.

  2. Simultaneous determination of mebeverine hydrochloride and chlordiazepoxide in their binary mixture using novel univariate spectrophotometric methods via different manipulation pathways.

    PubMed

    Lotfy, Hayam M; Fayez, Yasmin M; Michael, Adel M; Nessim, Christine K

    2016-02-15

    Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps via different manipulating pathways. These pathways were applied either on zero order absorption spectra namely, absorbance subtraction (AS) or based on the recovered zero order absorption spectra via a decoding technique namely, derivative transformation (DT) or via ratio spectra namely, ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied on the ratio spectra are namely, ratio difference (RD) and amplitude modulation (AM) methods or applying a derivative to these ratio spectra namely, derivative ratio (DD(1)) or second derivative (D(2)). Finally, the pathway based on the ratio spectra of derivative spectra is namely, derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing the laboratory mixtures and was successfully applied for their combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28μg/mL for mebeverine hydrochloride and 1-12μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student t-test, F-test, and one way ANOVA, showing no significant difference with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Simultaneous determination of mebeverine hydrochloride and chlordiazepoxide in their binary mixture using novel univariate spectrophotometric methods via different manipulation pathways

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Fayez, Yasmin M.; Michael, Adel M.; Nessim, Christine K.

    2016-02-01

    Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps via different manipulating pathways. These pathways were applied either on zero order absorption spectra namely, absorbance subtraction (AS) or based on the recovered zero order absorption spectra via a decoding technique namely, derivative transformation (DT) or via ratio spectra namely, ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied on the ratio spectra are namely, ratio difference (RD) and amplitude modulation (AM) methods or applying a derivative to these ratio spectra namely, derivative ratio (DD1) or second derivative (D2). Finally, the pathway based on the ratio spectra of derivative spectra is namely, derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing the laboratory mixtures and was successfully applied for their combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28 μg/mL for mebeverine hydrochloride and 1-12 μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student t-test, F-test, and one way ANOVA, showing no significant difference with respect to accuracy and precision.

  4. Chlorophyll-a retrieval in the Philippine waters

    NASA Astrophysics Data System (ADS)

    Perez, G. J. P.; Leonardo, E. M.; Felix, M. J.

    2017-12-01

    Satellite-based monitoring of chlorophyll-a (Chl-a) concentration has been widely used for estimating plankton biomass, detecting harmful algal blooms, predicting pelagic fish abundance, and water quality assessment. Chl-a concentrations at 1 km spatial resolution can be retrieved from MODIS onboard Aqua and Terra satellites. However, with this resolution, MODIS has scarce Chl-a retrieval in coastal and inland waters, which are relevant for archipelagic countries such as the Philippines. These gaps on Chl-a retrieval can be filled by sensors with higher spatial resolution, such as the OLI of Landsat 8. In this study, assessment of Chl-a concentration derived from MODIS/Aqua and OLI/Landsat 8 imageries across the open, coastal and inland waters of the Philippines was done. Validation activities were conducted at eight different sites around the Philippines for the period October 2016 to April 2017. Water samples filtered on the field were processed in the laboratory for Chl-a extraction. In situ remote sensing reflectance was derived from radiometric measurements and ancillary information, such as bathymetry and turbidity, were also measured. Correlation between in situ and satellite-derived Chl-a concentration using the blue-green ratio yielded relatively high R2 values of 0.51 to 0.90. This is despite an observed overestimation for both MODIS and OLI-derived values, especially in turbid and coastal waters. The overestimation of Chl-a may be attributed to inaccuracies in i) remote sensing reflectance (Rrs) retrieval and/or ii) empirical model used in calculating Chl-a concentration. However, a good 1:1 correspondence between the satellite and in situ maximum Rrs band ratio was established. This implies that the overestimation is largely due to the inaccuracies from the default coefficients used in the empirical model. New coefficients were then derived from the correlation analysis of both in situ-measured Chl-a concentration and maximum Rrs band ratio. 
This results in a significant improvement in the calculated RMSE of satellite-derived Chl-a values. Meanwhile, it was observed that the blue-green band ratio has low Chl-a predictive capability in turbid waters. A more accurate estimation was found using the NIR and red band ratios for turbid waters with covarying Chl-a concentration and low sediment load.
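An OC-style empirical band-ratio algorithm of the kind recalibrated in this study expresses log10(Chl-a) as a polynomial in the log10 of the maximum blue-to-green Rrs ratio. A sketch with placeholder coefficients, not the new coefficients derived in this work:

```python
import math

# Hypothetical polynomial coefficients a0..a3; operational algorithms
# (e.g., NASA's OC3/OC4 family) publish sensor-specific values.
COEFFS = [0.25, -2.4, 1.6, -0.5]

def chl_a(rrs_blue_max, rrs_green):
    """Empirical Chl-a (mg/m^3) from the maximum blue/green Rrs ratio."""
    x = math.log10(rrs_blue_max / rrs_green)
    log_chl = sum(a * x**i for i, a in enumerate(COEFFS))
    return 10.0 ** log_chl
```

Refitting COEFFS against matched in situ Chl-a and Rrs ratios, as done here, corrects the systematic overestimation without changing the model form.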

  5. 75 FR 16528 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-01

    ... self-regulatory organization (``SRO'') to immediately list and trade a new derivative securities..., in order for the Commission to maintain an accurate record of all new derivative securities products... Commission, within five business days after the commencement of trading a new derivative securities product...

  6. 77 FR 61044 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing and Order...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... derivatives markets, promoting the prompt and accurate clearance of transactions and protecting investors and... Proposed Rule Change To Comply With Revisions to CFTC Regulations Governing Derivatives Clearing... Trading Commission (``CFTC'') Regulations governing derivatives clearing organizations (``DCOs''). The...

  7. Oblique scattering from radially inhomogeneous dielectric cylinders: An exact Volterra integral equation formulation

    NASA Astrophysics Data System (ADS)

    Tsalamengas, John L.

    2018-07-01

    We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem in the form of a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies ably demonstrate the efficiency and high accuracy of the algorithms.
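For a linear Volterra equation of the second kind, u(x) = f(x) + ∫₀ˣ K(x,t) u(t) dt, a basic Nyström discretization reduces to forward substitution on the grid, since the integral at each node involves only earlier nodes. The sketch below uses a simple trapezoidal rule to show the method class only; the paper employs a more sophisticated Nyström variant:

```python
import numpy as np

def solve_volterra(f, K, x_max, n):
    """Trapezoidal-rule Nystrom solver for u = f + int_0^x K(x,t)u(t)dt."""
    x = np.linspace(0.0, x_max, n + 1)
    h = x_max / n
    u = np.empty(n + 1)
    u[0] = f(x[0])                       # the integral vanishes at x=0
    for i in range(1, n + 1):
        # trapezoid weights: h/2 at the endpoints, h in the interior;
        # the unknown u[i] appears with weight h/2 and is solved for.
        s = 0.5 * h * K(x[i], x[0]) * u[0]
        s += h * sum(K(x[i], x[j]) * u[j] for j in range(1, i))
        u[i] = (f(x[i]) + s) / (1.0 - 0.5 * h * K(x[i], x[i]))
    return x, u

# Check against a case with a known solution: f = 1, K = 1 gives
# u' = u, u(0) = 1, i.e. u(x) = exp(x).
x, u = solve_volterra(lambda x: 1.0, lambda x, t: 1.0, 1.0, 200)
```

Higher-order quadrature in place of the trapezoid rule yields the machine-precision accuracy reported in the paper for smooth kernels.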

  8. A Critical Evaluation of the Thermophysical Properties of Mercury

    NASA Astrophysics Data System (ADS)

    Holman, G. J. F.; ten Seldam, C. A.

    1994-09-01

For the use of a mercury column for precise pressure measurements—such as the pressurized 30 meter mercury-in-steel column used at the Van der Waals-Zeeman Laboratory for the calibration of piston gauges up to nearly 300 MPa—it is highly important to have accurate knowledge of such properties of mercury as density, isobaric secant and tangent volume thermal expansion coefficients, and isothermal secant and tangent compressibilities as functions of temperature and pressure. In this paper we present a critical assessment of the available information on these properties. Recommended values are given for the properties mentioned and, in addition, for properties derived from these, such as entropy, enthalpy, internal energy, and the specific heat capacities.

  9. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.

  10. Theory of warm ionized gases: equation of state and kinetic Schottky anomaly.

    PubMed

    Capolupo, A; Giampaolo, S M; Illuminati, F

    2013-10-01

    Based on accurate Lennard-Jones-type interaction potentials, we derive a closed set of state equations for the description of warm atomic gases in the presence of ionization processes. The specific heat is predicted to exhibit peaks in correspondence to single and multiple ionizations. Such kinetic analog in atomic gases of the Schottky anomaly in solids is enhanced at intermediate and low atomic densities. The case of adiabatic compression of noble gases is analyzed in detail and the implications on sonoluminescence are discussed. In particular, the predicted plasma electron density in a sonoluminescent bubble turns out to be in good agreement with the value measured in recent experiments.

  11. Applicability of the site fundamental frequency as a VS30 proxy for Central and Eastern North America

    NASA Astrophysics Data System (ADS)

    Hassani, B.; Atkinson, G. M.

    2015-12-01

One of the most important issues in developing accurate ground-motion prediction equations (GMPEs) is the effective use of limited regional site information in developing a site effects model. In modern empirical GMPE models site effects are usually characterized by simplified parameters that describe the overall near-surface effects on input ground-motion shaking. The most common site effects parameter is the time-averaged shear-wave velocity in the upper 30 m (VS30), which has been used in the Next Generation Attenuation-West (NGA-West) and NGA-East GMPEs, and is widely used in building code applications. For the NGA-East GMPE database, only 6% of the stations have measured VS30 values, while the rest have proxy-based VS30 values. Proxy-based VS30 values are derived from a weighted average of different proxies' estimates, such as the topographic slope and surface geology proxies. For the proxy-based approaches, the uncertainty in the estimation of VS30 is significantly higher (~0.25 log10 units) than that for stations with measured VS30 (0.04 log10 units); this translates into error in site amplification and hence increased ground motion variability. We introduce a new VS30 proxy as a function of the site fundamental frequency (fpeak) using the NGA-East database, and show that fpeak is a particularly effective proxy for sites in central and eastern North America. We first use horizontal to vertical spectral ratios (H/V) of 5%-damped pseudo spectral acceleration (PSA) to find the fpeak values for the recording stations. We develop an fpeak-based VS30 proxy by correlating the measured VS30 values with the corresponding fpeak values. The uncertainty of the VS30 estimate using the fpeak-based model is much lower (0.14 log10 units) than that for the proxy-based methods used in the NGA-East database (0.25 log10 units).
The results of this study can be used to recalculate the VS30 values more accurately for stations with known fpeak values (23% of the stations), and potentially reduce the overall variability of the developed NGA-East GMPE models.
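As a minimal sketch of the correlation step described above (not the paper's actual coefficients or data; the relation, noise level, and sample here are invented for illustration), a log-linear fpeak-based VS30 proxy can be fit by least squares and its residual scatter reported in log10 units:

```python
import numpy as np

# Hypothetical proxy model: log10(VS30) = c0 + c1 * log10(fpeak).
# Synthetic "measured" stations; the 0.14 log10-unit scatter mimics the
# proxy uncertainty quoted in the abstract.
rng = np.random.default_rng(0)
fpeak = 10 ** rng.uniform(-0.5, 1.2, 200)            # site fundamental frequency, Hz
log_vs30 = 2.3 + 0.3 * np.log10(fpeak) + rng.normal(0, 0.14, 200)

# Least-squares fit of the log-linear proxy
A = np.column_stack([np.ones_like(fpeak), np.log10(fpeak)])
coef, *_ = np.linalg.lstsq(A, log_vs30, rcond=None)

residuals = log_vs30 - A @ coef
sigma = residuals.std(ddof=2)                        # proxy uncertainty, log10 units

def vs30_from_fpeak(f):
    """Estimate VS30 (m/s) from site fundamental frequency (Hz)."""
    return 10 ** (coef[0] + coef[1] * np.log10(f))
```

The fitted `sigma` is the quantity compared against the 0.25 log10-unit uncertainty of the slope/geology proxies.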

  12. SU-E-CAMPUS-I-05: Internal Dosimetric Calculations for Several Imaging Radiopharmaceuticals in Preclinical Studies and Quantitative Assessment of the Mouse Size Impact On Them. Realistic Monte Carlo Simulations Based On the 4D-MOBY Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostou, T; Papadimitroulas, P; Kagadis, GC

    2014-06-15

Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied in two different whole-body mouse models, with varying organ size, so as to determine their impact on absorbed doses and S-values. The influence of organ mass was evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed on GATE to determine the dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five radionuclides commonly used in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physical model in Geant4. Comparisons of the dosimetric calculations on the two different phantoms for each radiopharmaceutical are presented. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models introduced a difference of 0.69% in their brain masses, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target organs were calculated for each isotope. The source organ was selected to be the whole mouse body. Differences in the S-factors were observed in the 6.0–30.0% range. Tables with all the calculations were developed as reference dosimetric data. Conclusion: Accurate dose per organ and the most appropriate S-values are derived for specific preclinical studies. 
The impact of the mouse model size is rather high (up to 30% for a 17.65% difference in total mass), and thus accurate definition of organ mass is a crucial parameter for self-absorbed S-value calculations. Our goal is to extend the study toward accurate estimations in small-animal imaging, since it is known that there is large variability in organ anatomy.
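The S-value quantity computed above follows the MIRD definition: energy deposited in a target organ per decay in the source region, divided by the target mass. A minimal sketch of that arithmetic (with invented emission data; real absorbed fractions come from Monte Carlo transport, e.g. in GATE):

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

def s_value(emissions, absorbed_fractions, target_mass_kg):
    """MIRD-style S-value in Gy per decay (J/kg per decay).

    emissions: list of (yield_per_decay, energy_MeV) tuples
    absorbed_fractions: fraction of each emission's energy deposited
        in the target (from Monte Carlo simulation in practice)
    """
    energy_j = sum(y * e * MEV_TO_J * phi
                   for (y, e), phi in zip(emissions, absorbed_fractions))
    return energy_j / target_mass_kg

# Hypothetical 99mTc-like 140.5 keV photon irradiating a 0.4 g organ,
# with a made-up 5% absorbed fraction:
s = s_value([(0.89, 0.1405)], [0.05], 0.4e-3)
```

The abstract's point that S-values scale inversely with organ mass is immediate from the final division, which is why an 18% mass difference propagates strongly into self-absorbed S-values.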

  13. An experimental investigation of wall-interference effects for parachutes in closed wind tunnels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macha, J.M.; Buffington, R.J.

    1989-09-01

A set of 6-ft-diameter ribbon parachutes (geometric porosities of 7%, 15%, and 30%) was tested in various subsonic wind tunnels covering a range of geometric blockages from 2% to 35%. Drag, base pressure, and inflated geometry were measured under full-open, steady-flow conditions. The resulting drag areas and pressure coefficients were correlated with the bluff-body blockage parameter (i.e., drag area divided by tunnel cross-sectional area) according to the blockage theory of Maskell. The data show that the Maskell theory provides a simple, accurate correction for the effective increase in dynamic pressure caused by wall constraint for both single parachutes and clusters. For single parachutes, the empirically derived blockage factor KM has the value of 1.85, independent of canopy porosity. Derived values of KM for two- and three-parachute clusters are 1.35 and 1.59, respectively. Based on the photometric data, there was no deformation of the inflated shape of the single parachutes up to a geometric blockage of 22%. In the case of the three-parachute cluster, decreases in both the inflated diameter and the spacing among member parachutes were observed at a geometric blockage of 35%. 11 refs., 9 figs., 3 tabs.
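A Maskell-type blockage correction of the form described above can be sketched as follows. The assumed functional form (corrected dynamic pressure scaled by 1 + KM times the blockage parameter) is our illustrative reading of the theory, not a reproduction of the paper's equations:

```python
def maskell_corrected_drag_area(cds_measured, tunnel_area, k_m=1.85):
    """Correct a measured drag area for wall constraint (Maskell-type).

    Assumes the effective dynamic pressure in the tunnel is
        q_corr = q * (1 + k_m * CDS / C),
    with CDS the measured drag area and C the tunnel cross-section, so the
    free-air drag area is the measured value divided by that factor.
    k_m = 1.85 is the single-parachute value reported in the abstract.
    """
    blockage = cds_measured / tunnel_area   # bluff-body blockage parameter
    return cds_measured / (1.0 + k_m * blockage)

# A parachute with 10 sq ft measured drag area in a 100 sq ft test section:
cds_free_air = maskell_corrected_drag_area(10.0, 100.0)
```

For clusters, `k_m` would be swapped for the 1.35 (two-canopy) or 1.59 (three-canopy) values given above.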

  14. Diagnostic accuracy of urinary creatinine concentration in the estimation of differential renal function in patients with obstructive uropathy.

    PubMed

    Al-Hunayan, A; Al-Ateeqi, A; Kehinde, E O; Thalib, L; Loutfi, I; Mojiminiyi, O A

    2008-01-01

To determine the diagnostic accuracy of spot urine creatinine concentration (UCC) as a new test for the evaluation of differential renal function in obstructed kidneys (DRF(ok)) drained by percutaneous nephrostomy tube (PCNT). In patients with obstructed kidneys drained by PCNT, DRF(ok) was derived from UCC by comparing the value of UCC in the obstructed kidney to the value in the contralateral kidney, and was also derived from dimercaptosuccinic acid (DMSA) renal scans and creatinine clearance (CCr) using standard methods. Subsequently, the results of UCC were compared to the results of DMSA and CCr. 61 patients were enrolled. Bland-Altman plots to compare DMSA and UCC showed that the upper limit of agreement was 14.8% (95% CI 10.7-18.5) and the lower limit was -19.9% (95% CI -23.8 to -16.1). The sensitivity and specificity of detecting DMSA DRF(ok) < or = 35% using UCC were 85.2 and 91.2%, respectively. When UCC was compared to CCr, Bland-Altman tests gave an upper limit of agreement of 10.4% (95% CI 7.9-12.8) and a lower limit of agreement of -11.3% (95% CI -13.8 to -8.9). UCC is accurate in the estimation of DRF(ok) drained by PCNT. 2008 S. Karger AG, Basel
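A minimal sketch of the two computations compared above, assuming DRF(ok) is taken as a simple two-kidney creatinine ratio (the paper's exact derivation may differ) and using the standard Bland-Altman limits of agreement:

```python
import statistics

def drf_from_ucc(ucc_obstructed, ucc_contralateral):
    """Differential renal function (%) of the obstructed kidney from spot
    urine creatinine concentrations, as a simple two-kidney ratio.
    (Hypothetical formulation for illustration.)"""
    return 100.0 * ucc_obstructed / (ucc_obstructed + ucc_contralateral)

def bland_altman_limits(method_a, method_b):
    """Mean difference and 95% limits of agreement between paired methods,
    i.e. mean(diff) +/- 1.96 * sd(diff)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```

The reported upper/lower limits of agreement (e.g. 14.8% and -19.9% for DMSA vs UCC) are exactly the two outer values returned by `bland_altman_limits`.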

  15. Selection of Reference Genes for Quantitative Gene Expression in Porcine Mesenchymal Stem Cells Derived from Various Sources along with Differentiation into Multilineages

    PubMed Central

    Lee, Won-Jae; Jeon, Ryoung-Hoon; Jang, Si-Jung; Park, Ji-Sung; Lee, Seung-Chan; Baregundi Subbarao, Raghavendra; Lee, Sung-Lim; Park, Bong-Wook; King, William Allan; Rho, Gyu-Jin

    2015-01-01

The identification of stable reference genes is a prerequisite for ensuring accurate validation of gene expression, yet too little is known about stable reference genes of porcine MSCs. The present study was, therefore, conducted to assess the stability of reference genes in porcine MSCs derived from bone marrow (BMSCs), adipose (AMSCs), and skin (SMSCs), together with their in vitro differentiated cells of mesenchymal lineages such as adipocytes, osteocytes, and chondrocytes. Twelve commonly used reference genes were investigated for their threshold cycle (Ct) values by qRT-PCR. The Ct values of candidate reference genes were analyzed with geNorm software to identify genes stably expressed regardless of experimental conditions. Then, Pearson's correlation was applied to determine the correlation between the three most stable reference genes (NF3) and the optimal number of reference genes (NFopt). In the geNorm assessment of reference gene stability across experimental conditions, undifferentiated MSCs and each differentiation status into mesenchymal lineages showed slightly different results but similar patterns in the stability rankings. Furthermore, Pearson's correlation revealed a high correlation (r > 0.9) between NF3 and NFopt. Overall, the present study showed that HMBS, YWHAZ, SDHA, and TBP are suitable reference genes for qRT-PCR in porcine MSCs. PMID:25972899
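The geNorm stability measure used above can be sketched as follows: for each candidate gene, M is the average standard deviation of its pairwise log2 expression ratios against every other candidate, with lower M meaning more stable. This is a simplified reading of the published algorithm, assuming ideal amplification efficiency by default:

```python
import math

def genorm_m_values(ct, efficiency=2.0):
    """geNorm-style stability values M for candidate reference genes.

    ct: dict mapping gene -> list of Ct values over the same samples.
    Expression is taken as efficiency**(-Ct); M for gene j is the mean,
    over all other genes k, of the stdev of log2(expr_j / expr_k)
    across samples. Lower M = more stably expressed.
    """
    genes = list(ct)
    # log2 expression: log2(E**-Ct) = -Ct * log2(E)
    log_expr = {g: [-c * math.log2(efficiency) for c in ct[g]] for g in genes}

    def stdev(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

    m_values = {}
    for j in genes:
        sds = [stdev([a - b for a, b in zip(log_expr[j], log_expr[k])])
               for k in genes if k != j]
        m_values[j] = sum(sds) / len(sds)
    return m_values
```

Two genes whose Ct values rise and fall together across conditions get a low M, while a gene that fluctuates independently is penalized.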

  16. Theoretical study of stability and charge-transport properties of coronene molecule and some of its halogenated derivatives: A path to ambipolar organic-based materials?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sancho-García, J. C., E-mail: jc.sancho@ua.es; Pérez-Jiménez, A. J., E-mail: aj.perez@ua.es

    2014-10-07

We have carefully investigated the structural and electronic properties of coronene and some of its fluorinated and chlorinated derivatives, including full periphery substitution, as well as the preferred orientation of the non-covalent dimer structures subsequently formed. We have paid particular attention to a set of methodological details, first to obtain single-molecule magnitudes as accurately as possible, and next to tackle the corresponding non-covalently bound dimers with modern dispersion-corrected methods. Generally speaking, this class of compounds is expected to self-assemble in neighboring π-stacks with dimer stabilization energies ranging from –20 to –30 kcal mol−1 at close distances around 3.0–3.3 Å. Then, in a further step, we have also calculated hole and electron transfer rates of some suitable candidates for ambipolar materials, and the corresponding charge mobility values, which are known to depend critically on the supramolecular organization of the samples. For coronene and per-fluorinated coronene, we have found high values for their hopping rates, although slightly smaller for the latter due to an increase (decrease) of the reorganization energies (electronic couplings).
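The qualitative statement above, that hopping rates grow with the electronic coupling and shrink as the reorganization energy grows, is captured by the standard semiclassical Marcus rate expression. A sketch (for zero driving force; we are assuming the authors use a Marcus-type hopping picture, and the parameter values below are invented):

```python
import math

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
KB = 8.617333262e-5      # Boltzmann constant, eV/K

def marcus_hopping_rate(coupling_ev, reorg_ev, temp_k=300.0):
    """Semiclassical Marcus charge-hopping rate (1/s), zero driving force:
        k = (2*pi/hbar) * H^2 * (4*pi*lambda*kB*T)**-0.5
            * exp(-lambda / (4*kB*T))
    H = electronic coupling, lambda = reorganization energy (both in eV).
    """
    kbt = KB * temp_k
    prefactor = (2.0 * math.pi / HBAR) * coupling_ev ** 2
    return (prefactor * (4.0 * math.pi * reorg_ev * kbt) ** -0.5
            * math.exp(-reorg_ev / (4.0 * kbt)))
```

Increasing the reorganization energy (as reported for the per-fluorinated derivative) lowers the rate through both the exponential and the prefactor, while a smaller coupling lowers it quadratically.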

  17. A Map/INS/Wi-Fi Integrated System for Indoor Location-Based Service Applications

    PubMed Central

    Yu, Chunyang; Lan, Haiyu; Gu, Fuqiang; Yu, Fei; El-Sheimy, Naser

    2017-01-01

In this research, a new Map/INS/Wi-Fi integrated system for indoor location-based service (LBS) applications based on a cascaded Particle/Kalman filter framework is proposed. Two-dimensional indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values, is integrated for estimating positioning information. The main challenge of this research is how to make effective use of various measurements that complement each other in order to obtain an accurate, continuous, and low-cost position solution without increasing the computational burden of the system. Therefore, to eliminate the cumulative drift caused by low-cost IMU sensor errors, the ubiquitous Wi-Fi signal and non-holonomic constraints are used to correct the IMU-derived navigation solution through an extended Kalman filter (EKF). Moreover, the map-aiding method and map-matching method are combined to constrain the primary Wi-Fi/IMU-derived position through an Auxiliary Value Particle Filter (AVPF). Different sources of information are incorporated through a cascaded EKF/AVPF filter algorithm. Indoor tests show that the proposed method can effectively reduce the accumulation of positioning errors of a stand-alone Inertial Navigation System (INS), and provide a stable, continuous and reliable indoor location service. PMID:28574471
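The core drift-correction idea above can be illustrated with a one-dimensional toy Kalman filter: an IMU-propagated state accumulates error, and a Wi-Fi-derived position fix pulls it back. This is a deliberately minimal sketch with invented noise values, not the paper's cascaded EKF/AVPF with map constraints:

```python
import numpy as np

def kf_step(x, P, accel, z_wifi, dt=0.1, q=0.5, r=4.0):
    """One predict/update cycle for state x = [position, velocity].

    accel: IMU acceleration used in the prediction step.
    z_wifi: Wi-Fi-derived position measurement used in the update step.
    q, r: made-up process and measurement noise levels.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    B = np.array([0.5 * dt**2, dt])         # acceleration input
    H = np.array([[1.0, 0.0]])              # Wi-Fi observes position only

    # Predict with the IMU measurement
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)

    # Correct with the Wi-Fi position fix
    y = z_wifi - H @ x                      # innovation
    S = H @ P @ H.T + r                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Repeated updates shrink an initial position error toward the Wi-Fi fix, which is the mechanism by which the integrated system bounds stand-alone INS drift.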

  18. A Map/INS/Wi-Fi Integrated System for Indoor Location-Based Service Applications.

    PubMed

    Yu, Chunyang; Lan, Haiyu; Gu, Fuqiang; Yu, Fei; El-Sheimy, Naser

    2017-06-02

In this research, a new Map/INS/Wi-Fi integrated system for indoor location-based service (LBS) applications based on a cascaded Particle/Kalman filter framework is proposed. Two-dimensional indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values, is integrated for estimating positioning information. The main challenge of this research is how to make effective use of various measurements that complement each other in order to obtain an accurate, continuous, and low-cost position solution without increasing the computational burden of the system. Therefore, to eliminate the cumulative drift caused by low-cost IMU sensor errors, the ubiquitous Wi-Fi signal and non-holonomic constraints are used to correct the IMU-derived navigation solution through an extended Kalman filter (EKF). Moreover, the map-aiding method and map-matching method are combined to constrain the primary Wi-Fi/IMU-derived position through an Auxiliary Value Particle Filter (AVPF). Different sources of information are incorporated through a cascaded EKF/AVPF filter algorithm. Indoor tests show that the proposed method can effectively reduce the accumulation of positioning errors of a stand-alone Inertial Navigation System (INS), and provide a stable, continuous and reliable indoor location service.

  19. Hartree-Fock theory of the inhomogeneous electron gas at a jellium metal surface: Rigorous upper bounds to the surface energy and accurate work functions

    NASA Astrophysics Data System (ADS)

    Sahni, V.; Ma, C. Q.

    1980-12-01

    The inhomogeneous electron gas at a jellium metal surface is studied in the Hartree-Fock approximation by Kohn-Sham density functional theory. Rigorous upper bounds to the surface energy are derived by application of the Rayleigh-Ritz variational principle for the energy, the surface kinetic, electrostatic, and nonlocal exchange energy functionals being determined exactly for the accurate linear-potential model electronic wave functions. The densities obtained by the energy minimization constraint are then employed to determine work-function results via the variationally accurate "displaced-profile change-in-self-consistent-field" expression. The theoretical basis of this non-self-consistent procedure and its demonstrated accuracy for the fully correlated system (as treated within the local-density approximation for exchange and correlation) leads us to conclude these results for the surface energies and work functions to be essentially exact. Work-function values are also determined by the Koopmans'-theorem expression, both for these densities as well as for those obtained by satisfaction of the constraint set on the electrostatic potential by the Budd-Vannimenus theorem. The use of the Hartree-Fock results in the accurate estimation of correlation-effect contributions to these surface properties of the nonuniform electron gas is also indicated. In addition, the original work and approximations made by Bardeen in this attempt at a solution of the Hartree-Fock problem are briefly reviewed in order to contrast with the present work.

  20. Precomputing upscaled hydraulic conductivity for complex geological structures

    NASA Astrophysics Data System (ADS)

    Mariethoz, G.; Jha, S. K.; George, M.; Maheswarajah, S.; John, V.; De Re, D.; Smith, M.

    2013-12-01

3D geological models are built to capture geological heterogeneity at a fine scale. However, groundwater modellers are often interested in hydraulic conductivity (K) values at a much coarser scale to reduce the numerical burden. Upscaling is used to assign conductivity to large volumes, which necessarily causes a loss of information. Recent literature has shown that connectivity in channelized structures is an important feature that needs to be taken into account for accurate upscaling. In this work we study the effect of channel parameters (e.g., width, sinuosity, and connectivity) on the upscaled values of hydraulic conductivity and the associated uncertainty. We devise a methodology that derives correspondences between a lithological description and the equivalent hydraulic conductivity at a larger scale. The method uses multiple-point geostatistics (MPS) simulations and parameterizes the 3D structures by introducing continuous rotation and affinity parameters. Additional statistical characterization is obtained by transition probabilities and connectivity measures. Equivalent hydraulic conductivity is then estimated by solving a flow problem for the entire heterogeneous domain, applying steady-state flow in the horizontal and vertical directions. This is performed systematically for many random realisations of the small-scale structures to obtain a probability distribution for the equivalent upscaled hydraulic conductivity. This process yields systematic relationships between a given depositional environment and precomputed equivalent parameters. A modeller can then exploit prior knowledge of the depositional environment and expected geological heterogeneity to bypass the step of generating small-scale models, and work directly with upscaled values.
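The equivalent-conductivity idea above has two classic analytic end members that bound the full 3D flow solution: flow parallel to layering averages arithmetically, flow perpendicular averages harmonically. A minimal sketch (the paper solves the full heterogeneous flow problem, which falls between these bounds for channelized media):

```python
def equivalent_k(k_cells, direction):
    """Equivalent (upscaled) hydraulic conductivity of a stack of
    equal-size cells under steady uniform flow.

    'parallel'      -> arithmetic mean (flow along the layers)
    'perpendicular' -> harmonic mean  (flow across the layers)
    """
    n = len(k_cells)
    if direction == "parallel":
        return sum(k_cells) / n
    if direction == "perpendicular":
        return n / sum(1.0 / k for k in k_cells)
    raise ValueError(f"unknown direction: {direction}")
```

The gap between the two means for a given realisation is one simple indicator of how strongly connectivity controls the upscaled value.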

  1. Matched Interface and Boundary Method for Elasticity Interface Problems

    PubMed Central

    Wang, Bao; Xia, Kelin; Wei, Guo-Wei

    2015-01-01

Elasticity theory is an important component of continuum mechanics and has widespread applications in science and engineering. Material interfaces are ubiquitous in nature and man-made devices, and often give rise to discontinuous coefficients in the governing elasticity equations. In this work, the matched interface and boundary (MIB) method is developed to address elasticity interface problems. Linear elasticity theory for both isotropic homogeneous and inhomogeneous media is employed. In our approach, Lamé's parameters can have jumps across the interface and are allowed to be position dependent in modeling isotropic inhomogeneous material. Both strong discontinuity, i.e., a discontinuous solution, and weak discontinuity, namely, discontinuous derivatives of the solution, are considered in the present study. In the proposed method, fictitious values are utilized so that standard central finite difference schemes can be employed regardless of the interface. Interface jump conditions are enforced on the interface, which, in turn, accurately determine the fictitious values. We design new MIB schemes to account for complex interface geometries. In particular, the cross derivatives in the elasticity equations are difficult to handle for complex interface geometries. We propose secondary fictitious values and construct geometry-based interpolation schemes to overcome this difficulty. Numerous analytical examples are used to validate the accuracy, convergence and robustness of the present MIB method for elasticity interface problems with both small and large curvatures, strong and weak discontinuities, and constant and variable coefficients. Numerical tests indicate second order accuracy in both L∞ and L2 norms. PMID:25914439

  2. Derivation of ecological criteria for copper in land-applied biosolids and biosolid-amended agricultural soils.

    PubMed

    Lu, Tao; Li, Jumei; Wang, Xiaoqing; Ma, Yibing; Smolders, Erik; Zhu, Nanwen

    2016-12-01

The difference in availability between soil metals added via biosolids and via soluble salts was not taken into account in deriving the current land-applied biosolids standards. In the present study, a biosolids availability factor (BAF) approach was adopted to investigate the ecological thresholds for copper (Cu) in land-applied biosolids and biosolid-amended agricultural soils. First, the soil property-specific values of HC5add (the added hazardous concentration for 5% of species) for Cu2+ salt-amended soils were collected, with due attention to data for organisms and soils relevant to China. Second, a BAF representing the difference in availability between soil Cu added via biosolids and via soluble salts was estimated based on long-term biosolid-amended soils, including soils from China. Third, biosolids Cu HC5input values (the input hazardous concentration for 5% of species of Cu from biosolids to soil) as a function of soil properties were derived using the BAF approach. The average potential availability of Cu in agricultural soils amended with biosolids accounted for 53% of that for the same soils spiked with the same amount of soluble Cu salts and with a similar aging time. The cation exchange capacity was the main factor affecting the biosolids Cu HC5input values, while soil pH and organic carbon explained only 24.2 and 1.5% of the variation, respectively. The biosolids Cu HC5input values can be accurately predicted by regression models developed based on 2-3 soil properties, with coefficients of determination (R2) of 0.889 and 0.945. Compared with model-predicted biosolids Cu HC5input values, the current standards (GB4284-84) are most likely to be less protective in acidic and neutral soils, but conservative in alkaline non-calcareous soils. 
Recommendations on ecological criteria for Cu in land-applied biosolids and biosolid-amended agricultural soils may help to fill the gaps between science and regulation, and can be useful for Cu risk assessments in soils amended with biosolids. Copyright © 2016 Elsevier Ltd. All rights reserved.
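One plausible arithmetic reading of the BAF approach above (our interpretation, not the paper's stated equation): if biosolids-added Cu is only a fraction BAF as available as salt-added Cu, the allowable biosolids input concentration scales up by 1/BAF relative to the salt-derived threshold:

```python
def biosolids_hc5_input(hc5_add_salt, baf=0.53):
    """Hypothetical BAF scaling: convert a salt-derived HC5add threshold
    (mg Cu/kg soil) into a biosolids HC5input threshold, assuming only a
    fraction `baf` of biosolids-added Cu is as available as salt-added Cu.
    baf = 0.53 is the average availability ratio reported in the abstract.
    """
    if not 0.0 < baf <= 1.0:
        raise ValueError("BAF must be a fraction in (0, 1]")
    return hc5_add_salt / baf
```

In the paper the HC5 values themselves are further functions of soil properties (mainly cation exchange capacity), so this scaling would be applied per soil rather than with a single constant.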

  3. Anomalous Diffusion Measured by a Twice-Refocused Spin Echo Pulse Sequence: Analysis Using Fractional Order Calculus

    PubMed Central

    2011-01-01

Purpose To theoretically develop and experimentally validate a formalism based on a fractional order calculus (FC) diffusion model to characterize anomalous diffusion in brain tissues measured with a twice-refocused spin-echo (TRSE) pulse sequence. Materials and Methods The FC diffusion model is the fractional order generalization of the Bloch-Torrey equation. Using this model, an analytical expression was derived to describe the diffusion-induced signal attenuation in a TRSE pulse sequence. To experimentally validate this expression, a set of diffusion-weighted (DW) images was acquired at 3 Tesla from healthy human brains using a TRSE sequence with twelve b-values ranging from 0 to 2,600 s/mm2. For comparison, DW images were also acquired using a Stejskal-Tanner diffusion gradient in a single-shot spin-echo echo planar sequence. For both datasets, a Levenberg-Marquardt fitting algorithm was used to extract three parameters: diffusion coefficient D, fractional order derivative in space β, and a spatial parameter μ (in units of μm). Using adjusted R-squared values and standard deviations, the D, β and μ values and the goodness-of-fit in three specific regions of interest (ROIs) in white matter, gray matter, and cerebrospinal fluid were evaluated for each of the two datasets. In addition, spatially resolved parametric maps were assessed qualitatively. Results The analytical expression for the TRSE sequence, derived from the FC diffusion model, accurately characterized the diffusion-induced signal loss in brain tissues at high b-values. In the selected ROIs, the goodness-of-fit and standard deviations for the TRSE dataset were comparable with the results obtained from the Stejskal-Tanner dataset, demonstrating the robustness of the FC model across multiple data acquisition strategies. Qualitatively, the D, β, and μ maps from the TRSE dataset exhibited fewer artifacts, reflecting the improved immunity to eddy currents. 
Conclusion The diffusion-induced signal attenuation in a TRSE pulse sequence can be described by an FC diffusion model at high b-values. This model performs equally well for data acquired from the human brain tissues with a TRSE pulse sequence or a conventional Stejskal-Tanner sequence. PMID:21509877

  4. Anomalous diffusion measured by a twice-refocused spin echo pulse sequence: analysis using fractional order calculus.

    PubMed

    Gao, Qing; Srinivasan, Girish; Magin, Richard L; Zhou, Xiaohong Joe

    2011-05-01

To theoretically develop and experimentally validate a formalism based on a fractional order calculus (FC) diffusion model to characterize anomalous diffusion in brain tissues measured with a twice-refocused spin-echo (TRSE) pulse sequence. The FC diffusion model is the fractional order generalization of the Bloch-Torrey equation. Using this model, an analytical expression was derived to describe the diffusion-induced signal attenuation in a TRSE pulse sequence. To experimentally validate this expression, a set of diffusion-weighted (DW) images was acquired at 3 Tesla from healthy human brains using a TRSE sequence with twelve b-values ranging from 0 to 2600 s/mm(2). For comparison, DW images were also acquired using a Stejskal-Tanner diffusion gradient in a single-shot spin-echo echo planar sequence. For both datasets, a Levenberg-Marquardt fitting algorithm was used to extract three parameters: diffusion coefficient D, fractional order derivative in space β, and a spatial parameter μ (in units of μm). Using adjusted R-squared values and standard deviations, the D, β, and μ values and the goodness-of-fit in three specific regions of interest (ROIs) in white matter, gray matter, and cerebrospinal fluid, respectively, were evaluated for each of the two datasets. In addition, spatially resolved parametric maps were assessed qualitatively. The analytical expression for the TRSE sequence, derived from the FC diffusion model, accurately characterized the diffusion-induced signal loss in brain tissues at high b-values. In the selected ROIs, the goodness-of-fit and standard deviations for the TRSE dataset were comparable with the results obtained from the Stejskal-Tanner dataset, demonstrating the robustness of the FC model across multiple data acquisition strategies. Qualitatively, the D, β, and μ maps from the TRSE dataset exhibited fewer artifacts, reflecting the improved immunity to eddy currents. 
The diffusion-induced signal attenuation in a TRSE pulse sequence can be described by an FC diffusion model at high b-values. This model performs equally well for data acquired from the human brain tissues with a TRSE pulse sequence or a conventional Stejskal-Tanner sequence. Copyright © 2011 Wiley-Liss, Inc.
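The anomalous-diffusion behaviour fitted above can be illustrated with a simplified stretched-exponential signal model, S(b) = S0·exp(-(b·D)^β). This omits the gradient-timing terms of the paper's full TRSE expression and the μ parameter, and recovers D and β by a log-log linearization rather than Levenberg-Marquardt; all data are synthetic:

```python
import numpy as np

# Synthetic DW signal for a simplified stretched-exponential model with
# S0 = 1, twelve nonzero b-values up to 2600 s/mm^2 (as in the study).
b = np.linspace(100, 2600, 12)                 # b-values, s/mm^2
rng = np.random.default_rng(1)
d_true, beta_true = 0.7e-3, 0.85               # D in mm^2/s, stretching exponent
signal = np.exp(-(b * d_true) ** beta_true) * (1 + rng.normal(0, 0.005, b.size))

# Linearize: log(-log S) = beta * log(b) + beta * log(D),
# so a straight-line fit gives beta (slope) and D (from the intercept).
y = np.log(-np.log(signal))
beta_fit, intercept = np.polyfit(np.log(b), y, 1)
d_fit = np.exp(intercept / beta_fit)
```

β = 1 recovers ordinary monoexponential diffusion; β < 1, as typically found in tissue at high b-values, produces the slower-than-exponential signal decay the FC model describes.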

  5. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA

Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
We show numerical simulations on band-limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.

  6. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    NASA Astrophysics Data System (ADS)

    Duru, Kenneth; Dunham, Eric M.

    2016-01-01

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
We show numerical simulations on band-limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
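The summation-by-parts property underpinning the energy estimates above states that a first-derivative operator D with diagonal norm matrix H satisfies HD + (HD)ᵀ = B = diag(-1, 0, …, 0, 1), mimicking integration by parts. A sketch with the classic second-order-interior SBP operator (the paper uses sixth-order-interior variants, which share the same structure):

```python
import numpy as np

def sbp_d1_second_order(n, dx):
    """Second-order-interior SBP first-derivative operator D and its
    diagonal norm matrix H on n grid points with spacing dx."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / dx, 0.5 / dx   # central interior
    D[0, 0], D[0, 1] = -1.0 / dx, 1.0 / dx               # one-sided boundary
    D[-1, -2], D[-1, -1] = -1.0 / dx, 1.0 / dx
    H = np.eye(n) * dx                                   # trapezoidal norm
    H[0, 0] = H[-1, -1] = 0.5 * dx
    return D, H
```

Because HD + (HD)ᵀ reduces to pure boundary terms, discrete energy growth can only enter through the boundaries, which is exactly where the weak (penalty) enforcement of friction and interface conditions acts.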

  7. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
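One of the two algorithm families evaluated above, the Midpoint Algorithm, can be sketched compactly: to mask up to f faulty (possibly malicious) clocks, discard the f largest and f smallest observed clock readings and adjust to the midpoint of the surviving extremes. This is a generic sketch of the midpoint idea, not the exact correction rule of the cited experiments:

```python
def midpoint_correction(clock_readings, f):
    """Fault-tolerant midpoint clock correction.

    clock_readings: observed clock differences from this node's clock,
        one per node in the system.
    f: maximum number of faulty clocks to tolerate; the f highest and
       f lowest readings are discarded before taking the midpoint.
    """
    s = sorted(clock_readings)
    trimmed = s[f:len(s) - f] if f else s
    if not trimmed:
        raise ValueError("too many faults for this many clocks")
    return (trimmed[0] + trimmed[-1]) / 2
```

The trimming step is what lets a maliciously wrong reading (like the 100 below) be ignored, at the cost of the skew penalty the experiments quantify.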

  8. Effective optical constants of anisotropic materials

    NASA Technical Reports Server (NTRS)

    Aronson, J. R.; Emslie, A. G.

    1980-01-01

The applicability of a technique for determining the optical constants of soil or aerosol components on the basis of measurements of the reflectance or transmittance of inhomogeneous samples of component material is investigated. Optical constants for a sample of very pure quartzite were obtained by a specular reflection technique, and line parameters were calculated by classical dispersion theory. Predictions of the reflectance of powdered quartz were then derived from the optical constants measured for the quartzite and for pure quartz crystals, and compared with experimental measurements. The calculated spectra are found to resemble each other moderately well in shape; however, the reflectance level calculated from the pseudo-optical constants (quartzite) is consistently below that calculated from the quartz values. The spectrum calculated from the quartz optical constants is also shown to represent the experimental non-reststrahlen features more accurately. It is thus concluded that although optical constants derived from inhomogeneous materials may represent the spectral features of a powdered sample qualitatively, a quantitative fit to observed data is not likely.
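The link between measured specular reflectance and the derived optical constants (n, k) above is, at normal incidence, the Fresnel relation R = ((n-1)² + k²) / ((n+1)² + k²). A minimal sketch of the forward direction (the paper inverts this, fitting n and k so that the computed R matches the measured spectrum):

```python
def normal_incidence_reflectance(n, k):
    """Fresnel specular reflectance at normal incidence from the optical
    constants: real index n and extinction coefficient k."""
    return ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)
```

For a transparent glass-like material (n = 1.5, k = 0) this gives the familiar 4% surface reflectance; in reststrahlen bands, where k is large, R approaches 1.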

  9. Piezo-optic tensor of crystals from quantum-mechanical calculations.

    PubMed

    Erba, A; Ruggiero, M T; Korter, T M; Dovesi, R

    2015-10-14

    An automated computational strategy is devised for the ab initio determination of the full fourth-rank piezo-optic tensor of crystals belonging to any space group of symmetry. Elastic stiffness and compliance constants are obtained as numerical first derivatives of analytical energy gradients with respect to the strain and photo-elastic constants as numerical derivatives of analytical dielectric tensor components, which are in turn computed through a Coupled-Perturbed-Hartree-Fock/Kohn-Sham approach, with respect to the strain. Both point and translation symmetries are exploited at all steps of the calculation, within the framework of periodic boundary conditions. The scheme is applied to the determination of the full set of ten symmetry-independent piezo-optic constants of calcium tungstate CaWO4, which have recently been experimentally reconstructed. Present calculations unambiguously determine the absolute sign (positive) of the π61 constant, confirm the reliability of 6 out of 10 experimentally determined constants and provide new, more accurate values for the remaining 4 constants.
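
    The numerical-derivative step described above (photo-elastic constants as derivatives of analytical dielectric tensor components with respect to strain) is, at its core, a finite-difference evaluation. A minimal sketch, assuming a symmetric central difference and a scalar stand-in for one dielectric tensor component as a function of strain:

```python
def central_difference(f, x0, h=1e-3):
    # Symmetric (central) finite difference for the first derivative,
    # the same scheme commonly used to differentiate a computed
    # dielectric-tensor component with respect to an applied strain.
    # Accurate to O(h^2); exact (up to rounding) for quadratics.
    return (f(x0 + h) - f(x0 - h)) / (2.0 * h)
```

    For example, differentiating a quadratic model dielectric response eps(eta) = 3.0 + 0.5*eta + 2.0*eta^2 at zero strain recovers the linear coefficient 0.5.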

  10. Piezo-optic tensor of crystals from quantum-mechanical calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erba, A., E-mail: alessandro.erba@unito.it; Dovesi, R.; Ruggiero, M. T.

    2015-10-14

    An automated computational strategy is devised for the ab initio determination of the full fourth-rank piezo-optic tensor of crystals belonging to any space group of symmetry. Elastic stiffness and compliance constants are obtained as numerical first derivatives of analytical energy gradients with respect to the strain and photo-elastic constants as numerical derivatives of analytical dielectric tensor components, which are in turn computed through a Coupled-Perturbed-Hartree-Fock/Kohn-Sham approach, with respect to the strain. Both point and translation symmetries are exploited at all steps of the calculation, within the framework of periodic boundary conditions. The scheme is applied to the determination of the full set of ten symmetry-independent piezo-optic constants of calcium tungstate CaWO4, which have recently been experimentally reconstructed. Present calculations unambiguously determine the absolute sign (positive) of the π61 constant, confirm the reliability of 6 out of 10 experimentally determined constants and provide new, more accurate values for the remaining 4 constants.

  11. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    PubMed

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. High bioethanol titre from Manihot glaziovii through fed-batch simultaneous saccharification and fermentation in Automatic Gas Potential Test System.

    PubMed

    Moshi, Anselm P; Crespo, Carla F; Badshah, Malik; Hosea, Kenneth M M; Mshandete, Anthony Manoni; Mattiasson, Bo

    2014-03-01

    A process for the production of high bioethanol titre was established through fed-batch and simultaneous saccharification and fermentation (FB-SSF) of wild, non-edible cassava Manihot glaziovii. FB-SSF allowed fermentation of up to 390 g/L of starch-derived glucose, achieving high bioethanol concentrations of up to 190 g/L (24% v/v) with yields of around 94% of the theoretical value. The wild cassava M. glaziovii starch is hydrolysable with a low dosage of amylolytic enzymes (0.1-0.15% v/w, Termamyl® and AMG®). The Automatic Gas Potential Test System (AMPTS) was adapted to yeast ethanol fermentation and demonstrated to be an accurate, reliable and flexible device for studying the kinetics of yeast in SSF and FB-SSF. The bioethanol derived stoichiometrically from the CO2 registered in the AMPTS software correlated positively with samples analysed by HPLC (R^2 = 0.99). Copyright © 2013 Elsevier Ltd. All rights reserved.
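
    The stoichiometric CO2-to-ethanol conversion mentioned above follows from the fermentation equation C6H12O6 -> 2 C2H5OH + 2 CO2, in which ethanol and CO2 are produced in equal molar amounts. A minimal sketch (the function name and gram-based interface are assumptions, not the AMPTS software's actual calculation):

```python
M_CO2 = 44.01    # g/mol, molar mass of carbon dioxide
M_ETOH = 46.07   # g/mol, molar mass of ethanol

def ethanol_from_co2(co2_grams):
    # Glucose fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2, so each mole of
    # CO2 released corresponds to one mole of ethanol produced.
    moles_co2 = co2_grams / M_CO2
    return moles_co2 * M_ETOH
```

    Thus every 44.01 g of registered CO2 implies roughly 46.07 g of ethanol, which is the basis of the HPLC cross-check reported above.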

  13. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  14. Validation of Leaf Area Index measurements based on the Wireless Sensor Network platform

    NASA Astrophysics Data System (ADS)

    Song, Q.; Li, X.; Liu, Q.

    2017-12-01

    The leaf area index (LAI) is one of the important parameters for estimating plant canopy function and is significant for agricultural analyses such as crop yield estimation and disease evaluation. Quick and accurate acquisition of crop LAI is therefore particularly vital. In this study, the LAI of corn crops was measured by three methods: the leaf length and width method (LAILLW), the indirect instrument measurement method (LAII) and the leaf area index sensor method (LAIS). Among them, the LAI value obtained from LAILLW can be regarded as an approximate true value. LAI-2200, currently a widespread LAI canopy analyzer, is used in LAII. LAIS, based on a wireless sensor network, can acquire crop images automatically, simplifying data collection, while the other two methods require field measurements in person. Through comparison of LAIS with the other two methods, the validity and reliability of the LAIS observation system is verified. It is found that LAI trends are similar across the three methods, and the rate of change of LAI increases with time in the first two months of corn growth, while LAIS costs less manpower, energy and time. LAI derived from LAIS is more accurate than LAII in the early growth stage, when the blades are small, especially under strong light. Besides, LAI processed from a false-color image with near-infrared information is much closer to the true value than one from a true-color picture once the corn growth period exceeds one and a half months.
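
    The leaf length and width method used above as the approximate true value is typically computed by summing per-leaf areas estimated as a shape coefficient times length times width, divided by the ground area sampled. A minimal sketch, assuming the coefficient 0.75 commonly used for maize leaves (an assumption, not a value stated in the abstract):

```python
K_MAIZE = 0.75  # leaf shape coefficient often used for maize (assumption)

def leaf_area_index(leaves_cm, ground_area_cm2):
    # leaves_cm: list of (length, width) pairs per leaf, in cm.
    # Single-leaf one-sided area is approximated as k * length * width;
    # LAI is total one-sided leaf area divided by the ground area it covers.
    total_leaf_area = sum(K_MAIZE * length * width for length, width in leaves_cm)
    return total_leaf_area / ground_area_cm2
```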

  15. 77 FR 41207 - Self-Regulatory Organizations; Chicago Mercantile Exchange, Inc.; Notice of Filing and Order...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-12

    ... as promoting market transparency for over-the-counter derivatives markets, promoting the prompt and... and accurate clearance and settlement of securities transactions and derivatives agreements, contracts... activities pursuant to its registration as a derivatives clearing organization under the Commodity Exchange...

  16. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
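
    The covariance correction argued for above can be made concrete with first-order (delta-method) error propagation for the ratio ϕ = ΔΔG(transition state) / ΔΔG(equilibrium). A hedged sketch (the interface is assumed; this is not the Web-based implementation described in the abstract):

```python
import math

def phi_value(ddG_ts, ddG_eq):
    # phi = change in transition-state free energy / change in
    # equilibrium free energy upon mutation.
    return ddG_ts / ddG_eq

def phi_stderr(ddG_ts, ddG_eq, var_ts, var_eq, cov):
    # First-order (delta-method) standard error of the ratio, keeping the
    # covariance term that is dropped when the two free-energy changes are
    # wrongly assumed independent; a positive covariance shrinks the error.
    phi = ddG_ts / ddG_eq
    rel_var = (var_ts / ddG_ts**2
               + var_eq / ddG_eq**2
               - 2.0 * cov / (ddG_ts * ddG_eq))
    return abs(phi) * math.sqrt(rel_var)
```

    With a positive covariance between the two ΔΔG estimates, the propagated error is smaller than under the independence assumption, which is why that assumption typically underestimates ϕ precision in the opposite-sign cases and misstates it generally.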

  17. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    PubMed

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞)s and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
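
    The classic Bailer-type point estimate referenced above computes AUC by the linear trapezoidal rule on the mean concentrations at each sacrifice time, and the partition coefficient as a ratio of tissue to plasma AUCs. A minimal sketch of that non-Bayesian baseline (function names are assumptions):

```python
def trapezoid_auc(times, mean_conc):
    # AUC(0-tlast) by the linear trapezoidal rule applied to mean
    # concentrations from serially sacrificed animals (Bailer-type
    # point estimate; extrapolation to infinity is omitted here).
    auc = 0.0
    for i in range(1, len(times)):
        auc += 0.5 * (mean_conc[i] + mean_conc[i - 1]) * (times[i] - times[i - 1])
    return auc

def partition_coefficient(auc_tissue, auc_plasma):
    # Tissue-to-plasma partition coefficient as a ratio of AUCs.
    return auc_tissue / auc_plasma
```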

  18. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff error. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
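
    A standard way to build the Chebyshev differentiation matrix, including the "negative sum trick" often used to tame the roundoff issue discussed above, can be sketched as follows. This follows the common textbook construction, not necessarily the specific algorithms benchmarked in the paper:

```python
import numpy as np

def cheb_diff_matrix(N):
    # Chebyshev collocation derivative on the points x_j = cos(pi*j/N),
    # j = 0..N (degree-N polynomial interpolation, so exact for
    # polynomials of degree <= N up to rounding).
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** j
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    # "Negative sum trick": set each diagonal entry so that every row sums
    # to zero (derivative of a constant is zero); in floating point this is
    # more accurate than evaluating the analytic diagonal formula directly.
    D -= np.diag(D.sum(axis=1))
    return D, x
```

    As a quick check, applying the matrix to x^2 reproduces 2x to near machine precision.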

  19. Charge and energy dependence of the residence time of cosmic ray nuclei below 15 GeV/nucleon

    NASA Technical Reports Server (NTRS)

    Soutoul, A.; Engelmann, J. J.; Ferrando, P.; Koch-Miramond, L.; Masse, P.; Webber, W. R.

    1985-01-01

    The relative abundance of nuclear species measured in cosmic rays at Earth has often been interpreted with the simple leaky box model. For this model to be consistent an essential requirement is that the escape length does not depend on the nuclear species. The discrepancy between escape length values derived from iron secondaries and from the B/C ratio was identified by Garcia-Munoz and his co-workers using a large amount of experimental data. Ormes and Protheroe found a similar trend in the HEAO data although they questioned its significance against uncertainties. They also showed that the change in the B/C ratio values implies a decrease of the residence time of cosmic rays at low energies in conflict with the diffusive convective picture. These conclusions crucially depend on the partial cross section values and their uncertainties. Recently new accurate cross sections of key importance for propagation calculations have been measured. Their statistical uncertainties are often better than 4% and their values significantly different from those previously accepted. Here, these new cross sections are used to compare the observed B/(C+O) and (Sc to Cr)/Fe ratios to those predicted with the simple leaky box model.

  20. Effect of Facet Displacement on Radiation Field and Its Application for Panel Adjustment of Large Reflector Antenna

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Lian, Peiyuan; Zhang, Shuxin; Xiang, Binbin; Xu, Qian

    2017-05-01

    Large reflector antennas are widely used in radars, satellite communication, radio astronomy, and so on. The rapid developments in these fields have created demands for better performance and higher surface accuracy. However, low accuracy and low efficiency are common disadvantages of traditional panel alignment and adjustment. In order to improve the surface accuracy of large reflector antennas, a new method is presented to determine panel adjustment values from the far-field pattern. Based on the method of Physical Optics (PO), the effect of panel facet displacement on the radiation field value is derived. A linear system is then constructed between the panel adjustment vector and the far-field pattern. Using Singular Value Decomposition (SVD), the adjustment values for all panel adjusters are obtained by solving the linear equations. An experiment is conducted on a 3.7 m reflector antenna with 12 segmented panels. The results of simulation and test are similar, which shows that the presented method is feasible. Moreover, the discussion of validation shows that the method can be used for many reflector shapes. The proposed research provides guidance for adjusting surface panels efficiently and accurately.
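
    The SVD step described above amounts to solving a linear least-squares problem with a truncated pseudoinverse. A minimal sketch, assuming a precomputed sensitivity matrix A that maps adjuster motions to far-field pattern perturbations (the names and the truncation threshold are assumptions):

```python
import numpy as np

def panel_adjustments(A, d_pattern, rcond=1e-10):
    # Solve A @ u = d for the adjuster displacement vector u in the
    # least-squares sense. Discarding singular values below
    # rcond * s_max regularizes the (typically ill-conditioned) inversion.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ d_pattern))
```

    In practice A would have one column per adjuster and one row per sampled far-field point; the sketch works for any such shapes.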

  1. Quantitative confirmation of diffusion-limited oxidation theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Clough, R.L.

    1990-01-01

    Diffusion-limited (heterogeneous) oxidation effects are often important for studies of polymer degradation. Such effects are common in polymers subjected to ionizing radiation at relatively high dose rate. To better understand the underlying oxidation processes and to aid in the planning of accelerated aging studies, it would be desirable to be able to monitor and quantitatively understand these effects. In this paper, we briefly review a theoretical diffusion approach which derives model profiles for oxygen-surrounded sheets of material by combining oxygen permeation rates with kinetically based oxygen consumption expressions. The theory leads to a simple governing expression involving the oxygen consumption and permeation rates together with two model parameters α and β. To test the theory, gamma-initiated oxidation of a sheet of commercially formulated EPDM rubber was performed under conditions which led to diffusion-limited oxidation. Profile shapes from the theoretical treatments are shown to accurately fit experimentally derived oxidation profiles. In addition, direct measurements on the same EPDM material of the oxygen consumption and permeation rates, together with values of α and β derived from the fitting procedure, allow us to quantitatively confirm for the first time the governing theoretical relationship. 17 refs., 3 figs.

  2. The initial value problem in Lagrangian drift kinetic theory

    NASA Astrophysics Data System (ADS)

    Burby, J. W.

    2016-06-01

    Existing high-order variational drift kinetic theories contain unphysical rapidly varying modes that are not seen at low orders. These unphysical modes, which may be rapidly oscillating, damped or growing, are ushered in by a failure of conventional high-order drift kinetic theory to preserve the structure of its parent model's initial value problem. In short, the (infinite dimensional) system phase space is unphysically enlarged in conventional high-order variational drift kinetic theory. I present an alternative, `renormalized' variational approach to drift kinetic theory that manifestly respects the parent model's initial value problem. The basic philosophy underlying this alternate approach is that high-order drift kinetic theory ought to be derived by truncating the all-orders system phase-space Lagrangian instead of the usual `field particle' Lagrangian. For the sake of clarity, this story is told first through the lens of a finite-dimensional toy model of high-order variational drift kinetics; the analogous full-on drift kinetic story is discussed subsequently. The renormalized drift kinetic system, while variational and just as formally accurate as conventional formulations, does not support the troublesome rapidly varying modes.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sileghem, L.; Wallner, T.; Verhelst, S.

    As knock is one of the main factors limiting the efficiency of spark-ignition engines, the introduction of alcohol blends could help to mitigate knock concerns due to the elevated knock resistance of these blends. A model that can accurately predict their autoignition behavior would be of great value to engine designers. The current work aims to develop such a model for alcohol–gasoline blends. First, a mixing rule for the autoignition delay time of alcohol–gasoline blends is proposed. Subsequently, this mixing rule is used together with an autoignition delay time correlation of gasoline and an autoignition delay time correlation of methanol in a knock integral model that is implemented in a two-zone engine code. The predictive performance of the resulting model is validated through comparison against experimental measurements on a CFR engine for a range of gasoline–methanol blends. The knock limited spark advance, the knock intensity, the knock onset crank angle and the value of the knock integral at the experimental knock onset have been simulated and compared to the experimental values derived from in-cylinder pressure measurements.
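
    The knock integral model mentioned above follows the Livengood-Wu idea: autoignition (knock onset) is predicted when the running integral of dt/τ reaches 1, where τ is the instantaneous autoignition delay time from the correlation. A minimal sketch over discretized time steps (the interface is an assumption, not the two-zone engine code itself):

```python
def knock_integral(taus, dt):
    # Livengood-Wu knock integral: accumulate dt / tau over successive
    # time steps; knock onset is predicted at the step where the
    # running sum first reaches 1. Returns that step index, or None
    # if the end of the cycle is reached without knock.
    total = 0.0
    for i, tau in enumerate(taus):
        total += dt / tau
        if total >= 1.0:
            return i
    return None
```

    Here `taus` would come from evaluating the blend's autoignition delay correlation along the simulated pressure-temperature trajectory.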

  4. NSRD-15:Computational Capability to Substantiate DOE-HDBK-3010 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louie, David; Bignell, John; Dingreville, Remi Philippe Michel

    Safety basis analysts throughout the U.S. Department of Energy (DOE) complex rely heavily on the information provided in the DOE Handbook, DOE-HDBK-3010, Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities, to determine radionuclide source terms from postulated accident scenarios. In calculating source terms, analysts tend to use the DOE Handbook’s bounding values on airborne release fractions (ARFs) and respirable fractions (RFs) for various categories of insults (representing potential accident release categories). This is typically due to both time constraints and the avoidance of regulatory critique. Unfortunately, these bounding ARFs/RFs represent extremely conservative values. Moreover, they were derived from very limited small-scale bench/laboratory experiments and/or from engineering judgment. Thus, the basis for the data may not be representative of the actual unique accident conditions and configurations being evaluated. The goal of this research is to develop a more accurate and defensible method to determine bounding values for the DOE Handbook using state-of-art multi-physics-based computer codes.

  5. Accurate computation and continuation of homoclinic and heteroclinic orbits for singular perturbation problems

    NASA Technical Reports Server (NTRS)

    Vaughan, William W.; Friedman, Mark J.; Monteiro, Anand C.

    1993-01-01

    In earlier papers, Doedel and the authors have developed a numerical method and derived error estimates for the computation of branches of heteroclinic orbits for a system of autonomous ordinary differential equations in R(exp n). The idea of the method is to reduce a boundary value problem on the real line to a boundary value problem on a finite interval by using a local (linear or higher order) approximation of the stable and unstable manifolds. A practical limitation for the computation of homoclinic and heteroclinic orbits has been the difficulty in obtaining starting orbits. Typically these were obtained from a closed form solution or via a homotopy from a known solution. Here we consider extensions of our algorithm which allow us to obtain starting orbits on the continuation branch in a more systematic way as well as make the continuation algorithm more flexible. In applications, we use the continuation software package AUTO in combination with some initial value software. The examples considered include computation of homoclinic orbits in a singular perturbation problem and in a turbulent fluid boundary layer in the wall region problem.

  6. Determination of Ionospheric Total Electron Content Derived from Gnss Measurements

    NASA Astrophysics Data System (ADS)

    Inyurt, S.; Mekik, C.; Yildirim, O.

    2014-12-01

    The Global Navigation Satellite System (GNSS) has long been used in numerous fields, especially satellite-based radio navigation. The ionosphere, one of the upper atmosphere layers, extends from 60 km to 1500 km; it is a dispersive medium containing a large number of free electrons and ions. The ionization is mainly driven by the sun and its activity, and ionospheric activity also depends on seasonal and diurnal variations and on geographical location. Total Electron Content (TEC), also called Slant Total Electron Content (STEC), is a parameter that changes according to ionospheric conditions and is highly variable. Vertical TEC (VTEC) is the TEC value in the zenith direction, and it allows TEC to be modelled. TEC is measured in TEC units (TECU), where 1 TECU = 10^16 electrons/m^2. Ionospheric modelling is of great importance for improving positioning accuracy and for understanding the ionosphere, so various models have been developed in recent years to determine TEC. The Single Layer Model (SLM), which enables accurate determination of TEC and GPS positioning in the ionosphere, is one of the most commonly used; it assumes that all free electrons are concentrated in a shell of infinitesimal thickness. In this paper the SLM was used to derive TEC values by means of the Bernese 5.0 software developed by the University of Bern, Switzerland. A regional ionosphere model was used to derive the TEC values. First, GPS data were collected from 10 stations in Turkey and 13 IGS stations for 7 days, from 06.03.2010 to 12.03.2010. Then a Regional Ionosphere Model (RIM) was created from the GPS data. At the end of the process, the result files were stored in IONEX format. TEC results for those days were obtained at two-hour intervals. The TEC variation over the research area ranges from nearly 6 TECU to approximately 20 TECU.
The obtained results show that TEC values start increasing until mid-days and reach peak value at 12:00 UT. After 12:00 UT it begins decreasing gradually towards night because of recombination of the ions. As a result, SLM is an effective model for mapping TEC values and determination of TEC variation can be used to identify many studies such as precursor of earthquakes, volcanic eruptions and launching site determination etc.
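
    The SLM mapping from slant to vertical TEC described above can be sketched directly. It assumes the usual thin-shell geometry with an effective ionospheric shell height (the 450 km value below is an assumption for illustration, not a parameter from the abstract):

```python
import math

R_EARTH = 6371.0   # km, mean Earth radius
H_SHELL = 450.0    # km, assumed single-layer ionospheric shell height

def vtec_from_stec(stec, zenith_deg):
    # Single Layer Model mapping: sin(z') = R/(R+H) * sin(z) gives the
    # zenith angle z' at the ionospheric pierce point, and
    # VTEC = STEC * cos(z') projects the slant TEC onto the vertical.
    z = math.radians(zenith_deg)
    sin_zp = R_EARTH / (R_EARTH + H_SHELL) * math.sin(z)
    return stec * math.sqrt(1.0 - sin_zp ** 2)  # = stec * cos(z')
```

    At zenith the slant and vertical TEC coincide; at lower elevations the mapping shrinks the slant value, since the ray traverses a longer ionospheric path.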

  7. 78 FR 18377 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ...) permits a self-regulatory organization (``SRO'') to list and trade a new derivative securities product... Commission to maintain an accurate record of all new derivative securities products traded on the SROs, Rule... begins trading a new derivative securities product that is not required to be submitted as a proposed...

  8. 78 FR 56261 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-12

    ... market transparency for over-the-counter derivatives markets, promoting the prompt and accurate clearance... are limited to its business as a derivatives clearing organization. More specifically, the proposed... is registered as a derivatives clearing organization with the Commodity Futures Trading Commission...

  9. 78 FR 77512 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-23

    ... market transparency for over-the-counter derivatives markets, promoting the prompt and accurate clearance... derivatives clearing organization. More specifically, the proposed rule change would modify the fee schedule... as a derivatives clearing organization with the Commodity Futures Trading Commission and currently...

  10. 76 FR 65224 - Self-Regulatory Organizations; Chicago Mercantile Exchange, Inc.; Notice of Filing and Order...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-20

    ..., such as promoting market transparency for over-the-counter derivatives markets, promoting the prompt... registration as a derivatives clearing organization under the Commodity Exchange Act (``CEA'') and do not... accurate clearance and settlement of derivative agreements, contracts, and transactions because it should...

  11. 78 FR 52228 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-22

    ... derivatives markets, promoting the prompt and accurate clearance of transactions and protecting investors and... changes that are limited to its business as a derivatives clearing organization. More specifically, the... derivatives clearing organization with the Commodity Futures Trading Commission and currently offers clearing...

  12. 78 FR 14866 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing and Order...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... derivatives markets, promoting the prompt and accurate clearance of transactions and protecting investors and... Change CME proposes to amend rules related to its business as a derivatives clearing organization... as a derivatives clearing organization with the Commodity Futures Trading Commission (``CFTC'') and...

  13. 78 FR 64038 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-25

    ... transparency for over-the-counter derivatives markets, promoting the prompt and accurate clearance of... a derivatives clearing organization. The new CME rule simply specifies that CME will discharge any... Proposed Rule Change CME is registered as a derivatives clearing organization (``DCO'') with the Commodity...

  14. Determination of zenith hydrostatic delay and its impact on GNSS-derived integrated water vapor

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoming; Zhang, Kefei; Wu, Suqin; He, Changyong; Cheng, Yingyan; Li, Xingxing

    2017-08-01

    Surface pressure is a necessary meteorological variable for the accurate determination of integrated water vapor (IWV) using the Global Navigation Satellite System (GNSS). The lack of pressure observations is a big issue for the conversion of historical GNSS observations, which is a relatively new area of GNSS applications in climatology. Hence the use of surface pressure derived from either a blind model (e.g., Global Pressure and Temperature 2 wet, GPT2w) or a global atmospheric reanalysis (e.g., ERA-Interim) becomes an important alternative solution. In this study, pressure derived from these two methods is compared against the pressure observed at 108 global GNSS stations at four epochs (00:00, 06:00, 12:00 and 18:00 UTC) each day for the period 2000-2013. Results show that good accuracy is achieved from the GPT2w-derived pressure in the latitude band between -30 and 30°, where the average of the 6 h root-mean-square errors (RMSEs) across all stations is 2.5 hPa. Correspondingly, an error of 5.8 mm and 0.9 kg m^-2 in the resultant zenith hydrostatic delay (ZHD) and IWV is expected. However, for the stations located in the mid-latitude bands between -30 and -60° and between 30 and 60°, the mean of the RMSEs is 7.3 hPa, and for the stations located in the high-latitude bands from -60 to -90° and from 60 to 90°, the mean of the RMSEs is 9.9 hPa. The mean of the RMSEs of the ERA-Interim-derived pressure across the selected 100 stations is 0.9 hPa, which leads to an equivalent error of 2.1 mm and 0.3 kg m^-2 in the ZHD and IWV, respectively, determined from this ERA-Interim-derived pressure. 
Results also show that the monthly IWV determined using pressure from ERA-Interim has a good accuracy - with a relative error of better than 3 % on a global scale; thus, the monthly IWV resulting from ERA-Interim-derived pressure has the potential to be used for climate studies, whilst the monthly IWV resulting from GPT2w-derived pressure has a relative error of 6.7 % in the mid-latitude regions and even reaches 20.8 % in the high-latitude regions. The comparison between GPT2w and seasonal models of pressure-ZHD derived from ERA-Interim and pressure observations indicates that GPT2w captures the seasonal variations in pressure-ZHD very well.
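
    The pressure-to-ZHD conversion underlying the error figures above is conventionally done with the Saastamoinen model, in which each hPa of pressure error maps to roughly 2.3 mm of ZHD error, consistent with the numbers quoted in the abstract. A minimal sketch (the use of Saastamoinen specifically is an assumption; the abstract does not name the ZHD model):

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_km):
    # Zenith hydrostatic delay in metres from surface pressure
    # (Saastamoinen model). The factor f corrects for the variation of
    # gravity with latitude and station height.
    phi = math.radians(lat_deg)
    f = 1.0 - 0.00266 * math.cos(2.0 * phi) - 0.00028 * height_km
    return 0.0022768 * pressure_hpa / f
```

    At sea level and 45° latitude, standard pressure (1013.25 hPa) gives a ZHD of about 2.31 m, and the sensitivity of ~2.3 mm/hPa explains why a 2.5 hPa pressure RMSE translates into roughly 5.8 mm of ZHD error.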

  15. A comparison of five methods to predict genomic breeding values of dairy bulls from genome-wide SNP markers

    PubMed Central

    2009-01-01

    Background Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV. Initial simulation studies have shown that these methods can accurately predict MBV. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle. Methods Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls. Results For both traits, FR-LS using a subset of SNP was significantly less accurate than all other methods which used all SNP. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39-0.45) and for PPT (0.55-0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI, for PPT only RR-BLUP and SVR predictions were unbiased. A significant decrease in accuracy of prediction of ASI was seen in young test cohorts of bulls compared to the accuracy derived from cross-validation of the training set. This reduction was not apparent for PPT. Combining MBV predictions with pedigree based predictions gave 1.05 - 1.34 times higher accuracies compared to predictions based on pedigree alone. 
The methods differ substantially in computational requirements, with PLSR and RR-BLUP requiring the least computing time. Conclusions The four methods that use information from all SNP, namely RR-BLUP, Bayes-R, PLSR and SVR, generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will be comparable. The use of FR-LS in genomic selection is not recommended. PMID:20043835
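
Of the five methods compared, RR-BLUP is the simplest to sketch: every SNP effect receives the same ridge-type shrinkage. The toy simulation below uses invented dimensions and an arbitrary shrinkage parameter, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_snp = 200, 500  # invented, far smaller than the study

# Simulated genotypes coded 0/1/2 and a trait driven by 20 causal SNPs
X = rng.integers(0, 3, size=(n_animals, n_snp)).astype(float)
true_effects = np.zeros(n_snp)
true_effects[:20] = rng.normal(0.0, 1.0, 20)
y = X @ true_effects + rng.normal(0.0, 2.0, n_animals)

# RR-BLUP: all marker effects shrunk equally,
# beta = (X'X + lambda*I)^(-1) X'y   (lambda chosen arbitrarily here)
lam = 50.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_snp), X.T @ y)

# Molecular breeding values and their (in-sample) accuracy
mbv = X @ beta
accuracy = np.corrcoef(mbv, y)[0, 1]
```

In practice the accuracy that matters is estimated by cross-validation or on a separate test cohort, as in the study; the in-sample correlation here is optimistic by construction.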

  16. Effects of quantum noise in 4D-CT on deformable image registration and derived ventilation data

    NASA Astrophysics Data System (ADS)

    Latifi, Kujtim; Huang, Tzung-Chi; Feygelman, Vladimir; Budzevich, Mikalai M.; Moros, Eduardo G.; Dilling, Thomas J.; Stevens, Craig W.; van Elmpt, Wouter; Dekker, Andre; Zhang, Geoffrey G.

    2013-11-01

    Quantum noise is common in CT images and is a persistent problem in accurate ventilation imaging using 4D-CT and deformable image registration (DIR). This study focuses on the effects of noise in 4D-CT on DIR and the ventilation data derived from it. A total of six sets of 4D-CT data with landmarks delineated in different phases, called point-validated pixel-based breathing thorax models (POPI), were used in this study. The DIR algorithms, including diffeomorphic morphons (DM), diffeomorphic demons (DD), optical flow and B-spline, were used to register the inspiration phase to the expiration phase. The DIR deformation matrices (DIRDM) were used to map the landmarks. Target registration errors (TRE) were calculated as the distance errors between the delineated and the mapped landmarks. Noise of Gaussian distribution with different standard deviations (SD), from 0 to 200 Hounsfield Units (HU) in amplitude, was added to the POPI models to simulate different levels of quantum noise. Ventilation data were calculated using the ΔV algorithm, which calculates the volume change geometrically based on the DIRDM. The ventilation images with different added noise levels were compared using the Dice similarity coefficient (DSC). The root mean square (RMS) values of the landmark TRE over the six POPI models for the four DIR algorithms were stable when the noise level was low (SD <150 HU) and increased with added noise at higher levels. The most accurate DIR was DD, with a mean RMS of 1.5 ± 0.5 mm with no added noise and 1.8 ± 0.5 mm with noise (SD = 200 HU). The DSC values between the ventilation images with and without added noise decreased with the noise level, even when the noise level was relatively low. The DIR algorithm most robust with respect to noise was DM, with mean DSC = 0.89 ± 0.01 and 0.66 ± 0.02 for the top 50% ventilation volumes, comparing zero added noise with SD = 30 and 200 HU, respectively. 
Although the landmark TRE were stable with low noise, the differences between ventilation images increased with noise level, even when the noise was low, indicating ventilation imaging from 4D-CT was sensitive to image noise. Therefore, high quality 4D-CT is essential for accurate ventilation images.
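
The DSC comparison of top-50% ventilation volumes used in the study can be sketched on toy data; the ventilation values below are invented for illustration:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two equal-length binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    return 2.0 * inter / (sum(map(bool, mask_a)) + sum(map(bool, mask_b)))

def top_half(values):
    """Binary mask of the upper half of the values (top-50% volume)."""
    cut = sorted(values)[len(values) // 2]  # upper-median threshold
    return [v >= cut for v in values]

# Voxel-wise ventilation from a noise-free and a noisy 4D-CT (toy numbers)
v_clean = [5.0, 1.0, 4.0, 2.0, 6.0, 3.0]
v_noisy = [5.2, 3.6, 3.4, 2.4, 5.9, 3.1]

dsc = dice(top_half(v_clean), top_half(v_noisy))
```

A DSC of 1 means the two thresholded ventilation maps coincide exactly; added noise shuffles voxels across the threshold and lowers the score, which is the degradation the study quantifies.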

  17. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.
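
The motivation, namely that finite-difference derivatives inject extra function evaluations and numerical error into each iteration while an analytical expression costs one evaluation and is exact, can be illustrated on a toy objective (a stand-in, not the actual MoF objective):

```python
def obj(x):
    """Toy smooth objective standing in for the MoF objective function."""
    return (x - 1.3) ** 2 + 0.5 * x

def d_obj_analytic(x):
    """Exact derivative, one evaluation, no step-size parameter."""
    return 2.0 * (x - 1.3) + 0.5

def d_obj_fd(x, h=1e-5):
    """Central finite difference: two extra objective evaluations per call,
    and exact only in the h -> 0 limit (truncation plus round-off error)."""
    return (obj(x + h) - obj(x - h)) / (2.0 * h)

fd_error = abs(d_obj_fd(2.0) - d_obj_analytic(2.0))  # small but nonzero
```

In an optimization loop this error and the step-size tuning it requires are exactly what degrade the convergence rate and robustness the abstract refers to.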

  18. An assessment of Li abundances in weak-lined and classical T Tauri stars of the Taurus-Auriga association

    NASA Astrophysics Data System (ADS)

    Sestito, P.; Palla, F.; Randich, S.

    2008-09-01

    Context: Accurate measurements of lithium abundances in young low-mass stars provide an independent and reliable age diagnostic. Previous studies of nearby star forming regions have identified significant numbers of Li-depleted stars, often at levels inconsistent with the ages indicated by their luminosity. Aims: We aim to perform a new and accurate analysis of Li abundances in a sample of ~100 pre-main sequence stars in Taurus-Auriga using a homogeneous and updated set of stellar parameters and model atmospheres appropriate for the spectral types of the sample stars. Methods: We compute Li abundances using published values of the equivalent widths of the Li λ6708 Å doublet obtained from medium/high resolution spectra. Results: We find that the number of significantly Li-depleted stars in Taurus-Auriga is greatly reduced with respect to earlier results. Only 13 stars have abundances lower than the interstellar value by a factor of 5 or greater. All of them are weak-lined T Tauri stars drawn from X-ray surveys; with the exception of four stars located near the L1551 and L1489 dark clouds, all the Li-depleted stars belong to the class of dispersed low-mass stars distributed around the main sites of current star formation. If located at the distance of Taurus-Auriga, the stellar ages implied by the derived Li abundances are in the range 3-30 Myr, greater than those of the bulk of the Li-rich population, with implications for the star formation history of the region. Conclusions: In order to derive firm conclusions about the fraction of Li-depleted stars in Taurus-Auriga, Li measurements of the remaining members of the association should be obtained, in particular of the group of stars that fall in the Li-burning region of the HR diagram. Table [see full text] is only available in electronic form at http://www.aanda.org

  19. Continuous day-time time series of E-region equatorial electric fields derived from ground magnetic observatory data

    NASA Astrophysics Data System (ADS)

    Alken, P.; Chulliat, A.; Maus, S.

    2012-12-01

    The day-time eastward equatorial electric field (EEF) in the ionospheric E-region plays an important role in equatorial ionospheric dynamics. It is responsible for driving the equatorial electrojet (EEJ) current system, equatorial vertical ion drifts, and the equatorial ionization anomaly (EIA). Due to its importance, there is much interest in accurately measuring and modeling the EEF. However, there are limited sources of direct EEF measurements with full temporal and spatial coverage of the equatorial ionosphere. In this work, we propose a method of estimating a continuous day-time time series of the EEF at any longitude, provided there is a pair of ground magnetic observatories in the region which can accurately track changes in the strength of the EEJ. First, we derive a climatological unit latitudinal current profile from direct overflights of the CHAMP satellite and use delta H measurements from the ground observatory pair to determine the magnitude of the current. The time series of current profiles is then inverted for the EEF by solving the governing electrodynamic equations. While this method has previously been applied and validated in the Peruvian sector, in this work we demonstrate the method using a pair of magnetometers in Africa (Samogossoni, SAM, 0.18 degrees magnetic latitude and Tamanrasset, TAM, 11.5 degrees magnetic latitude) and validate the resulting EEF values against the CINDI ion velocity meter (IVM) instrument on the C/NOFS satellite. We find a very good 80% correlation with C/NOFS IVM measurements and a root-mean-square difference of 9 m/s in vertical drift velocity. This technique can be extended to any pair of ground observatories which can capture the day-time strength of the EEJ. We plan to apply this work to more observatory pairs around the globe and distribute real-time equatorial electric field values to the community.

  20. Parametrization and calibration of a quasi-analytical algorithm for tropical eutrophic waters

    NASA Astrophysics Data System (ADS)

    Watanabe, Fernanda; Mishra, Deepak R.; Astuti, Ike; Rodrigues, Thanan; Alcântara, Enner; Imai, Nilton N.; Barbosa, Cláudio

    2016-11-01

    The quasi-analytical algorithm (QAA) was designed to derive the inherent optical properties (IOPs) of water bodies from above-surface remote sensing reflectance (Rrs). Several variants of QAA have been developed for environments with different bio-optical characteristics. However, most variants of QAA suffer from moderate to high rates of negative IOP retrievals when applied to tropical eutrophic waters. This research aims to parametrize a QAA for tropical eutrophic water dominated by cyanobacteria. The alterations proposed in the algorithm yielded accurate absorption coefficients and chlorophyll-a (Chl-a) concentrations. The main changes were the selection of wavelengths representative of the optically relevant constituents (ORCs) and the calibration of values directly associated with the pigment and detritus plus colored dissolved organic matter (CDM) absorption coefficients. The re-parametrized QAA eliminated the retrieval of negative values commonly identified in other variants of QAA. The calibrated model generated a normalized root mean square error (NRMSE) of 21.88% and a mean absolute percentage error (MAPE) of 28.27% for at(λ), with the largest errors found at 412 nm and 620 nm. Estimated NRMSE for aCDM(λ) was 18.86% with a MAPE of 31.17%. A NRMSE of 22.94% and a MAPE of 60.08% were obtained for aφ(λ). Estimated aφ(665) and aφ(709) were used to predict Chl-a concentration. aφ(665) derived from the QAA for Barra Bonita Hydroelectric Reservoir (QAA_BBHR) was able to predict Chl-a accurately, with a NRMSE of 11.3% and a MAPE of 38.5%. The performance of the Chl-a model was comparable to some of the most widely used empirical algorithms, such as the 2-band, 3-band, and normalized difference chlorophyll index (NDCI) models. The new QAA was parametrized based on the band configurations of the MEdium Resolution Imaging Spectrometer (MERIS) and Sentinel-2A and 3A, and can be readily scaled up for spatio-temporal monitoring of IOPs in tropical waters.
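
The two error metrics reported throughout, NRMSE (RMSE normalized by the observed range) and MAPE, can be sketched as below; the sample values are invented:

```python
import math

def nrmse(pred, obs):
    """Normalized RMSE (%): RMSE divided by the observed range."""
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))
    return 100.0 * rmse / (max(obs) - min(obs))

def mape(pred, obs):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((p - o) / o) for p, o in zip(pred, obs)) / len(obs)

# Toy predicted vs observed absorption coefficients
pred = [0.9, 2.1, 3.2, 3.8]
obs  = [1.0, 2.0, 3.0, 4.0]
```

Note that other papers sometimes normalize NRMSE by the mean rather than the range, so reported percentages are only comparable when the normalization matches.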

  1. An improved 2D MoF method by using high order derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2017-11-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first order derivative has been deduced and applied in previous research. In this paper, the high order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and that the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval containing the optimal solution can be determined. Since the high order derivatives of the objective function are continuous in the target interval, iteration schemes based on high order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be updated directly without explicitly solving the volume conservation equation. The direct update further improves efficiency, especially as the number of polygon edges increases. Halley's method, which is based on the first three order derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on the quadrilateral cell and about one sixth on the decagon cell.
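
Halley's method achieves cubic convergence by using a function and its first two derivatives; in the MoF setting it is applied to the stationarity condition of the objective, hence the first three derivatives of the objective. A minimal scalar root-finding sketch on an illustrative cubic, not the MoF objective:

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: cubically convergent root finding using f, f', f''."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        dx = 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
        x -= dx
        if abs(dx) < tol:
            break
    return x

# Example: root of x^3 - 2 (the cube root of 2), starting from x = 1
root = halley(lambda x: x ** 3 - 2.0,
              lambda x: 3.0 * x ** 2,
              lambda x: 6.0 * x,
              1.0)
```

Cubic convergence roughly triples the number of correct digits per step, which is why knowing the target interval where the needed derivatives are continuous pays off so directly in iteration count.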

  2. Forest biomass change estimated from height change in interferometric SAR height models.

    PubMed

    Solberg, Svein; Næsset, Erik; Gobakken, Terje; Bollandsås, Ole-Martin

    2014-12-01

    There is a need for new satellite remote sensing methods for monitoring tropical forest carbon stocks. Advanced RADAR instruments on board satellites can contribute novel methods. RADARs can see through clouds, and by applying stereo RADAR imaging we can measure forest height and its changes. Such height changes are related to carbon stock changes in the biomass. Here we apply data from the current TanDEM-X satellite mission, where two RADAR-equipped satellites fly in close formation providing stereo imaging. We combine this with similar data acquired by one of the space shuttles in the year 2000, i.e. the so-called SRTM mission. We derive height information from a RADAR image pair using a method called interferometry. We demonstrate an approach for REDD based on interferometry data from a boreal forest in Norway. We fitted a model to the data in which above-ground biomass in the forest increases by 15 t/ha for every metre increase in the height of the RADAR echo. When the RADAR echo is at the ground the estimated biomass is zero, and when it is 20 m above the ground the estimated above-ground biomass is 300 t/ha. Using this model we obtained fairly accurate estimates of biomass changes from 2000 to 2011. For 200 m² plots we obtained an accuracy of 65 t/ha, which corresponds to 50% of the mean above-ground biomass value. We also demonstrate that this method can be applied without accurate terrain heights and without former in-situ biomass data, both of which are generally lacking in tropical countries. The gain in accuracy was marginal when we included such data in the estimation. Finally, we demonstrate that logging and other biomass changes can be accurately mapped. A biomass change map based on interferometry corresponded well to a very accurate map derived from repeated scanning with airborne laser. Satellite-based stereo imaging with advanced RADAR instruments appears to be a promising method for REDD. 
Interferometric processing of the RADAR data provides maps of forest height changes from which we can estimate temporal changes in biomass and carbon.
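
The fitted height-biomass relation is a simple proportionality (zero biomass at the ground, 15 t/ha per metre of echo height), which makes the change estimation easy to sketch; the example heights below are invented:

```python
def agb_from_insar_height(height_m, slope_t_per_ha_per_m=15.0):
    """Above-ground biomass (t/ha) from InSAR echo height: zero when the
    echo is at the ground, rising 15 t/ha per metre (the fitted model)."""
    return slope_t_per_ha_per_m * height_m

# Biomass change between two interferometric height models, e.g. the
# SRTM (2000) and TanDEM-X (2011) surfaces, is the slope times the
# height change (heights here are hypothetical)
agb_2000 = agb_from_insar_height(18.0)   # t/ha in 2000
agb_2011 = agb_from_insar_height(20.0)   # t/ha in 2011
delta_agb = agb_2011 - agb_2000          # biomass gained, t/ha
```

Because only the height *difference* enters the change estimate, a constant bias in terrain height cancels out, which is why the method works without accurate terrain models.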

  3. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
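
The interior stencil mentioned above is the standard 6th-order central difference for the first derivative. A minimal sketch follows; this is illustrative only, not the authors' full summation-by-parts operator, whose boundary closures differ:

```python
import math

# Standard 7-point, 6th-order central stencil for the first derivative,
# for offsets -3..+3 around the evaluation point
COEF = (-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60)

def d1(u, i, dx):
    """First derivative of sampled u at index i; valid for 3 <= i <= len(u)-4."""
    return sum(c * u[i + k] for c, k in zip(COEF, range(-3, 4))) / dx

# Check against a known derivative: u = sin(x), u' = cos(x)
dx = 0.1
u = [math.sin(k * dx) for k in range(7)]
approx = d1(u, 3, dx)       # derivative at x = 0.3
exact = math.cos(3 * dx)
```

The truncation error scales as dx**6, so halving the grid spacing cuts the interior error by a factor of 64; the 3rd-order boundary closures then limit the global convergence rate.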

  4. Side reactions of nitroxide-mediated polymerization: N-O versus O-C cleavage of alkoxyamines.

    PubMed

    Hodgson, Jennifer L; Roskop, Luke B; Gordon, Mark S; Lin, Ching Yeh; Coote, Michelle L

    2010-09-30

    Free energies for the homolysis of the NO-C and N-OC bonds were compared for a large number of alkoxyamines at 298 and 393 K, both in the gas phase and in toluene solution. On this basis, the scope of the N-OC homolysis side reaction in nitroxide-mediated polymerization was determined. It was found that the free energies of NO-C and N-OC homolysis are not correlated, with NO-C homolysis being more dependent upon the properties of the alkyl fragment and N-OC homolysis being more dependent upon the structure of the aminyl fragment. Acyclic alkoxyamines and those bearing the indoline functionality have lower free energies of N-OC homolysis than other cyclic alkoxyamines, with the five-membered pyrrolidine and isoindoline derivatives showing lower free energies than the six-membered piperidine derivatives. For most nitroxides, N-OC homolysis is favored over NO-C homolysis only when a heteroatom that is α to the NOC carbon center stabilizes the NO-C bond and/or the released alkyl radical is not sufficiently stabilized. As part of this work, accurate methods for the calculation of free energies for the homolysis of alkoxyamines were determined. Thermodynamic parameters accurate to within 4.5 kJ mol(-1) of experimental values were obtained using an ONIOM approximation to G3(MP2)-RAD combined with PCM solvation energies at the B3-LYP/6-31G(d) level.

  5. Technical Note: On the calculation of stopping-power ratio for stoichiometric calibration in proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ödén, Jakob; Zimmerman, Jens; Nowik, Patrik

    2015-09-15

    Purpose: The quantitative effects of assumptions made in the calculation of stopping-power ratios (SPRs) are investigated, for stoichiometric CT calibration in proton therapy. The assumptions investigated include the use of the Bethe formula without correction terms, Bragg additivity, the choice of I-value for water, and the data source for elemental I-values. Methods: The predictions of the Bethe formula for SPR (no correction terms) were validated against more sophisticated calculations using the SRIM software package for 72 human tissues. A stoichiometric calibration was then performed at our hospital. SPR was calculated for the human tissues using either the assumption of simple Bragg additivity or the Seltzer-Berger rule (as used in ICRU Reports 37 and 49). In each case, the calculation was performed twice: First, by assuming the I-value of water was an experimentally based value of 78 eV (value proposed in Errata and Addenda for ICRU Report 73) and second, by recalculating the I-value theoretically. The discrepancy between predictions using ICRU elemental I-values and the commonly used tables of Janni was also investigated. Results: Errors due to neglecting the correction terms to the Bethe formula were calculated at less than 0.1% for biological tissues. Discrepancies greater than 1%, however, were estimated due to departures from simple Bragg additivity when a fixed I-value for water was imposed. When the I-value for water was calculated in a consistent manner to that for tissue, this disagreement was substantially reduced. The difference between SPR predictions when using Janni’s or ICRU tables for I-values was up to 1.6%. Experimental data used for materials of relevance to proton therapy suggest that the ICRU-derived values provide somewhat more accurate results (root-mean-square-error: 0.8% versus 1.6%). 
Conclusions: The conclusions from this study are that (1) the Bethe formula can be safely used for SPR calculations without correction terms; (2) simple Bragg additivity can be reasonably assumed for compound materials; (3) if simple Bragg additivity is assumed, then the I-value for water should be calculated in a consistent manner to that of the tissue of interest (rather than using an experimentally derived value); (4) the ICRU Report 37 I-values may provide a better agreement with experiment than Janni’s tables.
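
The SPR calculation via the Bethe formula without correction terms can be sketched as follows. The 78 eV I-value for water is the one quoted in the abstract; the β², relative electron density and medium I-value below are illustrative placeholders:

```python
import math

ME_C2_EV = 510998.95  # electron rest energy, eV

def bethe_L(beta2, i_ev):
    """Stopping number from the Bethe formula, no correction terms:
    L = ln(2 m_e c^2 beta^2 / (I (1 - beta^2))) - beta^2."""
    return math.log(2.0 * ME_C2_EV * beta2 / (i_ev * (1.0 - beta2))) - beta2

def spr_to_water(rel_e_density, i_medium_ev, i_water_ev=78.0, beta2=0.35):
    """Stopping-power ratio medium/water for a proton of speed beta^2
    (beta^2 = 0.35 is roughly a 200+ MeV therapeutic proton)."""
    return rel_e_density * bethe_L(beta2, i_medium_ev) / bethe_L(beta2, i_water_ev)

# A soft-tissue-like medium: electron density 1.04 x water, I = 75 eV (assumed)
spr = spr_to_water(1.04, 75.0)
```

Because the stopping number enters both numerator and denominator, the SPR is dominated by the electron-density ratio, with the I-values contributing only a percent-level correction; this is why consistent I-value treatment matters at the 1% accuracy level the study targets.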

  6. The effect of spatial resolution upon cloud optical property retrievals. I - Optical thickness

    NASA Technical Reports Server (NTRS)

    Feind, Rand E.; Christopher, Sundar A.; Welch, Ronald M.

    1992-01-01

    High spectral and spatial resolution Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery is used to study the effects of spatial resolution upon fair weather cumulus cloud optical thickness retrievals. As a preprocessing step, a variation of the Gao and Goetz three-band ratio technique is used to discriminate clouds from the background. Eliminating cloud shadow pixels and using the first derivative of the histogram together allow accurate cloud edge discrimination. The data are progressively degraded from 20 m to 960 m spatial resolution. The results show that retrieved cloud area increases with decreasing spatial resolution. The results also show that there is a monotonic decrease in retrieved cloud optical thickness with decreasing spatial resolution. It is also demonstrated that the use of a single, monospectral reflectance threshold is inadequate for identifying cloud pixels in fair weather cumulus scenes, and presumably in any inhomogeneous cloud field. Cloud edges have a distribution of reflectance thresholds. The incorrect identification of cloud edges significantly impacts the accurate retrieval of cloud optical thickness values.

  7. Using iRT, a normalized retention time for more targeted measurement of peptides.

    PubMed

    Escher, Claudia; Reiter, Lukas; MacLean, Brendan; Ossola, Reto; Herzog, Franz; Chilton, John; MacCoss, Michael J; Rinner, Oliver

    2012-04-01

    Multiple reaction monitoring (MRM) has recently become the method of choice for targeted quantitative measurement of proteins using mass spectrometry. The method, however, is limited in the number of peptides that can be measured in one run. This number can be markedly increased by scheduling the acquisition if the accurate retention time (RT) of each peptide is known. Here we present iRT, an empirically derived dimensionless peptide-specific value that allows for highly accurate RT prediction. The iRT of a peptide is a fixed number relative to a standard set of reference iRT-peptides that can be transferred across laboratories and chromatographic systems. We show that iRT facilitates the setup of multiplexed experiments with acquisition windows more than four times smaller than with in silico RT predictions, resulting in improved quantification accuracy. iRTs can be determined by any laboratory and shared transparently. The iRT concept has been implemented in Skyline, the most widely used software for MRM experiments. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
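
Operationally, the iRT normalization amounts to a per-run linear regression of measured retention times onto the fixed iRT values of the reference peptides; the calibration line then predicts any peptide's RT window on that system. A sketch with hypothetical numbers (not the actual reference peptide set):

```python
def fit_irt_calibration(measured_rt, reference_irt):
    """Least-squares line mapping this run's measured RTs onto the fixed
    iRT scale, using reference peptides with known iRT values."""
    n = len(measured_rt)
    mx = sum(measured_rt) / n
    my = sum(reference_irt) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured_rt, reference_irt))
    sxx = sum((x - mx) ** 2 for x in measured_rt)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical reference peptides: RTs (min) on this system vs fixed iRTs
rts  = [10.0, 20.0, 35.0, 50.0, 65.0]
irts = [0.0, 25.0, 62.5, 100.0, 137.5]
slope, intercept = fit_irt_calibration(rts, irts)

# Predicted iRT for a peptide eluting at 42 min on this system
irt_unknown = slope * 42.0 + intercept
```

Inverting the same line converts a library iRT back into an expected RT on the local system, which is what allows the narrow scheduled acquisition windows described above.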

  8. Accurate Recovery of H i Velocity Dispersion from Radio Interferometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ianjamasimanana, R.; Blok, W. J. G. de; Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au

    2017-05-01

    Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratios. For H i observations where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.

  9. Comparative Validation of the Determination of Sofosbuvir in Pharmaceuticals by Several Inexpensive Ecofriendly Chromatographic, Electrophoretic, and Spectrophotometric Methods.

    PubMed

    El-Yazbi, Amira F

    2017-01-20

    Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection, with enhanced antiviral potency compared with earlier analogs. Nevertheless, current editions of the pharmacopeias still do not present any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.

  10. High-resolution schemes for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Harten, A.

    1982-01-01

    A class of new explicit second order accurate finite difference schemes for the computation of weak solutions of hyperbolic conservation laws is presented. These highly nonlinear schemes are obtained by applying a nonoscillatory first order accurate scheme to an appropriately modified flux function. The resulting second order accurate schemes achieve high resolution while preserving the robustness of the original nonoscillatory first order accurate scheme.
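
One standard ingredient of such high-resolution nonoscillatory schemes is a slope limiter. The minmod limiter sketched below is a common illustration of the idea (a generic limiter, not Harten's exact modified-flux construction):

```python
def minmod(a, b):
    """Minmod limiter: pick the smaller-magnitude one-sided slope when the
    two agree in sign, otherwise return zero (flattens near extrema)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Limited cell slopes for a 1D array of cell averages; boundary cells
    are given zero slope for simplicity."""
    return [0.0] + [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
                    for i in range(1, len(u) - 1)] + [0.0]

slopes = limited_slopes([0.0, 1.0, 3.0, 3.5, 2.0])
```

Setting the slope to zero where the one-sided differences disagree in sign is what suppresses the spurious oscillations near discontinuities while retaining second order accuracy in smooth regions.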

  11. Improving the Accuracy of a Heliocentric Potential (HCP) Prediction Model for the Aviation Radiation Dose

    NASA Astrophysics Data System (ADS)

    Hwang, Junga; Yoon, Kyoung-Won; Jo, Gyeongbok; Noh, Sung-Jun

    2016-12-01

    The space radiation dose over air routes, including polar routes, should be carefully considered, especially when space weather shows sudden disturbances such as coronal mass ejections (CMEs), flares, and accompanying solar energetic particle events. We recently established a heliocentric potential (HCP) prediction model for real-time operation of the CARI-6 and CARI-6M programs. Specifically, the HCP value is used as a critical input value in the CARI-6/6M programs, which estimate the aviation route dose based on the effective dose rate. The CARI-6/6M approach is the most widely used technique, and the programs can be obtained from the U.S. Federal Aviation Administration (FAA). However, HCP values are posted on the FAA official webpage with a one-month delay, which makes it difficult to obtain real-time information on the aviation route dose. To overcome this critical limitation for space weather customers, we developed an HCP prediction model based on sunspot number variations (Hwang et al. 2015). In this paper, we focus on improvements to our HCP prediction model and update it with neutron monitoring data. We found that the most accurate method to derive the HCP value involves (1) real-time daily sunspot assessments, (2) predictions of the daily HCP by our prediction algorithm, and (3) calculations of the resultant daily effective dose rate. We also derived an HCP prediction algorithm using ground neutron counts. Incorporating ground neutron count data further improved the newly developed HCP prediction model.

  12. North Atlantic Aerosol Properties for Radiative Impact Assessments. Derived from Column Closure Analyses in TARFOX and ACE-2

    NASA Technical Reports Server (NTRS)

    Russell, Philip A.; Bergstrom, Robert A.; Schmid, Beat; Livingston, John M.

    2000-01-01

    Aerosol effects on atmospheric radiative fluxes provide a forcing function that can change the climate in potentially significant ways. This aerosol radiative forcing is a major source of uncertainty in understanding the climate change of the past century and predicting future climate. To help reduce this uncertainty, the 1996 Tropospheric Aerosol Radiative Forcing Observational Experiment (TARFOX) and the 1997 Aerosol Characterization Experiment (ACE-2) measured the properties and radiative effects of aerosols over the Atlantic Ocean. Both experiments used remote and in situ measurements from aircraft and the surface, coordinated with overpasses by a variety of satellite radiometers. TARFOX focused on the urban-industrial haze plume flowing from the United States over the western Atlantic, whereas ACE-2 studied aerosols over the eastern Atlantic from both Europe and Africa. These aerosols often have a marked impact on satellite-measured radiances. However, accurate derivation of flux changes, or radiative forcing, from the satellite measured radiances or retrieved aerosol optical depths (AODs) remains a difficult challenge. Here we summarize key initial results from TARFOX and ACE-2, with a focus on closure analyses that yield aerosol microphysical models for use in improved assessments of flux changes. We show how one such model gives computed radiative flux sensitivities (dF/dAOD) that agree with values measured in TARFOX and preliminary values computed for the polluted marine boundary layer in ACE-2. A companion paper uses the model to compute aerosol-induced flux changes over the North Atlantic from AVHRR-derived AOD fields.

  13. Utility of high-resolution accurate MS to eliminate interferences in the bioanalysis of ribavirin and its phosphate metabolites.

    PubMed

    Wei, Cong; Grace, James E; Zvyaga, Tatyana A; Drexler, Dieter M

    2012-08-01

    The polar nucleoside drug ribavirin (RBV) combined with IFN-α is a front-line treatment for chronic hepatitis C virus infection. RBV acts as a prodrug and exerts its broad antiviral activity primarily through its active phosphorylated metabolite ribavirin 5′-triphosphate (RTP), and possibly also through ribavirin 5′-monophosphate (RMP). To study RBV transport, diffusion, metabolic clearance and its impact on drug-metabolizing enzymes, an LC-MS method is needed to simultaneously quantify RBV and its phosphorylated metabolites (RTP, ribavirin 5′-diphosphate and RMP). In a recombinant human UGT1A1 assay, the assay buffer components uridine and its phosphorylated derivatives are isobaric with RBV and its phosphorylated metabolites, leading to significant interference when analyzed by LC-MS in nominal-mass-resolution mode. Presented here is an LC-MS method employing LC coupled with full-scan high-resolution accurate MS analysis for the simultaneous quantitative determination of RBV, RMP, ribavirin 5′-diphosphate and RTP, differentiating RBV and its phosphorylated metabolites from uridine and its phosphorylated derivatives by accurate mass and thus avoiding interference. The developed LC-high-resolution accurate MS method allows quantitation of RBV and its phosphorylated metabolites, eliminating the interferences from uridine and its phosphorylated derivatives in recombinant human UGT1A1 assays.

  14. 78 FR 61420 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-03

    ... over-the-counter derivatives markets, promoting the prompt and accurate clearance of transactions and... CME is filing proposed rules changes that are limited to its business as a derivatives clearing... therefore provide investors with an expanded range of derivatives products for clearing. As such, the...

  15. Quantitative Studies of the Optical and UV Spectra of Galactic Early B Supergiants

    NASA Technical Reports Server (NTRS)

    Searle, S. C.; Prinja, R. K.; Massa, D.; Ryans, R.

    2008-01-01

    We undertake an optical and ultraviolet spectroscopic analysis of a sample of 20 Galactic B0-B5 supergiants of luminosity classes Ia, Ib, Iab, and II. Fundamental stellar parameters are obtained from optical diagnostics, and a critical comparison of the model predictions to observed UV spectral features is made. Methods. Fundamental parameters (e.g., T(sub eff), log L(sub *), mass-loss rates and CNO abundances) are derived for individual stars using CMFGEN, a non-LTE, line-blanketed model atmosphere code. The impact of these newly derived parameters on the Galactic B supergiant T(sub eff) scale, mass discrepancy, and wind-momentum-luminosity relation (WLR) is examined. Results. The B supergiant temperature scale derived here shows a reduction of about 1000-3000 K compared to previous results using unblanketed codes. Mass-loss rate estimates are in good agreement with predicted theoretical values, and all of the 20 B0-B5 supergiants analysed show evidence of CNO processing. A mass discrepancy still exists between spectroscopic and evolutionary masses, with the largest discrepancy occurring at log (L/L(sub solar)) approx. 5.4. The observed WLR values calculated for B0-B0.7 supergiants are higher than predicted values, whereas the reverse is true for B1-B5 supergiants. This means that the discrepancy between observed and theoretical values cannot be resolved by adopting clumped (i.e., lower) mass-loss rates as for O stars. The most surprising result is that, although CMFGEN succeeds in reproducing the optical stellar spectrum accurately, it fails to precisely reproduce key UV diagnostics, such as the N V and C IV P Cygni profiles. This problem arises because the models are not ionised enough and fail to reproduce the full extent of the observed absorption trough of the P Cygni profiles. Conclusions. Newly derived fundamental parameters for early B supergiants are in good agreement with similar work in the field.
The most significant discovery, however, is the failure of CMFGEN to predict the correct ionisation fraction for some ions. Such findings add further support to revising the current standard model of massive star winds, as our understanding of these winds is incomplete without a precise knowledge of the ionisation structure and distribution of clumping in the wind. Key words. techniques: spectroscopic - stars: mass-loss - stars: supergiants - stars: abundances - stars: atmospheres - stars: fundamental parameters

  16. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact superseding most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.

  17. Mosquito cell-derived West Nile virus replicon particles mimic arbovirus inoculum and have reduced spread in mice.

    PubMed

    Boylan, Brendan T; Moreira, Fernando R; Carlson, Tim W; Bernard, Kristen A

    2017-02-01

    Half of the human population is at risk of infection by an arthropod-borne virus. Many of these arboviruses, such as West Nile, dengue, and Zika viruses, infect humans by way of a bite from an infected mosquito. This infectious inoculum is insect cell-derived, giving the virus particles distinct qualities not present in secondary infectious virus particles produced by infected vertebrate host cells. The insect cell-derived particles differ in the glycosylation of virus structural proteins and the lipid content of the envelope, as well as their induction of cytokines. Thus, in order to accurately mimic the inoculum delivered by arthropods, arboviruses should be derived from arthropod cells. Previous studies have packaged replicon genome in mammalian cells to produce replicon particles, which undergo only one round of infection, but no studies exist packaging replicon particles in mosquito cells. Here we optimized the packaging of West Nile virus replicon genome in mosquito cells and produced replicon particles at high concentration, allowing us to mimic mosquito cell-derived viral inoculum. These particles were mature with similar genome equivalents-to-infectious units as full-length West Nile virus. We then compared the mosquito cell-derived particles to mammalian cell-derived particles in mice. Both replicon particles infected skin at the inoculation site and the draining lymph node by 3 hours post-inoculation. The mammalian cell-derived replicon particles spread from the site of inoculation to the spleen and contralateral lymph nodes significantly more than the particles derived from mosquito cells. This in vivo difference in spread of West Nile replicons in the inoculum demonstrates the importance of using arthropod cell-derived particles to model early events in arboviral infection and highlights the value of these novel arthropod cell-derived replicon particles for studying the earliest virus-host interactions for arboviruses.

  18. Gel filtration of sialoglycoproteins.

    PubMed Central

    Alhadeff, J A

    1978-01-01

    The role of sialic acid in the gel-filtration behaviour of sialoglycoproteins was investigated by using the separated isoenzymes of purified human liver alpha-L-fucosidase and several other well-known sialic acid-containing glycoproteins (fetuin, alpha1-acid glycoprotein, thyroglobulin and bovine submaxillary mucin). For each glycoprotein studied, gel filtration of its desialylated derivative gave an apparent molecular weight much lower than that expected from removal of sialic acid alone. For the lower-molecular-weight glycoproteins (fetuin and alpha1-acid glycoprotein), gel filtration of the sialylated molecules led to apparent molecular weights much larger than the known values. The data indicate that gel filtration cannot be used to accurately determine the molecular weights of at least some sialoglycoproteins. PMID:356853

  19. Thermal Conductivity of the Multicomponent Neutral Atmosphere

    NASA Astrophysics Data System (ADS)

    Pavlov, A. V.

    2017-12-01

    Approximate expressions for the thermal conductivity coefficient of the multicomponent neutral atmosphere consisting of N2, O2, O, He, and H are analyzed and evaluated for atmospheric conditions by comparing them with the coefficient given by the rigorous hydrodynamic theory. New approximations of the thermal conductivity coefficients of the simple gases N2, O2, O, He, and H are derived and used. It is shown that the modified Mason and Saxena approximation of the atmospheric thermal conductivity coefficient reproduces the atmospheric values of the rigorous hydrodynamic thermal conductivity coefficient more accurately than the approximations generally accepted in atmospheric studies. This approximation of the thermal conductivity coefficient is recommended for use in calculations of the neutral temperature of the atmosphere.

  20. Thermal Property Parameter Estimation of TPS Materials

    NASA Technical Reports Server (NTRS)

    Maddren, Jesse

    1998-01-01

    Accurate knowledge of the thermophysical properties of TPS (thermal protection system) materials is necessary for pre-flight design and post-flight data analysis. Thermal properties, such as thermal conductivity and the volumetric specific heat, can be estimated from transient temperature measurements using non-linear parameter estimation methods. Property values are derived by minimizing a functional of the differences between measured and calculated temperatures. High temperature thermal response testing of TPS materials is usually done in arc-jet or radiant heating facilities which provide a quasi one-dimensional heating environment. Last year, under the NASA-ASEE-Stanford Fellowship Program, my work focused on developing a radiant heating apparatus. This year, I have worked on increasing the fidelity of the experimental measurements, optimizing the experimental procedures and interpreting the data.

  1. ASCA Temperature Maps for Merging and Relaxed Clusters and Physics of the Cluster Gas

    NASA Technical Reports Server (NTRS)

    Markevitch, M.; Sarazin, C.; Nevalainen, J.; Vikhlinin, A.; Forman, W.

    1999-01-01

    ASCA temperature maps for several galaxy clusters undergoing strong mergers will be presented. From these maps, it is possible to estimate velocities of the colliding subclusters. I will discuss several interesting implications of these estimates for the physics of the cluster gas and the shape of the gravitational potential. I will also present temperature maps and profiles for several relaxed clusters selected for X-ray mass determination, and present the mass values derived without the assumption of isothermality. The accurate mass-temperature and luminosity-temperature relations will be discussed. This talk will review how AXAF will revolutionize X-ray astronomy through its radically better imaging and spectroscopic resolution. Examples from many fields of astrophysics will be given.

  2. Theoretical study of cathode surfaces and high-temperature superconductors

    NASA Technical Reports Server (NTRS)

    Mueller, Wolfgang

    1994-01-01

    The surface-dipole properties of model cathode surfaces have been investigated with relativistic scattered-wave cluster calculations. Work-function/coverage curves have been derived from these data by employing the depolarization model of interacting surface dipoles. Accurate values have been obtained for the minimum work functions of several low-work-function surfaces. In the series BaO on bcc W, hcp Os, and fcc Pt, BaO/Os shows a lower and BaO/Pt a higher work function than BaO/W, which is attributed to the different substrate crystal structures involved. Results are also presented on the electronic structure of the high-temperature superconductor YBa2Cu3O7, which has been investigated with fully relativistic calculations for the first time.

  3. Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation

    NASA Astrophysics Data System (ADS)

    Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo

    2014-03-01

    An option is a derivative instrument that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula generally used to determine option prices does not involve a skewness factor, and it is difficult to apply computationally because it produces oscillations for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using a Shifted Poisson model, and transform it into the form of a Linear Approximation in the complete market to reduce the oscillation. The resulting Linear Approximation formula predicts the price of an option very accurately and successfully reduces the oscillations in the calculation process.
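    For reference, the standard Black-Scholes call price that the paper modifies can be sketched as below; the shifted-Poisson skewness correction itself is not reproduced here:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Plain Black-Scholes European call price: the baseline formula
    (no skewness term) that the paper's model extends."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For an at-the-money call with S = K = 100, r = 5%, sigma = 20%, and T = 1 year, this gives a price of about 10.45.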

  4. An asymptotically consistent approximant for the equatorial bending angle of light due to Kerr black holes

    NASA Astrophysics Data System (ADS)

    Barlow, Nathaniel S.; Weinstein, Steven J.; Faber, Joshua A.

    2017-07-01

    An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21-48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations.

  5. A Model for Steering with Haptic-Force Guidance

    NASA Astrophysics Data System (ADS)

    Yang, Xing-Dong; Irani, Pourang; Boulanger, Pierre; Bischof, Walter F.

    Trajectory-based tasks are common in many applications and have been widely studied. Recently, researchers have shown that even very simple tasks, such as selecting items from cascading menus, can benefit from haptic-force guidance. Haptic guidance is also of significant value in many applications such as medical training, handwriting learning, and in applications requiring precise manipulations. There are, however, only very few guiding principles for selecting parameters that are best suited for proper force guiding. In this paper, we present a model, derived from the steering law that relates movement time to the essential components of a tunneling task in the presence of haptic-force guidance. Results of an experiment show that our model is highly accurate for predicting performance times in force-enhanced tunneling tasks.

  6. Algorithmic detectability threshold of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  7. Tmax Determined Using a Bayesian Estimation Deconvolution Algorithm Applied to Bolus Tracking Perfusion Imaging: A Digital Phantom Validation Study.

    PubMed

    Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio

    2017-01-10

    The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
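    The calibration step described above can be sketched as a least-squares fit of Tmax against TD and MTT. The data and the "true" fit constants below are synthetic placeholders, not values from the study:

```python
import numpy as np

# Hypothetical per-voxel TD and MTT from the Bayesian deconvolution,
# and reference Tmax values from an SVD algorithm (all synthetic).
TD = np.array([0.5, 1.0, 2.0, 3.5, 5.0])
MTT = np.array([4.0, 5.0, 6.5, 8.0, 9.0])
tmax_svd = 1.0 * TD + 0.5 * MTT + 0.2  # assumed "ground truth" relation

# Linear least-squares fit of Tmax ~ a*TD + b*MTT + c, as in the abstract.
A = np.column_stack([TD, MTT, np.ones_like(TD)])
(a, b, c), *_ = np.linalg.lstsq(A, tmax_svd, rcond=None)

# Bayesian-derived Tmax, computed from TD and MTT with the fitted constants.
tmax_bayes = a * TD + b * MTT + c
```

With exactly linear synthetic data the fit recovers the assumed constants, so `tmax_bayes` matches `tmax_svd`; with real voxel data the agreement would instead be judged by correlation coefficients as in the abstract.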

  8. Rapid Identification of Flavonoid Constituents Directly from PTP1B Inhibitive Extract of Raspberry (Rubus idaeus L.) Leaves by HPLC-ESI-QTOF-MS-MS.

    PubMed

    Li, Zhuan-Hong; Guo, Han; Xu, Wen-Bin; Ge, Juan; Li, Xin; Alimu, Mireguli; He, Da-Jun

    2016-01-01

    Many potential health benefits of raspberry (Rubus idaeus L.) leaves are attributed to polyphenolic compounds, especially flavonoids. In this study, the methanol extract of R. idaeus leaves showed significant protein tyrosine phosphatase-1B (PTP1B) inhibitory activity, with an IC50 value of 3.41 ± 0.01 µg mL(-1). Meanwhile, a rapid and reliable method, employing high-performance liquid chromatography coupled with electrospray ionization quadrupole time-of-flight tandem mass spectrometry, was established for structure identification of flavonoids from the PTP1B-inhibitive extract of R. idaeus leaves using accurate mass measurement and characteristic fragmentation patterns. A total of 16 flavonoids, including 4 quercetin derivatives, 2 luteolin derivatives, 8 kaempferol derivatives and 2 isorhamnetin derivatives, were identified. Compounds 3 and 4, Compounds 6 and 7, and Compounds 15 and 16 were isomers with different aglycones and different saccharides. Compounds 8, 9 and 10 were isomers with the same aglycone and the same saccharide but different substituent positions. Compounds 11 and 12 were isomers with the same aglycone but different saccharides. Compounds 2, 8, 9 and 10 possessed the same substituent saccharide of glycuronic acid. Most of them were reported in R. idaeus for the first time. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Spatial Interpolation of Rain-field Dynamic Time-Space Evolution in Hong Kong

    NASA Astrophysics Data System (ADS)

    Liu, P.; Tung, Y. K.

    2017-12-01

    Accurate and reliable measurement and prediction of the spatial and temporal distribution of rain-fields over a wide range of scales are important topics in hydrologic investigations. In this study, a geostatistical treatment of the precipitation field is adopted. To estimate the rainfall intensity over a study domain from the sample values and the spatial structure of the radar data, the cumulative distribution functions (CDFs) at all unsampled locations were estimated. Indicator Kriging (IK) was used to estimate the exceedance probabilities for different pre-selected cutoff levels, and a procedure was implemented for interpolating CDF values between the thresholds derived from the IK. Different interpolation schemes for the CDF were proposed and their influence on performance was investigated. The performance measures and a visual comparison between the observed rain-field and the IK-based estimate suggest that the proposed method provides good estimates of the indicator variables and is capable of producing realistic images.
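    The CDF-interpolation step can be sketched as follows. The cutoff levels, the IK-estimated exceedance probabilities, and the choice of linear interpolation between thresholds are all illustrative assumptions:

```python
import numpy as np

# Pre-selected cutoff rainfall intensities (mm/h) and IK-estimated
# exceedance probabilities at one unsampled location (illustrative).
cutoffs = np.array([1.0, 5.0, 10.0, 20.0])
p_exceed = np.array([0.9, 0.6, 0.3, 0.05])

# Convert exceedance probabilities to CDF values: F(z) = 1 - P(Z > z).
cdf = 1.0 - p_exceed

def cdf_at(z):
    """Linear interpolation of the CDF between thresholds; one of
    several interpolation schemes the study compares (assumed form)."""
    return float(np.interp(z, cutoffs, cdf))
```

Because IK is run per cutoff, the interpolated CDF must be checked for monotonicity in practice; the synthetic probabilities above are already order-consistent.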

  10. Neighborhood binary speckle pattern for deformation measurements insensitive to local illumination variation by digital image correlation.

    PubMed

    Zhao, Jian; Yang, Ping; Zhao, Yue

    2017-06-01

    The dependence of digital image correlation (DIC) on the speckle pattern restricts its application in engineering fields and nonlaboratory environments, since serious decorrelation occurs under sudden localized illumination variation. The simple and efficient speckle-pattern adjusting and optimizing approach presented in this paper aims to provide a novel speckle pattern robust enough to resist local illumination variation. The new speckle pattern, called the neighborhood binary speckle pattern, is derived from the original speckle pattern by thresholding the pixels of a neighborhood at its central pixel value and reading the result as a binary number. The efficiency of the proposed speckle pattern is evaluated in six experimental scenarios. The experimental results indicate that DIC measurements based on the neighborhood binary speckle pattern provide reliable and accurate results, even when the local brightness and contrast of the deformed images are seriously changed. The new speckle pattern is expected to have further potential in engineering applications.
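    A minimal sketch of the thresholding idea, assuming an 8-pixel neighborhood read clockwise into one byte; the paper's exact neighborhood size and bit ordering are not specified here:

```python
import numpy as np

def neighborhood_binary_pattern(img):
    """Encode each interior pixel by thresholding its 8 neighbors at
    the central pixel value and packing the comparison bits into one
    byte. Border pixels are left as zero for simplicity."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=np.uint8)
    # Offsets of the 8 neighbors, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = img[1 + dy: img.shape[0] - 1 + dy,
                      1 + dx: img.shape[1] - 1 + dx]
        out[1:-1, 1:-1] |= (shifted >= img[1:-1, 1:-1]).astype(np.uint8) << bit
    return out
```

Because the encoding depends only on the ordering of pixel values, any monotonic local brightness change leaves the pattern unchanged, which is the robustness property the abstract describes.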

  11. Non-algal particles spatial-temporal distribution at global scale: a first estimation from satellite data

    NASA Astrophysics Data System (ADS)

    Bellacicco, Marco; Volpe, Gianluca; Colella, Simone; Pitarch, Jaime; Brando, Vittorio; Marullo, Salvatore; Santoleri, Rosalia

    2016-04-01

    Phytoplankton, heterotrophic bacteria and viruses contribute to the definition of the trophic regime of the oceans. While phytoplankton has been extensively studied from space, satellite studies of autochthonous non-algal particles (NAP, i.e. bacteria and viruses) are relatively recent. Dedicated studies of NAP distribution and dynamics can help improve the understanding of marine ecosystem change globally. Using 18 years of GlobColour monthly satellite data, a global NAP climatology was derived from the satellite particulate backscattering coefficient (bbp). High NAP values were found in productive regions such as the polar seas, the North Atlantic and the equatorial Pacific, as well as in shelf regions affected by upwelling currents. In contrast, oligotrophic areas such as the sub-tropical gyres displayed low NAP values. The annual and seasonal distributions as well as the temporal evolution will be discussed. In the future, improved understanding of phytoplankton dynamics and physiology will benefit from accurate NAP calculations for different regions and seasons in relation to climate change studies.

  12. Application of a polarity parameter model to the separation of fat-soluble vitamins by reversed-phase HPLC.

    PubMed

    Herrero-Martínez, José Manuel; Izquierdo, Pere; Sales, Joaquim; Rosés, Martí; Bosch, Elisabeth

    2008-10-01

    The retention behavior of a series of fat-soluble vitamins has been established on the basis of a polarity retention model: log k = (log k)(0) + p (P(m)(N) - P(s)(N)), with p being the polarity of the solute, P(m)(N) the mobile-phase polarity, and (log k)(0) and P(s)(N) two parameters characterizing the stationary phase. To estimate the p-values of the solutes, two approaches have been considered. The first is based on the application of a QSPR model derived from the molecular structure of the solutes and their log P(o/w), while in the second the p-values are obtained from several experimental measurements. The quality of prediction of both approaches has also been evaluated, with the second giving more accurate results for the most lipophilic vitamins. This model allows the best conditions to be established for separating and simultaneously determining some fat-soluble vitamins in dairy foods.
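    The second, experimental route to p can be sketched as a straight-line fit of measured log k against mobile-phase polarity: in the model above, the slope is the solute polarity p and the intercept mixes the two column parameters. All numbers below are illustrative, not values from the paper:

```python
import numpy as np

# Synthetic retention data: log k measured at several mobile-phase
# polarities Pm_N (all numbers illustrative, not from the paper).
Pm = np.array([0.55, 0.60, 0.65, 0.70, 0.75])
true_p, true_log_k0, Ps = 2.0, -0.4, 0.52  # assumed solute/column values
log_k = true_log_k0 + true_p * (Pm - Ps)

# Fitting log k against Pm_N recovers the solute polarity p as the
# slope; the intercept equals (log k)0 - p * Ps_N.
slope, intercept = np.polyfit(Pm, log_k, 1)
```

With p and the column parameters in hand, log k (and hence the separation) can be predicted for any candidate mobile-phase composition.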

  13. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents measured at a series of these electrodes, and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.
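    A hedged sketch of the idea: if the transport model predicts how the limiting current scales with the mean linear velocity U at each address, the concentration follows from the array of currents without any calibrant. A Levich-type cube-root law is assumed below in place of the paper's full convective-diffusive model, and all numbers are synthetic:

```python
import numpy as np

# Mean linear velocities over the electrode array and the measured
# limiting currents (synthetic). Assumed flux law: i_lim = k * C * U**(1/3),
# with k known from the transport model; the paper's convective-diffusive
# description is more detailed than this Levich-type stand-in.
U = np.array([0.5, 1.0, 2.0, 4.0])   # cm/s
k = 3.0e-4                           # A per (M * (cm/s)**(1/3)), assumed
C_true = 2.0e-3                      # M, the "unknown" concentration
i_meas = k * C_true * U ** (1.0 / 3.0)

# Least-squares estimate of C from the whole array of currents:
# no calibrants or calibration curve, only the predicted flux law.
x = k * U ** (1.0 / 3.0)
C_est = float(np.dot(x, i_meas) / np.dot(x, x))
```

Using all addresses in one least-squares step, rather than a single electrode, is what lets the predictable velocity variation substitute for a calibration curve.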

  14. A Semiclassical Derivation of the QCD Coupling

    NASA Technical Reports Server (NTRS)

    Batchelor, David

    2009-01-01

    The measured value of the QCD coupling alpha(sub s) at the energy M(sub Zo), the variation of alpha(sub s) as a function of energy in QCD, and classical relativistic dynamics are used to investigate virtual pairs of quarks and antiquarks in vacuum fluctuations. For virtual pairs of bottom quarks and antiquarks, the pair lifetime in the classical model agrees with the lifetime from quantum mechanics to good approximation, and the action integral in the classical model agrees as well with the action that follows from the Uncertainty Principle. This suggests that the particles might have small de Broglie wavelengths and behave with well-localized pointlike dynamics. It also permits alpha(sub s) at the mass energy twice the bottom quark mass to be expressed as a simple fraction: 3/16. This is accurate to approximately 10%. The model in this paper predicts the measured value of alpha(sub s)(M(sub Zo)) to be 0.121, which is in agreement with recent measurements within statistical uncertainties.

  15. Filling the voids in the SRTM elevation model — A TIN-based delta surface approach

    NASA Astrophysics Data System (ADS)

    Luedeling, Eike; Siebert, Stefan; Buerkert, Andreas

    The Digital Elevation Model (DEM) derived from NASA's Shuttle Radar Topography Mission is the most accurate near-global elevation model that is publicly available. However, it contains many data voids, mostly in mountainous terrain. This problem is particularly severe in the rugged Oman Mountains. This study presents a method to fill these voids using a fill surface derived from Russian military maps. For this, we developed a new method based on Triangular Irregular Networks (TINs). For each void, we extracted points around the edge of the void from the SRTM DEM and the fill surface. TINs were calculated from these points and converted to a base surface for each dataset. The fill base surface was subtracted from the fill surface, and the result added to the SRTM base surface. The fill surface could then seamlessly be merged with the SRTM DEM. For validation, we compared the resulting DEM to the original SRTM surface, to the fill DEM and to a surface calculated by the International Center for Tropical Agriculture (CIAT) from the SRTM data. We calculated the differences between measured GPS positions and the respective surfaces for 187,500 points throughout the mountain range (ΔGPS). Comparison of the means and standard deviations of these values showed that for the void areas, the fill surface was most accurate, with a standard deviation of the ΔGPS from the mean ΔGPS of 69 m, and only little accuracy was lost by merging it with the SRTM surface (standard deviation of 76 m). The CIAT model was much less accurate in these areas (standard deviation of 128 m). The results show that our method is capable of transferring the relative vertical accuracy of a fill surface to the void areas in the SRTM model, without introducing uncertainties about the absolute elevation of the fill surface. It is well suited for datasets with varying altitude biases, which is a common problem of older topographic information.
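    A 1-D sketch of the delta-surface arithmetic: interpolate both datasets across the void from the valid edge points, take the fill surface's residual relative to its own base, and add that residual to the SRTM base. The paper uses 2-D TIN base surfaces; linear interpolation stands in for the TIN step here, and all elevations are synthetic:

```python
import numpy as np

# SRTM profile with a void (NaN) and a coarser fill surface with an
# unknown vertical bias (all elevations synthetic, in metres).
srtm = np.array([100.0, 102.0, np.nan, np.nan, np.nan, 110.0, 112.0])
fill = np.array([90.0, 91.0, 94.0, 95.0, 96.0, 99.0, 100.0])

void = np.isnan(srtm)
edges = np.flatnonzero(~void)
x = np.arange(srtm.size)

# "Base surfaces": each dataset interpolated across the void from its
# valid edge points (the 1-D analogue of the TIN base surfaces).
srtm_base = np.interp(x, edges, srtm[edges])
fill_base = np.interp(x, edges, fill[edges])

# Subtract the fill base from the fill surface and add the residual
# relief to the SRTM base, then merge the result into the void.
filled = srtm.copy()
filled[void] = (srtm_base + (fill - fill_base))[void]
```

Because the residual vanishes at the void edges by construction, the filled values join the surrounding SRTM data seamlessly, while the fill surface's absolute bias cancels out.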

  16. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    PubMed

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-squares methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods using computer simulations of FDG kinetics. The results show that, in situations where the classical approach fails to provide an accurate estimate of the uncertainty, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
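    The Bayesian workflow can be sketched with a toy one-compartment model. Everything below (the model form, the flat priors, the known noise level, and the random-walk Metropolis sampler) is an illustrative assumption; the paper uses a multi-compartment FDG model with a hierarchical prior on the noise standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time-activity data from a one-compartment model
# C(t) = A * exp(-k t); the paper's FDG model has more compartments.
t = np.linspace(0.5, 10.0, 20)
A_true, k_true, sigma = 5.0, 0.4, 0.05
y = A_true * np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_post(theta):
    """Log-posterior with flat priors on A, k > 0 and known noise sigma
    (the paper instead places a hierarchical prior on sigma)."""
    A, k = theta
    if A <= 0 or k <= 0:
        return -np.inf
    resid = y - A * np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampling of the posterior over (A, k).
theta = np.array([4.0, 0.3])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.05, 0.01])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

samples = np.array(samples[5000:])  # discard burn-in
A_hat, k_hat = samples.mean(axis=0)
```

The retained samples approximate the full posterior, so parameter uncertainties come from the sample spread itself rather than from a local covariance estimate as in non-linear least squares.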

  17. 3-D direct current resistivity anisotropic modelling by goal-oriented adaptive finite element methods

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong

    2018-01-01

    Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available, even for complicated models with topography, reliable numerical solvers for the anisotropic case remain an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic case. Combining goal-oriented and non-goal-oriented mesh refinements with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward-modelling problems. A synthetic anisotropic two-layer model with analytic solutions verified the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of the four solving strategies. We found that the error estimator based on the discontinuity of current density performs better than the gradient-recovery-based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators, working together with goal-oriented concepts, can offer optimal mesh density distributions and highly accurate solutions.

  18. Exact expressions and accurate approximations for the dependences of radius and index of refraction of solutions of inorganic solutes on relative humidity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, E.R.; Schwartz, S.

    2010-03-15

    Light scattering by aerosols plays an important role in Earth’s radiative balance, and quantification of this phenomenon is important in understanding and accounting for anthropogenic influences on Earth’s climate. Light scattering by an aerosol particle is determined by its radius and index of refraction, and for aerosol particles that are hygroscopic, both of these quantities vary with relative humidity RH. Here exact expressions are derived for the dependences of the radius ratio (relative to the volume-equivalent dry radius) and index of refraction on RH for aqueous solutions of single solutes. Both of these quantities depend on the apparent molal volume of the solute in solution and on the practical osmotic coefficient of the solution, which in turn depend on concentration and thus implicitly on RH. Simple but accurate approximations are also presented for the RH dependences of both radius ratio and index of refraction for several atmospherically important inorganic solutes over the entire range of RH values for which these substances can exist as solution drops. For all substances considered, the radius ratio is accurate to within a few percent, and the index of refraction to within ~0.02, over this range of RH. Such parameterizations will be useful in radiation transfer models and climate models.

  19. Finite difference elastic wave modeling with an irregular free surface using ADER scheme

    NASA Astrophysics Data System (ADS)

    Almuhaidib, Abdulaziz M.; Nafi Toksöz, M.

    2015-06-01

    In numerical modeling of seismic wave propagation in the earth, we encounter two important issues: the free surface and the topography of the surface (i.e., irregularities). In this study, we develop a 2D finite difference solver for the elastic wave equation that combines a 4th-order ADER scheme (Arbitrary high-order accuracy using DERivatives), which is widely used in aeroacoustics, with the characteristic variable method at the free surface boundary. The idea is to treat the free surface boundary explicitly by using ghost values of the solution for points beyond the free surface to impose the physical boundary condition. The method is based on the velocity-stress formulation. The ultimate goal is to develop a numerical solver for the elastic wave equation that is stable, accurate and computationally efficient. The solver treats smooth arbitrary-shaped boundaries as simple plane boundaries. The computational cost added by treating the topography is negligible compared to a flat free surface because only a small number of grid points near the boundary need to be computed. In the presence of topography, using 10 grid points per shortest shear wavelength, the solver yields accurate results. Benchmark numerical tests using several complex models that are solved by our method and other independent accurate methods show an excellent agreement, confirming the validity of the method for modeling elastic waves with an irregular free surface.

  20. 78 FR 70368 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-25

    ... transparency for over-the-counter derivatives markets, promoting the prompt and accurate clearance of... Proposed Rule Change CME is filing a proposed rule change that is limited to its business as a derivatives... the Purpose of, and Statutory Basis for, the Proposed Rule Change CME is registered as a derivatives...

  1. Time-Resolved C-Arm Computed Tomographic Angiography Derived From Computed Tomographic Perfusion Acquisition: New Capability for One-Stop-Shop Acute Ischemic Stroke Treatment in the Angiosuite.

    PubMed

    Yang, Pengfei; Niu, Kai; Wu, Yijing; Struffert, Tobias; Dorfler, Arnd; Schafer, Sebastian; Royalty, Kevin; Strother, Charles; Chen, Guang-Hong

    2015-12-01

    Multimodal imaging using cone beam C-arm computed tomography (CT) may shorten the delay from ictus to revascularization for acute ischemic stroke patients with a large vessel occlusion. Largely because of limited temporal resolution, reconstruction of time-resolved CT angiography (CTA) from these systems has not yielded satisfactory results. We evaluated the image quality and diagnostic value of time-resolved C-arm CTA reconstructed using novel image processing algorithms. Studies were done under an Institutional Review Board approved protocol. Postprocessing of data from 21 C-arm CT dynamic perfusion acquisitions from 17 patients with acute ischemic stroke was performed to derive time-resolved C-arm CTA images. Two observers independently evaluated image quality and diagnostic content for each case. ICC and receiver-operating characteristic analyses were performed to evaluate interobserver agreement and the diagnostic value of this novel imaging modality. Time-resolved C-arm CTA images were successfully generated from 20 data sets (95.2%, 20/21). The two observers agreed well that the image quality for large cerebral arteries was good but was more limited for small cerebral arteries (distal to M1, A1, and P1). Receiver-operating characteristic curves demonstrated excellent diagnostic value for detecting large vessel occlusions (area under the curve=0.987-1). Time-resolved CTAs derived from C-arm CT perfusion acquisitions provide high quality images that allowed accurate diagnosis of large vessel occlusions. Although the image quality of smaller arteries in this study was not optimal, ongoing modifications of the postprocessing algorithm will likely remove this limitation. Adding time-resolved C-arm CTAs to the capabilities of the angiography suite further enhances its suitability as a one-stop shop for the care of patients with acute ischemic stroke. © 2015 American Heart Association, Inc.

  2. Determination of In-situ Rock Thermal Properties from Geophysical Log Data of SK-2 East Borehole, Continental Scientific Drilling Project of Songliao Basin, NE China

    NASA Astrophysics Data System (ADS)

    Zou, C.; Zhao, J.; Zhang, X.; Peng, C.; Zhang, S.

    2017-12-01

    Continental Scientific Drilling Project of Songliao Basin is a drilling project under the framework of ICDP. It aims at detecting Cretaceous environmental/climate changes and exploring potential resources near or beneath the base of the basin. The main hole, SK-2 East Borehole, has been drilled to penetrate through the Cretaceous formation. A variety of geophysical log data were collected from the borehole, which provide a great opportunity to analyze the thermal properties of the in-situ rock surrounding the borehole. The geothermal gradients were derived directly from temperature logs recorded 41 days after shut-in. The matrix and bulk thermal conductivity of rock were calculated with the geometric-mean model, in which mineral/rock contents and porosity were required as inputs (Fuchs et al., 2014). Accurate mineral contents were available from the elemental capture spectroscopy logs, and porosity data were derived from conventional logs (density, neutron and sonic). The heat production data were calculated by means of the concentrations of uranium, thorium and potassium determined from natural gamma-ray spectroscopy logs. Then, the heat flow was determined by using the values of geothermal gradients and thermal conductivity. The thermal parameters of in-situ rock over the depth interval of 0–4500 m in the borehole were derived from geophysical logs. Statistically, the numerical ranges of thermal parameters are in good agreement with the values measured in both laboratory and field in this area. The results show that high geothermal gradient and heat flow exist over the whole Cretaceous formation, with anomalously high values in the Qingshankou formation (1372.0–1671.7 m) and the Quantou formation (1671.7–2533.5 m). This is meaningful for characterization of the geothermal regime and exploration of geothermal resources in the basin.
Acknowledgment: This work was supported by the "China Continental Scientific Drilling Program of Cretaceous Songliao Basin (CCSD-SK)" of China Geological Survey Projects (NO. 12120113017600).
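    The geometric-mean mixing model cited above combines component thermal conductivities weighted by volume fraction, first for the mineral matrix and then for matrix plus pore fluid. A sketch with invented mineral fractions and conductivities (not SK-2 data):

```python
# Geometric-mean mixing model for bulk thermal conductivity, as cited in
# the abstract (Fuchs et al., 2014). All numeric values are illustrative.
import math

def geometric_mean_conductivity(components):
    """components: list of (volume_fraction, conductivity in W/(m*K))."""
    assert abs(sum(f for f, _ in components) - 1.0) < 1e-9
    # lambda_bulk = product of lambda_i ** f_i over all components
    return math.prod(lam ** f for f, lam in components)

# matrix conductivity from mineral volume fractions (e.g. quartz, feldspar, clay)
matrix = geometric_mean_conductivity([(0.4, 7.7), (0.3, 2.3), (0.3, 1.9)])
# bulk conductivity: matrix plus pore water, porosity phi = 0.15
phi = 0.15
bulk = geometric_mean_conductivity([(1 - phi, matrix), (phi, 0.6)])
print(round(matrix, 2), round(bulk, 2))
```

With the gradient from the temperature logs, heat flow then follows as conductivity times gradient.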

  3. Assessment of mercury exposure among small-scale gold miners using mercury stable isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherman, Laura S., E-mail: lsaylors@umich.edu; Blum, Joel D.; Basu, Niladri

    Total mercury (Hg) concentrations in hair and urine are often used as biomarkers of exposure to fish-derived methylmercury (MeHg) and gaseous elemental Hg, respectively. We used Hg stable isotopes to assess the validity of these biomarkers among small-scale gold mining populations in Ghana and Indonesia. Urine from Ghanaian miners displayed similar Δ¹⁹⁹Hg values to Hg derived from ore deposits (mean urine Δ¹⁹⁹Hg=0.01‰, n=6). This suggests that urine total Hg concentrations accurately reflect exposure to inorganic Hg among this population. Hair samples from Ghanaian miners displayed low positive Δ¹⁹⁹Hg values (0.23–0.55‰, n=6) and low percentages of total Hg as MeHg (7.6–29%, n=7). These data suggest that the majority of the Hg in these miners' hair samples is exogenously adsorbed inorganic Hg and not fish-derived MeHg. Hair samples from Indonesian gold miners who eat fish daily displayed a wider range of positive Δ¹⁹⁹Hg values (0.21–1.32‰, n=5) and percentages of total Hg as MeHg (32–72%, n=4). This suggests that total Hg in the hair samples from Indonesian gold miners is likely a mixture of ingested fish MeHg and exogenously adsorbed inorganic Hg. Based on data from both populations, we suggest that total Hg concentrations in hair samples from small-scale gold miners likely overestimate exposure to MeHg from fish consumption. - Highlights: • Mercury isotopes were measured in hair and urine from small-scale gold miners. • Mercury isotopes indicate that Hg in urine comes from mining activity. • Mercury isotopes suggest Hg in hair is a mixture of fish MeHg and inorganic Hg. • A large percentage of Hg in miners' hair is released during amalgam burning and adsorbed.

  4. Toward better public health reporting using existing off the shelf approaches: The value of medical dictionaries in automated cancer detection using plaintext medical data.

    PubMed

    Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J

    2017-05-01

    Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Discrimination of crop types with TerraSAR-X-derived information

    NASA Astrophysics Data System (ADS)

    Sonobe, Rei; Tani, Hiroshi; Wang, Xiufeng; Kobayashi, Nobuyuki; Shimamura, Hideki

    Although classification maps are required for management and for the estimation of agricultural disaster compensation, techniques for producing them have yet to be established. This paper describes the comparison of three different classification algorithms for mapping crops in Hokkaido, Japan, using TerraSAR-X (including TanDEM-X) dual-polarimetric data. In the study area, beans, beets, grasslands, maize, potatoes and winter wheat were cultivated. In this study, classification using TerraSAR-X-derived information was performed. Coherence values, polarimetric parameters and gamma nought values were also obtained and evaluated regarding their usefulness in crop classification. Accurate classification may be possible with currently existing supervised learning models. A comparison between the classification and regression tree (CART), support vector machine (SVM) and random forests (RF) algorithms was performed. Even though J-M distances were lower than 1.0 on all TerraSAR-X acquisition days, good results were achieved (e.g., separability between winter wheat and grass) due to the characteristics of the machine learning algorithms. It was found that SVM performed best, achieving an overall accuracy of 95.0% based on the polarimetric parameters and gamma nought values for HH and VV polarizations. The misclassified fields were less than 100 a (1 a = 100 m²) in area, and 79.5-96.3% were less than 200 a, with the exception of grassland. When some feature such as a road or windbreak forest is present in the TerraSAR-X data, the ratio of its extent to that of the field is relatively higher for smaller fields, which leads to misclassifications.
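    The J-M (Jeffries-Matusita) distance mentioned above measures pairwise class separability on a 0-2 scale via the Bhattacharyya distance. A sketch for the 1-D Gaussian case with hypothetical backscatter statistics (the study applies it to multidimensional SAR features):

```python
# Jeffries-Matusita separability for two classes, 1-D Gaussian case.
# JM = 2*(1 - exp(-B)), where B is the Bhattacharyya distance; JM ranges
# 0..2, and values below ~1.0 indicate poorly separable class pairs.
import math

def jm_distance(mu1, var1, mu2, var2):
    b = ((mu1 - mu2) ** 2) / (4 * (var1 + var2)) \
        + 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2)))
    return 2 * (1 - math.exp(-b))

# hypothetical gamma-nought (dB) statistics for two crop-class pairs
well_separated = jm_distance(-6.0, 1.0, -12.0, 1.5)
overlapping = jm_distance(-8.0, 2.0, -8.5, 2.0)
print(round(well_separated, 2), round(overlapping, 2))
```

Low J-M values, as reported for these acquisitions, mean the class distributions overlap; the abstract's point is that SVM and RF can still separate such classes.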

  6. Patterns of astragalar fibular facet orientation in extant and fossil primates and their evolutionary implications.

    PubMed

    Boyer, Doug M; Seiffert, Erik R

    2013-07-01

    A laterally sloping fibular facet of the astragalus (=talus) has been proposed as one of few osteological synapomorphies of strepsirrhine primates, but the feature has never been comprehensively quantified. We describe a method for calculating fibular facet orientation on digital models of astragali as the angle between the planes of the fibular facet and the lateral tibial facet. We calculated this value in a sample that includes all major extant primate clades, a diversity of Paleogene primates, and nonprimate euarchontans (n = 304). Results show that previous characterization of a divide between extant haplorhines and strepsirrhines is accurate, with little overlap even when individual data points are considered. Fibular facet orientation is conserved in extant strepsirrhines despite major differences in locomotion and body size, while extant anthropoids are more variable (e.g., low values for catarrhines relative to non-callitrichine platyrrhines). Euprimate outgroups exhibit a mosaic of character states with Cynocephalus having a more obtuse strepsirrhine-like facet and sampled treeshrews and plesiadapiforms having more acute haplorhine-like facets. Surprisingly, the earliest species of the adapiform Cantius have steep haplorhine-like facets as well. We used a Bayesian approach to reconstruct the evolution of fibular facet orientation as a continuous character across a supertree of living and extinct primates. Mean estimates for crown Primatomorpha (97.9°), Primates (99.5°), Haplorhini (98.7°), and Strepsirrhini (108.2°) support the hypothesis that the strepsirrhine condition is derived, while lower values for crown Anthropoidea (92.8°) and Catarrhini (88.9°) are derived in the opposite direction. Copyright © 2013 Wiley Periodicals, Inc.
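    The facet-orientation metric described above is the angle between two fitted planes, which reduces to the angle between their normal vectors. A minimal sketch, with made-up normals chosen so the fibular facet slopes at a strepsirrhine-like value:

```python
# Angle between two planes, computed as the angle between their normals
# (the study fits these planes to digital astragalus models; the vectors
# here are invented for illustration).
import math

def plane_angle_deg(n1, n2):
    """Angle in degrees between two planes given their normal vectors."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    return math.degrees(math.acos(dot / norm))

lateral_tibial_facet_normal = (0.0, 0.0, 1.0)
fibular_facet_normal = (math.sin(math.radians(108)), 0.0,
                        math.cos(math.radians(108)))
angle = plane_angle_deg(lateral_tibial_facet_normal, fibular_facet_normal)
print(round(angle, 1))  # strepsirrhine-like orientation near 108 degrees
```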

  7. Does bioelectrical impedance analysis accurately estimate the condition of threatened and endangered desert fish species?

    USGS Publications Warehouse

    Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.

    2017-01-01

    Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.

  8. 3D echocardiographic analysis of aortic annulus for transcatheter aortic valve replacement using novel aortic valve quantification software: Comparison with computed tomography.

    PubMed

    Mediratta, Anuj; Addetia, Karima; Medvedofsky, Diego; Schneider, Robert J; Kruse, Eric; Shah, Atman P; Nathan, Sandeep; Paul, Jonathan D; Blair, John E; Ota, Takeyoshi; Balkhy, Husam H; Patel, Amit R; Mor-Avi, Victor; Lang, Roberto M

    2017-05-01

    With the increasing use of transcatheter aortic valve replacement (TAVR) in patients with aortic stenosis (AS), computed tomography (CT) remains the standard for annulus sizing. However, 3D transesophageal echocardiography (TEE) has been an alternative in patients with contraindications to CT. We sought to (1) test the feasibility, accuracy, and reproducibility of prototype 3DTEE analysis software (Philips) for aortic annular measurements and (2) compare the new approach to the existing echocardiographic techniques. We prospectively studied 52 patients who underwent gated contrast CT, procedural 3DTEE, and TAVR. 3DTEE images were analyzed using novel semi-automated software designed for 3D measurements of the aortic root, which uses multiplanar reconstruction, similar to CT analysis. Aortic annulus measurements included area, perimeter, and diameters derived from these values. The results were compared to CT-derived values. Additionally, existing 3D echocardiographic measurements (3D planimetry and mitral valve analysis software adapted for the aortic valve) were also compared to the CT reference values. 3DTEE image quality was sufficient in 90% of patients for aortic annulus measurements using the new software, which were in good agreement with CT (r-values: .89-.91) with small (<4%), nonsignificant inter-modality biases. Repeated measurements showed <10% measurement variability. The new 3D analysis was the most accurate and reproducible of the echocardiographic techniques evaluated. Novel semi-automated 3DTEE analysis software can accurately measure the aortic annulus in patients with severe AS undergoing TAVR, in better agreement with CT than the existing methodology. Accordingly, intra-procedural TEE could potentially replace CT in patients for whom CT carries significant risk. © 2017, Wiley Periodicals, Inc.

  9. HARDI DATA DENOISING USING VECTORIAL TOTAL VARIATION AND LOGARITHMIC BARRIER

    PubMed Central

    Kim, Yunho; Thompson, Paul M.; Vese, Luminita A.

    2010-01-01

    In this work, we wish to denoise HARDI (High Angular Resolution Diffusion Imaging) data arising in medical brain imaging. Diffusion imaging is a relatively new and powerful method to measure the three-dimensional profile of water diffusion at each point in the brain. These images can be used to reconstruct fiber directions and pathways in the living brain, providing detailed maps of fiber integrity and connectivity. HARDI data is a powerful new extension of diffusion imaging, which goes beyond the diffusion tensor imaging (DTI) model: mathematically, intensity data is given at every voxel and at any direction on the sphere. Unfortunately, HARDI data is usually highly contaminated with noise, depending on the b-value, which is a tuning parameter pre-selected to collect the data. Larger b-values help to collect more accurate information in terms of measuring diffusivity, but more noise is generated by many factors as well. So large b-values are preferred, if we can satisfactorily reduce the noise without losing the data structure. Here we propose two variational methods to denoise HARDI data. The first one directly denoises the collected data S, while the second one denoises the so-called sADC (spherical Apparent Diffusion Coefficient), a field of radial functions derived from the data. These two quantities are related by an equation of the form S = S0 exp(−b · sADC) (in the noise-free case). By applying these two different models, we will be able to determine which quantity will most accurately preserve data structure after denoising. The theoretical analysis of the proposed models is presented, together with experimental results and comparisons for denoising synthetic and real HARDI data. PMID:20802839
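    The noise-free relation quoted above can be inverted to recover sADC from the measured signal; a minimal round-trip sketch with illustrative values (variable names are assumptions, not from the paper):

```python
# Invert S = S0 * exp(-b * sADC) for sADC; noise-free round-trip check.
import math

def sadc_from_signal(s, s0, b):
    """Recover sADC from signal S, unattenuated signal S0 and b-value b."""
    return -math.log(s / s0) / b

b = 3000.0            # s/mm^2, a typical HARDI b-value
s0 = 1.0
sadc_true = 7e-4      # mm^2/s, illustrative diffusivity
s = s0 * math.exp(-b * sadc_true)
print(abs(sadc_from_signal(s, s0, b) - sadc_true) < 1e-12)  # round-trip holds
```

The logarithm is why noise handling differs between the two models: noise on S is transformed non-linearly when working with sADC.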

  10. Second order finite-difference ghost-point multigrid methods for elliptic problems with discontinuous coefficients on an arbitrary interface

    NASA Astrophysics Data System (ADS)

    Coco, Armando; Russo, Giovanni

    2018-05-01

    In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method consists of a finite-difference method on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and is second-order accurate in both the solution and its gradient. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned on grid points that are close to the interface: a real value, that represents the numerical solution on that grid point, and a ghost value, that represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficients. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative, and therefore suitable stencils must be chosen to discretize interface conditions in order to achieve second-order accuracy in the solution and its gradient. A proper treatment of the interface conditions will allow the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jumps in the coefficients: the order of accuracy, monotonicity of the errors and a good convergence factor are maintained by the scheme.

  11. An assessment of the effectiveness of a random forest classifier for land-cover classification

    NASA Astrophysics Data System (ADS)

    Rodriguez-Galiano, V. F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J. P.

    2012-01-01

    Land cover monitoring using remotely sensed data requires robust classification methods which allow for the accurate mapping of complex land cover and land use categories. Random forest (RF) is a powerful machine learning classifier that is relatively unknown in land remote sensing and has not been evaluated thoroughly by the remote sensing community compared to more conventional pattern recognition techniques. Key advantages of RF include its non-parametric nature, high classification accuracy, and capability to determine variable importance. However, the split rules for classification are unknown, so RF can be considered a "black box" type classifier. RF provides an algorithm for estimating missing values, and the flexibility to perform several types of data analysis, including regression, classification, survival analysis, and unsupervised learning. In this paper, the performance of the RF classifier for land cover classification of a complex area is explored. Evaluation was based on several criteria: mapping accuracy, and sensitivity to data set size and noise. Landsat-5 Thematic Mapper data captured in European spring and summer were used with auxiliary variables derived from a digital terrain model to classify 14 different land categories in the south of Spain. Results show that the RF algorithm yields accurate land cover classifications, with 92% overall accuracy and a Kappa index of 0.92. RF is robust to training data reduction and noise because significant differences in kappa values were observed only for data reduction and noise addition values greater than 50% and 20%, respectively. Additionally, the variables that RF identified as most important for classifying land cover coincided with expectations. A McNemar test indicates an overall better performance of the random forest model over a single decision tree at the 0.00001 significance level.
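    The reported accuracy figures (92% overall, Kappa index 0.92) are both derived from a confusion matrix; a minimal sketch of the two metrics using an invented 2x2 matrix (the study uses 14 classes):

```python
# Overall accuracy and Cohen's kappa from a confusion matrix
# (rows = reference classes, columns = predicted classes; values invented).

def overall_accuracy(cm):
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def cohens_kappa(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(n)) / total          # observed agreement
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm)           # chance agreement
             for i in range(n)) / total ** 2
    return (po - pe) / (1 - pe)

cm = [[45, 5],
      [5, 45]]
print(overall_accuracy(cm), round(cohens_kappa(cm), 2))  # 0.9 0.8
```

Kappa discounts agreement expected by chance, which is why it is reported alongside overall accuracy for imbalanced class maps.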

  12. Can Value-Added Measures of Teacher Performance Be Trusted?

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.

    2015-01-01

    We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures…

  13. The energy balance within a bubble column evaporator

    NASA Astrophysics Data System (ADS)

    Fan, Chao; Shahid, Muhammad; Pashley, Richard M.

    2018-05-01

    Bubble column evaporator (BCE) systems have been studied and developed for many applications, such as thermal desalination, sterilization, evaporative cooling and controlled precipitation. The heat supplied by warm/hot dry bubbles vaporizes water in various salt solutions until the solution temperature reaches a steady state, from which the energy balance of the BCE was derived. The energy balance and utilization involved in each BCE process form the fundamental theory of these applications. More importantly, it opened a new field for the study of thermodynamics in the form of heat and vapor transfer in bubbles. In this paper, the originally derived energy balance is reviewed on the basis of its physics in the BCE process and compared with newly proposed energy balance equations in terms of the enthalpy of vaporization (ΔHvap) values of salt solutions obtained from BCE experiments. Based on the analysis of the derivation and the comparison of ΔHvap values, it is demonstrated that the original balance equation has high accuracy and precision, within 2% over 19-55 °C using improved systems. The experimental and theoretical techniques used for determining ΔHvap values of salt solutions are also reviewed with respect to operating conditions and their accuracies relative to literature data. The BCE method, as one of the simplest and most accurate techniques, offers a novel way to determine ΔHvap values of salt solutions based on its energy balance equation, with an error of less than 3%. The thermal energy required to heat the inlet gas, the energy used for water evaporation in the BCE and the energy recovered from water vapor condensation were estimated in an overall energy balance analysis. The good agreement observed between input and potential vapor condensation energy illustrates the efficiency of the BCE system.
    Typical energy consumption levels for thermal desalination producing pure water with the BCE process were also analyzed for different inlet air temperatures, indicating a better energy efficiency, 7.55 kW·h per m³ of pure water, than traditional thermal desalination techniques.

  14. An Aerosol Extinction-to-Backscatter Ratio Database Derived from the NASA Micro-Pulse Lidar Network: Applications for Space-based Lidar Observations

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Spinhirne, James D.; Berkoff, Timothy A.; Holben, Brent; Tsay, Si-Chee; Bucholtz, Anthony

    2004-01-01

    Backscatter lidar signals are a function of both backscatter and extinction. Hence, these lidar observations alone cannot separate the two quantities. The aerosol extinction-to-backscatter ratio, S, is the key parameter required to accurately retrieve extinction and optical depth from backscatter lidar observations of aerosol layers. S is commonly defined as 4π divided by the product of the single-scatter albedo and the phase function at a 180-degree scattering angle. Values of S for different aerosol types are not well known, and are even more difficult to determine when aerosols become mixed. Here we present a new lidar-sunphotometer S database derived from observations of the NASA Micro-Pulse Lidar Network (MPLNET). MPLNET is a growing worldwide network of eye-safe backscatter lidars co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Values of S for different aerosol species and geographic regions will be presented. A framework for constructing an S look-up table will be shown. Look-up tables of S are needed to calculate aerosol extinction and optical depth from space-based lidar observations in the absence of co-located AOD data. Applications for using the new S look-up table to reprocess aerosol products from NASA's Geoscience Laser Altimeter System (GLAS) will be discussed.
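    The definition quoted above, S = 4π / (ω₀ · P(180°)), and its role in converting a retrieved backscatter coefficient β into extinction σ = S·β, can be sketched as follows; the aerosol numbers are illustrative, not MPLNET values:

```python
# Lidar (extinction-to-backscatter) ratio and its use in retrieving
# extinction from backscatter. Input values below are illustrative.
import math

def lidar_ratio(single_scatter_albedo, phase_function_180):
    """S = 4*pi / (omega0 * P(180 deg)), in steradians."""
    return 4 * math.pi / (single_scatter_albedo * phase_function_180)

# hypothetical aerosol: omega0 = 0.9, P(180 deg) = 0.25 sr^-1
S = lidar_ratio(0.9, 0.25)
beta = 1.2e-3                 # backscatter coefficient, km^-1 sr^-1
extinction = S * beta         # extinction coefficient, km^-1
print(round(S, 1), round(extinction, 4))
```

Integrating the extinction profile over altitude then gives the aerosol optical depth, which is why a reliable S look-up table substitutes for co-located AOD data.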

  15. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  16. Errors in finite-difference computations on curvilinear coordinate systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. W.; Thompson, J. F.

    1980-01-01

    Curvilinear coordinate systems were used extensively to solve partial differential equations on arbitrary regions. An analysis of truncation error in the computation of derivatives revealed why numerical results may be erroneous. A more accurate method of computing derivatives is presented.

  17. Method to determine the position-dependent metal correction factor for dose-rate equivalent laser testing of semiconductor devices

    DOEpatents

    Horn, Kevin M.

    2013-07-09

    A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.

  18. Estimation of Satellite-Based SO42- and NH4+ Composition of Ambient Fine Particulate Matter Over China Using Chemical Transport Model

    NASA Astrophysics Data System (ADS)

    Si, Y.; Li, S.; Chen, L.; Yu, C.; Zhu, W.

    2018-04-01

    Epidemiologic and health impact studies have examined the chemical composition of ambient PM2.5 in China but have been constrained by the paucity of long-term ground measurements. Using the GEOS-Chem chemical transport model and satellite-derived PM2.5 data, sulfate and ammonium levels were estimated over China from 2004 to 2014. A comparison of the satellite-estimated dataset with model simulations based on ground measurements obtained from the literature indicated that our results are more accurate. Using satellite-derived PM2.5 data with a spatial resolution of 0.1° × 0.1°, we further presented finer satellite-estimated sulfate and ammonium concentrations in anthropogenically polluted regions, including the NCP (the North China Plain), the SCB (the Sichuan Basin) and the PRD (the Pearl River Delta). Linear regression results obtained on a national scale yielded an r value of 0.62, NMB of -35.9%, NME of 48.2% and ARB_50% of 53.68% for sulfate, and an r value of 0.63, slope of 0.67 and intercept of 5.14 for ammonium. In typical regions, the satellite-derived dataset was highly robust. Based on the satellite-derived dataset, the spatial-temporal variation of 11-year annual average satellite-derived SO42- and NH4+ concentrations and the time series of monthly average concentrations were also investigated. On a national scale, both exhibited a downward trend each year between 2004 and 2014 (SO42-: -0.61%; NH4+: -0.21%); large values were mainly concentrated in the NCP and SCB. For regions captured at a finer resolution, the inter-annual variation presented a positive trend over the periods 2004-2007 and 2008-2011, followed by a negative trend over the period 2012-2014, and sulfate concentrations varied appreciably. Moreover, the seasonal distributions of the 11-year satellite-derived dataset over China were presented.
The distribution of both sulfate and ammonium concentrations exhibited seasonal characteristics, with the seasonal concentrations ranking as follows: winter > summer > autumn > spring. High concentrations of these species were concentrated in the NCP and SCB, originating from coal-fired power plants and agricultural activities, respectively. Efforts to reduce sulfur dioxide (SO2) emissions have yielded remarkable results since the government has adopted stricter control measures in recent years. Moreover, ammonia emissions should be controlled while reducing the concentration of sulfur, nitrogen and particulate matter. This study provides an assessment of the population's exposure to certain chemical components.
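The normalized metrics quoted in the regression results above (NMB, NME) are standard model-evaluation statistics. A minimal sketch of their usual definitions, with made-up model and observation values rather than any data from this study:

```python
def nmb(model, obs):
    """Normalized mean bias: sum(model - obs) / sum(obs)."""
    return sum(m - o for m, o in zip(model, obs)) / sum(obs)

def nme(model, obs):
    """Normalized mean error: sum(|model - obs|) / sum(obs)."""
    return sum(abs(m - o) for m, o in zip(model, obs)) / sum(obs)

# Hypothetical concentrations (ug/m^3); the model underestimates throughout:
model = [8.0, 12.0, 9.0]
obs = [10.0, 15.0, 11.0]
print(f"NMB = {nmb(model, obs):+.1%}, NME = {nme(model, obs):.1%}")
# -> NMB = -19.4%, NME = 19.4%
```

A negative NMB, as reported for sulfate (-35.9%), indicates systematic underestimation relative to the ground-based reference.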

  19. Improving Photometric Redshifts for Hyper Suprime-Cam

    NASA Astrophysics Data System (ADS)

    Speagle, Josh S.; Leauthaud, Alexie; Eisenstein, Daniel; Bundy, Kevin; Capak, Peter L.; Leistedt, Boris; Masters, Daniel C.; Mortlock, Daniel; Peiris, Hiranya; HSC Photo-z Team; HSC Weak Lensing Team

    2017-01-01

    Deriving accurate photometric redshift (photo-z) probability distribution functions (PDFs) is a crucial science component for current and upcoming large-scale surveys. We outline how rigorous Bayesian inference and machine learning can be combined to quickly derive joint photo-z PDFs for individual galaxies and their parent populations. Using the first 170 deg^2 of data from the ongoing Hyper Suprime-Cam survey, we demonstrate that our method generates accurate predictions and reliable credible intervals over ~370k high-quality redshifts. We then use galaxy-galaxy lensing to empirically validate our predicted photo-z's over ~14M objects, finding a robust signal.

  20. Shape design sensitivity analysis using domain information

    NASA Technical Reports Server (NTRS)

    Seong, Hwal-Gyeong; Choi, Kyung K.

    1985-01-01

    A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.

  1. Carnivore specific bone bioapatite and collagen carbon isotope fractionations: Case studies of modern and fossil grey wolf populations

    NASA Astrophysics Data System (ADS)

    Fox-Dobbs, K.; Wheatley, P. V.; Koch, P. L.

    2006-12-01

    Stable isotope analyses of modern and fossil biogenic tissues are routinely used to reconstruct present and past vertebrate foodwebs. Accurate isotopic dietary reconstructions require a consumer and tissue specific understanding of how isotopes are sorted, or fractionated, between trophic levels. In this project we address the need for carnivore specific isotope variables derived from populations that are ecologically well- characterized. Specifically, we investigate the trophic difference in carbon isotope values between mammalian carnivore (wolf) bone bioapatite and herbivore (prey) bone bioapatite. We also compare bone bioapatite and collagen carbon isotope values collected from the same individuals. We analyzed bone specimens from two modern North American grey wolf (Canis lupus) populations (Isle Royale National Park, Michigan and Yellowstone National Park, Wyoming), and the ungulate herbivores that are their primary prey (moose and elk, respectively). Because the diets of both wolf populations are essentially restricted to a single prey species, there were no confounding effects due to carnivore diet variability. We measured a trophic difference of approximately -1.3 permil between carnivore (lower value) and herbivore (higher value) bone bioapatite carbon isotope values, and an average inter-tissue difference of 5.1 permil between carnivore bone collagen (lower value) and bioapatite (higher value) carbon isotope values. Both of these isotopic differences differ from previous estimates derived from a suite of African carnivores; our carnivore-herbivore bone bioapatite carbon isotope spacing is smaller (-1.3 vs. -4.0 permil), and our carnivore collagen-bioapatite carbon difference is larger (5.1 vs. 3.0 permil). These discrepancies likely result from comparing values measured from a single hypercarnivore (wolf) to average values calculated from several carnivore species, some of which are insectivorous or partly omnivorous. 
The trophic and inter-tissue differences we measured for wolves are applicable to future isotopic studies of consumers with purely carnivorous diets. For example, we collected bone bioapatite and collagen carbon isotope data from late Pleistocene grey wolf fossils from eastern Beringia (Fairbanks, Alaska), and used the modern inter-tissue difference presented here to verify bioapatite preservation. We then compared the wolves to herbivores (horse and caribou) from the same locality, and found the difference in their bone bioapatite carbon isotope values corresponded to the modern carnivore-herbivore trophic spacing given above. We therefore were able to conclude that horse and caribou were part of Beringian wolf diet.

  2. Wave-Ice Interaction and the Marginal Ice Zone

    DTIC Science & Technology

    2013-09-30

    concept, using a high-quality attitude and heading reference system (AHRS) together with an accurate twin-antennae GPS compass. The instruments logged...the AHRS parameters at 50 Hz, together with GPS-derived fixes, heading (accurate to better than 1°) and velocities at 10 Hz. The 30 MB hourly files

  3. Skin temperature over the carotid artery provides an accurate noninvasive estimation of core temperature in infants and young children during general anesthesia.

    PubMed

    Jay, Ollie; Molgat-Seon, Yannick; Chou, Shirley; Murto, Kimmo

    2013-12-01

    The accurate measurement of core temperature is an essential aspect of intraoperative management in children. Invasive measurement sites are accurate but carry some health risks and cannot be used in certain patients. An accurate form of noninvasive thermometry is therefore needed. Our aim was to develop, and subsequently validate, separate models for estimating core temperature using different skin temperatures with an individualized correction factor. Forty-eight pediatric patients (0-36 months) undergoing elective surgery were separated into a modeling group (MG, n = 28) and a validation group (VG, n = 20). Skin temperature was measured over the carotid artery (Tsk_carotid), upper abdomen (Tsk_abd), and axilla (Tsk_axilla), while nasopharyngeal temperature (Tnaso) was measured as a reference. In the MG, the derived models for estimating Tnaso were: Tsk_carotid + 0.52; Tsk_abd + (0.076[body mass] + 0.02); and Tsk_axilla + (0.081[body mass] - 0.66). After adjusting raw Tsk_carotid, Tsk_abd, and Tsk_axilla values in the independent VG using these models, the mean bias (predicted Tnaso - actual Tnaso [with 95% confidence intervals]) was +0.03[+0.53, -0.50]°C, -0.05[+1.02, -1.07]°C, and -0.06[+1.21, -1.28]°C, respectively. The percentage of values within ±0.5°C of Tnaso was 93.2%, 75.4%, and 66.1% for Tsk_carotid, Tsk_abd, and Tsk_axilla, respectively. Sensitivity and specificity for detecting hypothermia (Tnaso < 36.0°C) were 0.88 and 0.91 for Tsk_carotid, 0.61 and 0.76 for Tsk_abd, and 0.91 and 0.73 for Tsk_axilla. Goodness-of-fit (R2) relative to the line of identity was 0.74 (Tsk_carotid), 0.34 (Tsk_abd), and 0.15 (Tsk_axilla). Skin temperature over the carotid artery, with a simple correction factor of +0.52°C, provides a viable noninvasive estimate of Tnaso in young children during elective surgery with a general anesthetic. © 2013 John Wiley & Sons Ltd.
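The three correction models quoted in the abstract are simple enough to apply directly. The sketch below encodes them exactly as stated (temperatures in °C; treating body mass as kilograms is an assumption, since the abstract does not state the unit); it is purely illustrative, not a clinical tool:

```python
def predict_tnaso_carotid(t_sk: float) -> float:
    """Carotid model from the abstract: Tnaso ~ Tsk_carotid + 0.52 degC."""
    return t_sk + 0.52

def predict_tnaso_abdomen(t_sk: float, mass_kg: float) -> float:
    """Abdominal model: Tnaso ~ Tsk_abd + (0.076*[body mass] + 0.02)."""
    return t_sk + (0.076 * mass_kg + 0.02)

def predict_tnaso_axilla(t_sk: float, mass_kg: float) -> float:
    """Axillary model: Tnaso ~ Tsk_axilla + (0.081*[body mass] - 0.66)."""
    return t_sk + (0.081 * mass_kg - 0.66)

# Hypothetical 10 kg infant with carotid skin temperature 35.6 degC:
print(round(predict_tnaso_carotid(35.6), 2))  # 36.12
```

Note that only the carotid model needs no body-mass term, which is part of why it validated best in the VG.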

  4. Total Acid Value Titration of Hydrotreated Biomass Fast Pyrolysis Oil: Determination of Carboxylic Acids and Phenolics with Multiple End-Point Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, E.; Alleman, T. L.; McCormick, R. L.

    Total acid value titration has long been used to estimate the corrosive potential of petroleum crude oil and fuel oil products. The method commonly used for this measurement, ASTM D664, utilizes KOH in isopropanol as the titrant with potentiometric end-point determination by a pH-sensing electrode and an Ag/AgCl reference electrode with LiCl electrolyte. A natural application of the D664 method is titration of pyrolysis-derived bio-oil, which is a candidate for refinery upgrading to produce drop-in fuels. Determining the total acid value of pyrolysis-derived bio-oil has proven challenging and not necessarily amenable to the methodology employed for petroleum products due to the different nature of the acids present. We presented an acid value titration for bio-oil products in our previous publication which also utilizes potentiometry, using tetrabutylammonium hydroxide in place of KOH as the titrant and tetraethylammonium bromide in place of LiCl as the reference electrolyte to improve the detection of these types of acids. This method was shown to detect numerous end points in samples of bio-oil that were not detected by D664. These end points were attributed to carboxylic acids and phenolics based on the results of HPLC and GC-MS studies. Additional work has led to refinement of the method, and it has been established that both carboxylic acids and phenolics can be determined accurately. Use of pH buffer calibration to determine half-neutralization potentials of acids, in conjunction with the analysis of model compounds, has allowed us to conclude that this titration method is suitable for the determination of the total acid value of pyrolysis oil and can be used to differentiate and quantify weak acid species. The measurement of phenolics in bio-oil is subject to a relatively high limit of detection, which may limit the utility of titrimetric methodology for characterizing the acidic potential of pyrolysis oil and products.

  5. Temperature dependence of Henry's law constants and KOA for simple and heteroatom-substituted PAHs by COSMO-RS

    NASA Astrophysics Data System (ADS)

    Parnis, J. Mark; Mackay, Donald; Harner, Tom

    2015-06-01

    Henry's Law constants (H) and octanol-air partition coefficients (KOA) for polycyclic aromatic hydrocarbons (PAHs) and selected nitrogen-, oxygen- and sulfur-containing derivatives have been computed using the COSMO-RS method between -5 and 40 °C in 5 °C intervals. The accuracy of the estimation was assessed by comparison of COSMOtherm values with published experimental temperature-dependence data for these and similar PAHs. COSMOtherm log H estimates with temperature variation for parent PAHs are shown to have a root-mean-square (RMS) error of 0.38, based on available validation data. Estimates of O-, N- and S-substituted derivative log H values are found to have RMS errors of 0.30 at 25 °C. Log KOA estimates with temperature variation from COSMOtherm are shown to be strongly correlated with experimental values for a small set of unsubstituted PAHs, but with a systematic underestimation and an associated RMS error of 1.11. A similar RMS error of 1.64 was found for COSMO-RS estimates of a group of critically evaluated log KOA values at room temperature. Validation demonstrates that COSMOtherm estimates of H and KOA are of sufficient accuracy to be used for property screening and preliminary environmental risk assessment, and perform very well for modeling the influence of temperature on partitioning behavior in the temperature range -5 to 40 °C. Temperature-dependent shifts of up to 2 log units in log H and one log unit in log KOA are predicted for PAH species over the range -5 to 40 °C. Within the family of PAH molecules, COSMO-RS is sufficiently accurate to make it useful as a source of estimates for modeling purposes, following corrections for the systematic underestimation of KOA. Average changes in the values of log H and log KOA upon substitution are given for various PAH substituent categories, with the most significant shifts being associated with the ionizing nitro functionality and keto groups.

  6. The Araucaria Project: The Distance to the Local Group Galaxy NGC 6822 from Cepheid Variables Discovered in a Wide-Field Imaging Survey

    NASA Astrophysics Data System (ADS)

    Pietrzyński, Grzegorz; Gieren, Wolfgang; Udalski, Andrzej; Bresolin, Fabio; Kudritzki, Rolf-Peter; Soszyński, Igor; Szymański, Michał; Kubiak, Marcin

    2004-12-01

    We have obtained mosaic images of NGC 6822 in the V and I bands on 77 nights. From these data, we have conducted an extensive search for Cepheid variables over the entire field of the galaxy, and we have found 116 such variables with periods ranging from 1.7 to 124 days. We used the long-period (>5.6 days) Cepheids to establish the period-luminosity (PL) relations in V, I, and in the reddening-independent Wesenheit index, which are all very tightly defined. Fitting the OGLE LMC slopes in the various bands to our data, we have derived distance values for NGC 6822 in V, I, and WI, which agree very well among themselves. Our adopted best distance modulus, from the reddening-free Wesenheit index, is 23.34+/-0.04 (statistical) +/-0.05 (systematic) mag. This value agrees within the combined 1σ uncertainties with a previous distance value derived for NGC 6822 by McAlary and coworkers from near-IR photometry of nine Cepheids, but our new value is significantly more accurate. We compare the slopes of the Cepheid PL relation in V and I as determined in the five best-observed nearby galaxies, which span a metallicity range from -1.0 to -0.3 dex, and find the data consistent with no metallicity dependence of the PL relation slope in this range. Comparing the magnitudes of 10 day Cepheids with the I-band magnitudes of the tip of the red giant branch in the same set of galaxies, there is also no evidence for a significant variation of the PL zero points in V and I. The available data limit such a zero-point variation to less than 0.03 mag in the considered low-metallicity regime. Based on observations obtained with the 1.3 m telescope at the Las Campanas Observatory.
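A distance modulus converts to a physical distance through the standard relation d = 10^((mu + 5)/5) pc. A minimal sketch applying it to the adopted value of 23.34 mag:

```python
def distance_from_modulus(mu: float) -> float:
    """Distance in parsecs from distance modulus mu = 5*log10(d / 10 pc)."""
    return 10.0 ** ((mu + 5.0) / 5.0)

d_pc = distance_from_modulus(23.34)
print(f"NGC 6822: ~{d_pc / 1e3:.0f} kpc")  # -> NGC 6822: ~466 kpc
```

The quoted +/-0.04 mag statistical uncertainty translates to roughly a 2% uncertainty in distance, since a small error dmu shifts d by a factor 10^(dmu/5).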

  7. A novel AIF tracking method and comparison of DCE-MRI parameters using individual and population-based AIFs in human breast cancer

    NASA Astrophysics Data System (ADS)

    Li, Xia; Welch, E. Brian; Arlinghaus, Lori R.; Bapsi Chakravarthy, A.; Xu, Lei; Farley, Jaime; Loveless, Mary E.; Mayer, Ingrid A.; Kelley, Mark C.; Meszoely, Ingrid M.; Means-Powell, Julie A.; Abramson, Vandana G.; Grau, Ana M.; Gore, John C.; Yankeelov, Thomas E.

    2011-09-01

    Quantitative analysis of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data requires the accurate determination of the arterial input function (AIF). A novel method for obtaining the AIF is presented here, and pharmacokinetic parameters derived from individual and population-based AIFs are then compared. A Philips 3.0 T Achieva MR scanner was used to obtain 20 DCE-MRI data sets from ten breast cancer patients prior to and after one cycle of chemotherapy. Using a semi-automated method to estimate the AIF from the axillary artery, we obtain the AIF for each patient, AIFind, and compute a population-averaged AIF, AIFpop. The extended standard model is used to estimate the physiological parameters using the two types of AIFs. The mean concordance correlation coefficient (CCC) for the AIFs segmented manually and by the proposed AIF tracking approach is 0.96, indicating that accurate and automatic tracking of an AIF in DCE-MRI data of the breast is possible. Regarding the kinetic parameters, the CCC values for Ktrans, vp and ve as estimated by AIFind and AIFpop are 0.65, 0.74 and 0.31, respectively, based on the region of interest analysis. The average CCC values for the voxel-by-voxel analysis are 0.76, 0.84 and 0.68 for Ktrans, vp and ve, respectively. This work indicates that Ktrans and vp show good agreement between AIFpop and AIFind, while the agreement for ve is weak.
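The agreement statistic used throughout this abstract is typically Lin's concordance correlation coefficient. The abstract does not spell out the exact estimator, so the sketch below uses the common population-moment form as an assumption:

```python
from statistics import mean

def ccc(x, y):
    """Lin's concordance correlation coefficient (population moments):
    CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x) / len(x)
    syy = sum((yi - my) ** 2 for yi in y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / len(x)
    return 2.0 * sxy / (sxx + syy + (mx - my) ** 2)

# Perfect agreement gives CCC = 1; a constant offset penalizes it:
print(ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
```

Unlike Pearson's r, the CCC penalizes both scale and location differences, which is why it is preferred for comparing two measurement methods such as manual versus automated AIF segmentation.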

  8. Determination of the steam volume fraction in the event of loss of cooling of the spent fuel storage pool

    NASA Astrophysics Data System (ADS)

    Sledkov, R. M.; Galkin, I. Yu.; Stepanov, O. E.; Strebnev, N. A.

    2017-01-01

    When one solves engineering problems related to the cooling of fuel assemblies (FAs) in a spent fuel storage pool (SFSP) and the assessment of nuclear safety of FA storage in an SFSP in the initial event of loss of SFSP cooling, it is essential to determine the coolant density and, consequently, the steam volume fractions φ in bundles of fuel elements at a pressure of 0.1-0.5 MPa. Formulas for calculating φ that remain valid over a wide range of operating parameters and channel geometries, and that account for the conditions of loss of SFSP cooling, are currently scarce. The results of systematization and analysis of the available formulas for φ are reported in the present study. The calculated values were compared with the experimental data obtained in the process of simulating the conditions of FA cooling in an SFSP in the event of loss of its cooling. Six formulas for calculating the steam volume fraction, which were used in this comparison, were chosen from a total of 11 considered relations. As a result, the formulas producing the most accurate values of φ in the conditions of loss of SFSP cooling were selected. In addition, a relation that allows one to perform more accurate calculations of steam volume fractions in the conditions of loss of SFSP cooling was derived based on the Fedorov formula in the two-group approximation.

  9. Development of a coupled level set and immersed boundary method for predicting dam break flows

    NASA Astrophysics Data System (ADS)

    Yu, C. H.; Sheu, Tony W. H.

    2017-12-01

    Dam-break flow over an immersed stationary object is investigated using a coupled level set (LS)/immersed boundary (IB) method developed in Cartesian grids. This approach adopts an improved interface-preserving level set method, which includes three solution steps, and the differential-based interpolation immersed boundary method to treat fluid-fluid and solid-fluid interfaces, respectively. In the first step of this level set method, the level set function φ is advected by a pure advection equation. The intermediate step is performed to obtain a new level set value through a new smoothed Heaviside function. In the final solution step, a mass correction term is added to the re-initialization equation to ensure that the new level set is a distance function and to conserve the mass bounded by the interface. To calculate the level set value accurately, the four-point upwinding combined compact difference (UCCD) scheme with a three-point boundary combined compact difference scheme is applied to approximate the first-order derivative term in the level set equation. For the immersed boundary method, application of the artificial momentum forcing term at points in cells consisting of both fluid and solid allows an imposition of the velocity condition to account for the presence of the solid object. The incompressible Navier-Stokes solutions are calculated using the projection method. Numerical results show that the coupled LS/IB method not only predicts the interface accurately but also preserves mass conservation excellently for the dam-break flow.
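The abstract does not give the paper's particular smoothed Heaviside function, but a generic textbook choice illustrates the role such a function plays in regularizing the jump across the interface over a half-width ε (a hypothetical parameter here):

```python
import math

def smoothed_heaviside(phi: float, eps: float) -> float:
    """A common smoothed Heaviside used in level set methods (a generic
    choice, not necessarily the form used by Yu & Sheu):
    H = 0 for phi < -eps, 1 for phi > eps, smooth ramp in between."""
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)

# On the interface (phi = 0) the smoothed function takes the value 0.5:
print(smoothed_heaviside(0.0, 0.1))  # 0.5
```

Smearing the Heaviside over a band of width 2ε keeps density and viscosity continuous across the fluid-fluid interface, which stabilizes the projection-method pressure solve.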

  10. Photolysis Rate Coefficient Calculations in Support of SOLVE II

    NASA Technical Reports Server (NTRS)

    Swartz, William H.

    2005-01-01

    A quantitative understanding of photolysis rate coefficients (or "j-values") is essential to determining the photochemical reaction rates that define ozone loss and other crucial processes in the atmosphere. j-Values can be calculated with radiative transfer models, derived from actinic flux observations, or inferred from trace gas measurements. The primary objective of the present effort was the accurate calculation of j-values in the Arctic twilight along NASA DC-8 flight tracks during the second SAGE III Ozone Loss and Validation Experiment (SOLVE II), based in Kiruna, Sweden (68 degrees N, 20 degrees E) during January-February 2003. The JHU/APL radiative transfer model was utilized to produce a large suite of j-values for photolysis processes (over 70 reactions) relevant to the upper troposphere and lower stratosphere. The calculations take into account the actual changes in ozone abundance and apparent albedo of clouds and the Earth surface along the aircraft flight tracks as observed by in situ and remote sensing platforms (e.g., EP-TOMS). A secondary objective was to analyze solar irradiance data from NCAR's Direct beam Irradiance Atmospheric Spectrometer (DIAS) on-board the NASA DC-8 and to start the development of a flexible, multi-species spectral fitting technique for the independent retrieval of O3, O2·O2, and aerosol optical properties.

  11. An accurate and efficient method to predict the electronic excitation energies of BODIPY fluorescent dyes.

    PubMed

    Wang, Jia-Nan; Jin, Jun-Ling; Geng, Yun; Sun, Shi-Ling; Xu, Hong-Liang; Lu, Ying-Hua; Su, Zhong-Min

    2013-03-15

    Recently, the extreme learning machine neural network (ELMNN) was proposed as a valid computing method and used to successfully predict nonlinear optical properties (Wang et al., J. Comput. Chem. 2012, 33, 231). In this work, first, we follow this line of work to predict electronic excitation energies using the ELMNN method. Significantly, the root mean square deviation between the predicted and experimental electronic excitation energies of 90 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY) derivatives has been reduced to 0.13 eV. Second, four groups of molecular descriptors are considered when building the computing models. The results show that the quantum chemical descriptors have the closest intrinsic relation with the electronic excitation energy values. Finally, a user-friendly web server (EEEBPre: Prediction of electronic excitation energies for BODIPY dyes), which is freely accessible to the public at the web site http://202.198.129.218, has been built for prediction. This web server returns predicted electronic excitation energy values of BODIPY dyes that are highly consistent with the experimental values. We hope that this web server will be helpful to theoretical and experimental chemists in related research. Copyright © 2012 Wiley Periodicals, Inc.

  12. Modeling Aboveground Biomass in Hulunber Grassland Ecosystem by Using Unmanned Aerial Vehicle Discrete Lidar

    PubMed Central

    Wang, Dongliang; Xin, Xiaoping; Shao, Quanqin; Brolly, Matthew; Zhu, Zhiliang; Chen, Jin

    2017-01-01

    Accurate canopy structure datasets, including canopy height and fractional cover, are required to monitor aboveground biomass as well as to provide validation data for satellite remote sensing products. In this study, the ability of unmanned aerial vehicle (UAV) discrete light detection and ranging (lidar) was investigated for modeling both the canopy height and fractional cover in the Hulunber grassland ecosystem. The extracted mean canopy height, maximum canopy height, and fractional cover were used to estimate the aboveground biomass. The influences of flight height on lidar estimates were also analyzed. The main findings are: (1) the lidar-derived mean canopy height is the most reasonable predictor of aboveground biomass (R2 = 0.340, root-mean-square error (RMSE) = 81.89 g·m−2, and relative error of 14.1%); adding fractional cover to the multiple regression yields little improvement in the R2 and RMSE values, since the correlation between mean canopy height and fractional cover is high; (2) flight height has a pronounced effect on the derived fractional cover and the details of the lidar data, but the effect on the derived canopy height is insignificant when the flight height is within the range (<100 m). These findings are helpful for building stable regressions to estimate grassland biomass using lidar returns. PMID:28106819

  13. Modeling Aboveground Biomass in Hulunber Grassland Ecosystem by Using Unmanned Aerial Vehicle Discrete Lidar.

    PubMed

    Wang, Dongliang; Xin, Xiaoping; Shao, Quanqin; Brolly, Matthew; Zhu, Zhiliang; Chen, Jin

    2017-01-19

    Accurate canopy structure datasets, including canopy height and fractional cover, are required to monitor aboveground biomass as well as to provide validation data for satellite remote sensing products. In this study, the ability of unmanned aerial vehicle (UAV) discrete light detection and ranging (lidar) was investigated for modeling both the canopy height and fractional cover in the Hulunber grassland ecosystem. The extracted mean canopy height, maximum canopy height, and fractional cover were used to estimate the aboveground biomass. The influences of flight height on lidar estimates were also analyzed. The main findings are: (1) the lidar-derived mean canopy height is the most reasonable predictor of aboveground biomass (R2 = 0.340, root-mean-square error (RMSE) = 81.89 g·m-2, and relative error of 14.1%); adding fractional cover to the multiple regression yields little improvement in the R2 and RMSE values, since the correlation between mean canopy height and fractional cover is high; (2) flight height has a pronounced effect on the derived fractional cover and the details of the lidar data, but the effect on the derived canopy height is insignificant when the flight height is within the range (<100 m). These findings are helpful for building stable regressions to estimate grassland biomass using lidar returns.

  14. A New, Large-scale Map of Interstellar Reddening Derived from H I Emission

    NASA Astrophysics Data System (ADS)

    Lenz, Daniel; Hensley, Brandon S.; Doré, Olivier

    2017-09-01

    We present a new map of interstellar reddening, covering the 39% of the sky with low H I column densities (N_HI < 4×10^20 cm^-2, or E(B-V) ≈ 45 mmag) at 16.1 arcmin resolution, based on all-sky observations of Galactic H I emission by the HI4PI Survey. In this low-column-density regime, we derive a characteristic value of N_HI/E(B-V) = 8.8×10^21 cm^-2 mag^-1 for gas with |v_LSR| < 90 km s^-1 and find no significant reddening associated with gas at higher velocities. We compare our H I-based reddening map with the Schlegel et al. (SFD) reddening map and find them consistent to within a scatter of ≈5 mmag. Further, the differences between our map and the SFD map are in excellent agreement with the low-resolution (4.5 deg) corrections to the SFD map derived by Peek and Graves based on observed reddening toward passive galaxies. We therefore argue that our H I-based map provides the most accurate interstellar reddening estimates in the low-column-density regime to date. Our reddening map is made publicly available at doi.org/10.7910/DVN/AFJNWJ.
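The characteristic gas-to-reddening ratio quoted above turns an H I column density directly into a reddening estimate. A minimal sketch using the paper's own N_HI/E(B-V) value:

```python
def ebv_from_nhi(n_hi_cm2: float, ratio: float = 8.8e21) -> float:
    """E(B-V) in mag from an H I column density (cm^-2), using the
    characteristic value N_HI/E(B-V) = 8.8e21 cm^-2 mag^-1."""
    return n_hi_cm2 / ratio

# At the map's column-density limit, N_HI = 4e20 cm^-2:
print(f"{ebv_from_nhi(4e20) * 1000:.0f} mmag")  # -> 45 mmag
```

This reproduces the E(B-V) ≈ 45 mmag threshold that defines the map's low-column-density regime.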

  15. Estimating aboveground biomass in interior Alaska with Landsat data and field measurements

    USGS Publications Warehouse

    Ji, Lei; Wylie, Bruce K.; Nossov, Dana R.; Peterson, Birgit E.; Waldrop, Mark P.; McFarland, Jack W.; Rover, Jennifer R.; Hollingsworth, Teresa N.

    2012-01-01

    Terrestrial plant biomass is a key biophysical parameter required for understanding ecological systems in Alaska. An accurate estimation of biomass at a regional scale provides an important data input for ecological modeling in this region. In this study, we created an aboveground biomass (AGB) map at 30-m resolution for the Yukon Flats ecoregion of interior Alaska using Landsat data and field measurements. Tree, shrub, and herbaceous AGB data in both live and dead forms were collected in the summers and autumns of 2009 and 2010. Using the Landsat-derived spectral variables and the field AGB data, we generated a regression model and applied this model to map AGB for the ecoregion. A 3-fold cross-validation indicated that the AGB estimates had a mean absolute error of 21.8 Mg/ha and a mean bias error of 5.2 Mg/ha. Additionally, we validated the mapping results using an airborne lidar dataset acquired for a portion of the ecoregion. We found a significant relationship between the lidar-derived canopy height and the Landsat-derived AGB (R² = 0.40). The AGB map showed that 90% of the ecoregion had AGB values ranging from 10 Mg/ha to 134 Mg/ha. Vegetation types and fires were the primary factors controlling the spatial AGB patterns in this ecoregion.
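    The 3-fold cross-validated mean absolute error and mean bias error described above can be sketched generically for a linear model; the helper below is illustrative, assuming simple random folds rather than the study's exact protocol:

```python
import numpy as np

def kfold_mae_mbe(x, y, k=3, seed=0):
    """k-fold cross-validation of a one-predictor linear model, reporting
    the mean absolute error (MAE) and the mean bias error (MBE, defined
    here as prediction minus observation). x and y are hypothetical
    spectral-index and field-AGB arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        a, b = np.polyfit(x[train], y[train], 1)   # fit on k-1 folds
        errs.append(a * x[test] + b - y[test])     # held-out errors
    errs = np.concatenate(errs)
    return np.mean(np.abs(errs)), np.mean(errs)    # MAE, MBE
```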

  16. Examination of Soil Moisture Retrieval Using SIR-C Radar Data and a Distributed Hydrological Model

    NASA Technical Reports Server (NTRS)

    Hsu, A. Y.; ONeill, P. E.; Wood, E. F.; Zion, M.

    1997-01-01

    A major objective of soil moisture-related hydrological research during NASA's SIR-C/X-SAR mission was to determine and compare soil moisture patterns within humid watersheds using SAR data, ground-based measurements, and hydrologic modeling. Currently available soil moisture inversion methods using active microwave data are only accurate when applied to bare and slightly vegetated surfaces. Moreover, as the surface dries down, the number of pixels that can provide estimated soil moisture by these radar inversion methods decreases, leading to less accuracy and confidence in the retrieved soil moisture fields at the watershed scale. The impact of these errors in microwave-derived soil moisture on hydrological modeling of vegetated watersheds has yet to be addressed. In this study a coupled water and energy balance model operating within a topographic framework is used to predict surface soil moisture for both bare and vegetated areas. In the first model run, the hydrological model is initialized using a standard baseflow approach, while in the second model run, soil moisture values derived from SIR-C radar data are used for initialization. The results, which compare favorably with ground measurements, demonstrate the utility of combining radar-derived surface soil moisture information with basin-scale hydrological modeling.

  17. Estimating thermal performance curves from repeated field observations

    USGS Publications Warehouse

    Childress, Evan; Letcher, Benjamin H.

    2017-01-01

    Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature, reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.

  18. Correlates of Physician Retention at Tripler Army Medical Center

    DTIC Science & Technology

    1991-12-01

    benefits each program produces, the explicit and implicit values of those benefits, and the program's direct and indirect costs. For the most part, there...data is also remarkably accurate (within sampling error). A survey of a particular group can give a very accurate picture of the group's values, beliefs...opportunity to teach and administer training programs. - Military medicine is exciting, challenging, and varied. Because of the value of written comments

  19. Can Value-Added Measures of Teacher Performance Be Trusted? Working Paper #18

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Reckase, Mark D.; Woolridge, Jeffrey M.

    2012-01-01

    We investigate whether commonly used value-added estimation strategies can produce accurate estimates of teacher effects. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. No one method accurately captures true teacher effects in all scenarios,…

  20. Discontinuity of the exchange-correlation potential and the functional derivative of the noninteracting kinetic energy as the number of electrons crosses integer boundaries in Li, Be, and B.

    PubMed

    Morrison, Robert C

    2015-01-07

    Accurate densities were determined from configuration interaction wave functions for atoms and ions of Li, Be, and B with up to four electrons. Exchange-correlation potentials, Vxc(r), and functional derivatives of the noninteracting kinetic energy, δK[ρ]/δρ(r), obtained from these densities were used to examine their discontinuities as the number of electrons N increases across integer boundaries for N = 1, N = 2, and N = 3. These numerical results are consistent with conclusions that the discontinuities are characterized by a jump in the chemical potential while the shape of Vxc(r) varies continuously as an integer boundary is crossed. The discontinuity of Vxc(r) is positive, depends on the ionization potential, electron affinity, and orbital energy differences, and the discontinuity in δK[ρ]/δρ(r) depends on the difference between the energies of the highest occupied and lowest unoccupied orbitals. The noninteracting kinetic energy and the exchange-correlation energy have been computed for integer and noninteger values of N between 1 and 4.

  1. Using NDVI to measure precipitation in semi-arid landscapes

    USGS Publications Warehouse

    Birtwhistle, Amy N.; Laituri, Melinda; Bledsoe, Brian; Friedman, Jonathan M.

    2016-01-01

    Measuring precipitation in semi-arid landscapes is important for understanding the processes related to rainfall and run-off; however, measuring precipitation accurately can often be challenging especially within remote regions where precipitation instruments are scarce. Typically, rain-gauges are sparsely distributed and research comparing rain-gauge and RADAR precipitation estimates reveal that RADAR data are often misleading, especially for monsoon season convective storms. This study investigates an alternative way to map the spatial and temporal variation of precipitation inputs along ephemeral stream channels using Normalized Difference Vegetation Index (NDVI) derived from Landsat Thematic Mapper imagery. NDVI values from 26 years of pre- and post-monsoon season Landsat imagery were derived across Yuma Proving Ground (YPG), a region covering 3,367 km2 of semiarid landscapes in southwestern Arizona, USA. The change in NDVI from a pre-to post-monsoon season image along ephemeral stream channels explained 73% of the variance in annual monsoonal precipitation totals from a nearby rain-gauge. In addition, large seasonal changes in NDVI along channels were useful in determining when and where flow events have occurred.
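    The NDVI and the pre-to-post-monsoon change it relies on are standard band arithmetic; a minimal sketch (function names are ours, band values illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance: NDVI = (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def monsoon_ndvi_change(nir_pre, red_pre, nir_post, red_post):
    """Post-monsoon minus pre-monsoon NDVI; positive values flag
    vegetation green-up along ephemeral channels."""
    return ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
```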

  2. A Polar Initial Alignment Algorithm for Unmanned Underwater Vehicles

    PubMed Central

    Yan, Zheping; Wang, Lu; Wang, Tongda; Zhang, Honghan; Zhang, Xun; Liu, Xiangling

    2017-01-01

    Due to its high autonomy, the strapdown inertial navigation system (SINS) is widely used in unmanned underwater vehicle (UUV) navigation. Initial alignment is crucial because its results are used as the initial SINS values, which affect the subsequent SINS solution. Due to the rapid convergence of Earth meridians, conventional initial alignment algorithms suffer calculation overflow near the poles, making them invalid for polar UUV navigation. To overcome these problems, a polar initial alignment algorithm for UUVs is proposed in this paper, consisting of coarse and fine alignment algorithms. Based on the principle of the conical slow drift of gravity, the coarse alignment algorithm is derived under the grid frame. By choosing velocity and attitude as the measurements, the fine alignment with a Kalman filter (KF) is derived under the grid frame. Simulations and experiments compare the polar, conventional, and transversal initial alignment algorithms for polar UUV navigation. Results demonstrate that the proposed polar initial alignment algorithm can complete the initial alignment of a UUV in the polar region rapidly and accurately. PMID:29168735
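    The fine-alignment step uses a standard linear Kalman filter; a generic predict/update cycle is sketched below with placeholder matrices, not the paper's grid-frame SINS model:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x: state estimate, P: state covariance, z: measurement,
    F: state transition, H: measurement matrix,
    Q: process noise covariance, R: measurement noise covariance.
    All matrices here are generic placeholders."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```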

  3. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
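    For the linear case, the parameter covariance and the standard error of the fitted function follow from sigma^2 (A^T A)^-1; a compact numpy sketch of that standard result (the paper's closed-form expressions are not reproduced here):

```python
import numpy as np

def fit_with_errors(x, y, deg=1):
    """Polynomial least-squares fit returning the coefficients, their
    standard errors, and the standard error of the fitted function at
    each x, from the covariance matrix sigma^2 (A^T A)^-1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.vander(x, deg + 1)                       # design matrix
    coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    dof = len(x) - (deg + 1)                        # degrees of freedom
    sigma2 = np.sum((y - A @ coef) ** 2) / dof      # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)           # parameter covariance
    coef_se = np.sqrt(np.diag(cov))
    fit_se = np.sqrt(np.sum((A @ cov) * A, axis=1)) # SE of fit at each x
    return coef, coef_se, fit_se
```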

  4. Quantification of 60Fe atoms by MC-ICP-MS for the redetermination of the half-life.

    PubMed

    Kivel, Niko; Schumann, Dorothea; Günther-Leopold, Ines

    2013-03-01

    In many scientific fields, the half-life of radionuclides plays an important role. The accurate knowledge of this parameter has direct impact on, e.g., age determination of archeological artifacts and of the elemental synthesis in the universe. In order to derive the half-life of a long-lived radionuclide, the activity and the absolute number of atoms have to be analyzed. Whereas conventional radiation measurement methods are typically applied for activity determinations, the latter can be determined with high accuracy by mass spectrometric techniques. Over the past years, the half-lives of several radionuclides have been specified by means of multiple-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) complementary to the earlier reported values mainly derived by accelerator mass spectrometry. The present paper discusses all critical aspects (amount of material, radiochemical sample preparation, interference correction, isotope dilution mass spectrometry, calculation of measurement uncertainty) for a precise analysis of the number of atoms by MC-ICP-MS exemplified for the recently published half-life determination of 60Fe (Rugel et al, Phys Rev Lett 103:072502, 2009).
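    The relation underlying the redetermination, T1/2 = ln 2 · N / A, combines the mass-spectrometric atom count with the measured activity; a one-line sketch (the helper name and unit handling are ours):

```python
import math

def half_life_years(n_atoms, activity_bq):
    """Half-life from the absolute number of atoms (mass spectrometry)
    and the activity (radiometric counting): T1/2 = ln(2) * N / A,
    converted from seconds to years."""
    seconds_per_year = 365.25 * 24 * 3600
    return math.log(2) * n_atoms / activity_bq / seconds_per_year
```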

  5. OPERA, an automatic PSF reconstruction software for Shack-Hartmann AO systems: application to Altair

    NASA Astrophysics Data System (ADS)

    Jolissaint, Laurent; Veran, Jean-Pierre; Marino, Jose

    2004-10-01

    When doing high angular resolution imaging with adaptive optics (AO), it is of crucial importance to have an accurate knowledge of the point spread function (PSF) associated with each observation. Applications are numerous: image contrast enhancement by deconvolution, improved photometry and astrometry, as well as real-time AO performance evaluation. In this paper, we present our work on automatic PSF reconstruction based on control loop data, acquired simultaneously with the observation. This problem has already been solved for curvature AO systems. To adapt this method to another type of WFS, a specific analytical noise propagation model must be established. For the Shack-Hartmann WFS, we are able to derive a very accurate estimate of the noise on each slope measurement, based on the covariances of the WFS CCD pixel values in the corresponding sub-aperture. These covariances can be either derived off-line from telemetry data, or calculated by the AO computer during the acquisition. We present improved methods to determine (1) r0 from the DM drive commands, including an estimation of the outer scale L0, and (2) the contribution of the high spatial frequency component of the turbulent phase, which is not corrected by the AO system and scales with r0. This new method has been implemented in an IDL-based software package called OPERA (Performance of Adaptive Optics). We have tested OPERA on Altair, the recently commissioned Gemini-North AO system, and present our preliminary results. We also summarize the AO data required to run OPERA on any other AO system.

  6. SU-F-J-10: Sliding Mode Control of a SMA Actuated Active Flexible Needle for Medical Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podder, T

    Purpose: In medical interventional procedures such as brachytherapy, ablative therapies, and biopsy, precise steering and accurate placement of needles are very important for anatomical obstacle avoidance and accurate targeting. This study presents the efficacy of a sliding mode controller for a shape memory alloy (SMA)-actuated flexible needle for medical procedures. Methods: The second-order system dynamics of the SMA-actuated active flexible needle were used to derive the sliding mode control equations. Both proportional-integral-derivative (PID) and adaptive PID sliding mode control (APIDSMC) algorithms were developed and implemented. The flexible needle was attached at the end of a 6-DOF robotic system. Through the LabVIEW programming environment, the control commands were generated using the PID and APIDSMC algorithms. Experiments with a tissue-mimicking phantom were performed to evaluate the performance of the controllers. The actual needle tip position was obtained using an electromagnetic (EM) tracking sensor (Aurora, NDI, Waterloo, Canada) at a sampling period of 1 ms. During the experiments, external disturbances were created by applying force and thermal shock to investigate the robustness of the controllers. Results: The root-mean-square error (RMSE) values for the APIDSMC and PID controllers were 0.75 mm and 0.92 mm, respectively, for a sinusoidal reference input. In the presence of external disturbances, the APIDSMC controller showed a much smoother and less overshooting response compared with that of the PID controller. Conclusion: Performance of the APIDSMC was superior to that of the PID controller. The APIDSMC proved more effective in compensating for SMA uncertainties and external disturbances within clinically acceptable thresholds.
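    For reference, the discrete PID law that serves as the baseline controller here can be sketched as follows; gains and sample time are illustrative, and the adaptive sliding-mode extension is not reproduced:

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt.
    Gains and sample time dt are illustrative placeholders, not the
    paper's tuning."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        """Compute the control output for one sample period."""
        err = setpoint - measured
        self.integral += err * self.dt                 # accumulate I term
        deriv = (err - self.prev_err) / self.dt        # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```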

  7. Modelling surface-water depression storage in a Prairie Pothole Region

    USGS Publications Warehouse

    Hay, Lauren E.; Norton, Parker A.; Viger, Roland; Markstrom, Steven; Regan, R. Steven; Vanderhoof, Melanie

    2018-01-01

    In this study, the Precipitation-Runoff Modelling System (PRMS) was used to simulate changes in surface-water depression storage in the 1,126-km2 Upper Pipestem Creek basin located within the Prairie Pothole Region of North Dakota, USA. The Prairie Pothole Region is characterized by millions of small water bodies (or surface-water depressions) that provide numerous ecosystem services and are considered an important contribution to the hydrologic cycle. The Upper Pipestem PRMS model was extracted from the U.S. Geological Survey's (USGS) National Hydrologic Model (NHM), developed to support consistent hydrologic modelling across the conterminous United States. The Geospatial Fabric database, created for the USGS NHM, contains hydrologic model parameter values derived from datasets that characterize the physical features of the entire conterminous United States for 109,951 hydrologic response units. Each hydrologic response unit in the Geospatial Fabric was parameterized using aggregated surface-water depression area derived from the National Hydrography Dataset Plus, an integrated suite of application-ready geospatial datasets. This paper presents a calibration strategy for the Upper Pipestem PRMS model that uses normalized lake elevation measurements to calibrate the parameters influencing simulated fractional surface-water depression storage. Results indicate that inclusion of measurements that give an indication of the change in surface-water depression storage in the calibration procedure resulted in accurate changes in surface-water depression storage in the water balance. Regionalized parameterization of the USGS NHM will require a proxy for change in surface-storage to accurately parameterize surface-water depression storage within the USGS NHM.

  8. Development of a quantum mechanics-based free-energy perturbation method: use in the calculation of relative solvation free energies.

    PubMed

    Reddy, M Rami; Singh, U C; Erion, Mark D

    2004-05-26

    Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
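    The free-energy difference in FEP rests on the Zwanzig identity, dG = -kT ln⟨exp(-dU/kT)⟩, averaged over samples of the reference state; a minimal sketch of that estimator (the QM/MM energetics themselves are of course not modeled here):

```python
import math

def fep_delta_g(delta_u, kT=0.5924):
    """Zwanzig free-energy perturbation estimator,
    dG = -kT * ln( < exp(-dU/kT) > ),
    where delta_u holds sampled energy differences U_B - U_A evaluated
    on configurations of state A. kT defaults to ~298 K in kcal/mol."""
    avg = sum(math.exp(-du / kT) for du in delta_u) / len(delta_u)
    return -kT * math.log(avg)
```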

  9. Heterochromatic Flicker Photometry for Objective Lens Density Quantification.

    PubMed

    Najjar, Raymond P; Teikari, Petteri; Cornut, Pierre-Loïc; Knoblauch, Kenneth; Cooper, Howard M; Gronfier, Claude

    2016-03-01

    Although several methods have been proposed to evaluate lens transmittance, to date there is no consensual in vivo approach in clinical practice. The aim of this study was to compare ocular lens density and transmittance measurements obtained by an improved psychophysical scotopic heterochromatic flicker photometry (sHFP) technique to the results obtained by three other measures: a psychophysical threshold technique, a Scheimpflug imaging technique, and a clinical assessment using a validated subjective scale. Forty-three subjects (18 young, 9 middle aged, and 16 older) were included in the study. Individual lens densities were measured and transmittance curves were derived from sHFP indexes. Ocular lens densities were compared across methods by using linear regression analysis. The four approaches showed a quadratic increase in lens opacification with age. The sHFP technique revealed that transmittance decreased with age over the entire visual spectrum. This decrease was particularly pronounced between young and older participants in the short-wavelength region of the light spectrum (53.03% decrease in the 400-500 nm range). Lens density derived from sHFP highly correlated with the values obtained with the other approaches. Compared to other objective measures, sHFP also showed the lowest variability and the best fit with a quadratic trend (r² = 0.71) of lens density increase as a function of age. The sHFP technique offers a practical, reliable, and accurate method to measure lens density in vivo and predict lens transmittance over the visible spectrum. An accurate quantification of lens transmittance should be obtained in clinical practice, but also in research in visual and nonvisual photoreception.
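    Lens density and transmittance are linked by the standard optical-density relation T = 10^-D; a one-line sketch (the function name is ours):

```python
def transmittance_from_density(d):
    """Convert an optical (lens) density D into fractional transmittance
    via the standard relation T = 10**(-D); e.g., D = 1 passes 10% of
    the incident light."""
    return 10.0 ** (-d)
```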

  10. Gas-phase behaviour of Ru(II) cyclopentadienyl-derived complexes with N-coordinated ligands by electrospray ionization mass spectrometry: fragmentation pathways and energetics.

    PubMed

    Madeira, Paulo J Amorim; Morais, Tânia S; Silva, Tiago J L; Florindo, Pedro; Garcia, M Helena

    2012-08-15

    The gas-phase behaviour of six Ru(II) cyclopentadienyl-derived complexes with N-coordinated ligands, compounds with antitumor activities against several cancer lines, was studied. This was performed with the intent of establishing fragmentation pathways and to determine the Ru-L(N) and Ru-L(P) ligand bond dissociation energies. Such knowledge can be an important tool for the postulation of the mechanisms of action of these anticancer drugs. Two types of instruments equipped with electrospray ionisation were used (ion trap and a Fourier transform ion cyclotron resonance (FTICR) mass spectrometer). The dissociation energies were determined using energy-variable collision-induced dissociation measurements in the ion trap. The FTICR instrument was used to perform MS(n) experiments on one of the compounds and to obtain accurate mass measurements. Theoretical calculations were performed at the density functional theory (DFT) level using two different functionals (B3LYP and M06L) to estimate the dissociation energies of the complexes under study. The influence of the L(N) on the bond dissociation energy (D) of RuCp compounds with different nitrogen ligands was studied. The lability order of L(N) was: imidazole<1-butylimidazole<5-phenyl-1H-tetrazole<1-benzylimidazole. Both the functionals used gave the following ligand lability order: imidazole<1-benzylimidazole<5-phenyl-1H-tetrazole<1-butylimidazole. It is clear that there is an inversion between 1-benzylimidazole and 1-butylimidazole for the experimental and theoretical lability orders. The M06L functional afforded values of D closer to the experimental values. The type of phosphane (L(P) ) influenced the dissociation energies, with values of D being higher for Ru-L(N) with 1-butylimidazole when the phosphane was 1,2-bis(diphenylphosphino)ethane. The Ru-L(P) bond dissociation energy for triphenylphosphane was independent of the type of complex. 
The D values of Ru-L(N) and Ru-L(P) were determined for all six compounds and compared with the values calculated by the DFT method. For the imidazole-derived ligands the energy trend was rationalized in terms of the increasing extension of the σ-donation/π-backdonation effect. The bond dissociation energy of Ru-PPh(3) was independent of the fragmentations. Copyright © 2012 John Wiley & Sons, Ltd.

  11. A new formula for estimation of standard liver volume using computed tomography-measured body thickness.

    PubMed

    Ma, Ka Wing; Chok, Kenneth S H; Chan, Albert C Y; Tam, Henry S C; Dai, Wing Chiu; Cheung, Tan To; Fung, James Y Y; Lo, Chung Mau

    2017-09-01

    The objective of this article is to derive a more accurate and easy-to-use formula for the estimated standard liver volume (ESLV) using novel computed tomography (CT) measurement parameters. New formulas for ESLV have been emerging with the aim of improving the accuracy of estimation. However, many of these formulas contain body surface area measurements and logarithms, which make the calculation more complicated, and substantial errors in ESLV from these older formulas have been shown. An improved formula for ESLV is needed. This is a retrospective cohort of consecutive living donor liver transplantations from 2005 to 2016. Donors were randomly assigned to either the formula derivation or the validation group. Total liver volume (TLV) measured by CT was used as the reference for a linear regression analysis against various patient factors. The derived formula was compared with existing formulas. There were 722 patients (197 in the derivation group, 164 in the validation group, and 361 in the recipient group) involved in the study. The donor's body weight (odds ratio [OR], 10.42; 95% confidence interval [CI], 7.25-13.60; P < 0.01) and body thickness (OR, 2.00; 95% CI, 0.36-3.65; P = 0.02) were found to be independent factors for the TLV calculation. A formula for TLV (cm³) was derived: 2 × thickness (mm) + 10 × weight (kg) + 190, with R² = 0.48, the highest among the 4 other most often cited formulas compared. This formula remained superior to other published formulas in the validation set analysis (R², 5.37; interclass correlation coefficient, 0.74). Graft weight/ESLV values calculated by the new formula showed the highest correlation with delayed graft function (C-statistic, 0.79; 95% CI, 0.69-0.90; P < 0.01). 
The new formula (2 × thickness + 10 × weight + 190) is the first proposed to use CT-measured body thickness, and it is novel, easy to use, and the most accurate for ESLV. Liver Transplantation 23 1113-1122 2017 AASLD. © 2017 by the American Association for the Study of Liver Diseases.
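    The derived formula is explicit enough to transcribe directly; a sketch (the function name is ours, inputs in mm and kg as stated in the abstract):

```python
def estimated_standard_liver_volume(thickness_mm, weight_kg):
    """The article's derived formula for total liver volume:
    TLV (cm^3) = 2 * thickness (mm) + 10 * weight (kg) + 190."""
    return 2.0 * thickness_mm + 10.0 * weight_kg + 190.0
```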

  12. Reconciling Estimates of the Value to Firms of Reduced Regulatory Delay in the Marketing of Their New Drugs.

    PubMed

    Wilmoth, Daniel R

    2015-12-01

    The prescription drug user fee program provides additional resources to the U.S. Food and Drug Administration at the expense of regulated firms. Those resources accelerate the review of new drugs. Faster approvals allow firms to realize profits sooner, and the program is supported politically by industry. However, published estimates of the value to firms of reduced regulatory delay vary dramatically. It is shown here that this variation is driven largely by differences in methods that correspond to differences in implicit assumptions about the effects of reduced delay. Theoretical modeling is used to derive an equation describing the relationship between estimates generated using different methods. The method likely to yield the most accurate results is identified. A reconciliation of published estimates yields a value to a firm for a one-year reduction in regulatory delay at the time of approval of about $60 million for a typical drug. Published 2015. This article is a U.S. Government work and is in the public domain in the U.S.A.

  13. Surface Tension of Liquid Alkali, Alkaline, and Main Group Metals: Theoretical Treatment and Relationship Investigations

    NASA Astrophysics Data System (ADS)

    Aqra, Fathi; Ayyad, Ahmed

    2011-09-01

    An improved theoretical method for calculating the surface tension of liquid metals is proposed. A recently derived equation based on statistical thermodynamics, which allows an accurate estimate of surface tension for a large number of elements, is used to calculate reliable values for the surface tension of pure liquid alkali, alkaline earth, and main group metals at the melting point. To further validate the model, the surface tension of liquid lithium was calculated in the temperature range 454 K to 1300 K (181 °C to 1027 °C), where the calculated surface tension values follow the straight line γ = 441 - 0.15(T - Tm) (mJ·m⁻²). The calculated surface excess entropy of liquid Li (-dγ/dT) was found to be 0.15 mJ·m⁻²·K⁻¹, which agrees well with the reported experimental value (0.147 mJ·m⁻²·K⁻¹). Moreover, the relations of the calculated surface tension of the alkali metals to atomic radius, heat of fusion, and specific heat capacity are described. The results are in excellent agreement with the existing experimental data.
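    The lithium fit quoted above is a straight line in temperature; transcribed as a sketch (the function name is ours):

```python
def surface_tension_li(t_kelvin, t_melt=454.0):
    """Straight-line fit reported for liquid lithium:
    gamma = 441 - 0.15 * (T - Tm) in mJ/m^2,
    stated for the range 454 K to 1300 K."""
    return 441.0 - 0.15 * (t_kelvin - t_melt)
```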

  14. Calculation and validation of heat transfer coefficient for warm forming operations

    NASA Astrophysics Data System (ADS)

    Omer, Kaab; Butcher, Clifford; Worswick, Michael

    2017-10-01

    In an effort to reduce the weight of their products, the automotive industry is exploring various hot forming and warm forming technologies. One critical aspect of these technologies is understanding and quantifying the heat transfer between the blank and the tooling. The purpose of the current study is twofold. First, an experimental procedure to obtain the heat transfer coefficient (HTC) as a function of pressure for metal forming simulations is devised. Second, the experimental approach is used in conjunction with finite element models to obtain HTC values as a function of die pressure. The materials characterized were AA5182-O and AA7075-T6. Both the heating operation and a warm forming deep draw were modelled using the LS-DYNA commercial finite element code, and temperature-time measurements were obtained from both applications. The results of the finite element model showed that the experimentally derived HTC values were able to predict the temperature-time history to within 2% of the measured response. It is intended that the HTC values presented herein be used in warm forming models to accurately capture the heat transfer characteristics of the operation.
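    One common way to back an HTC out of temperature-time data, consistent with a lumped-capacitance view of blank-die contact, is to fit an exponential cooling curve; the model below is a generic sketch, not the study's finite element procedure, and all parameter names and values are illustrative:

```python
import math

def blank_temperature(t, h, area, mass, cp, t_die, t0):
    """Lumped-capacitance cooling of a blank in contact with a die:
    T(t) = T_die + (T0 - T_die) * exp(-h*A*t / (m*cp)).
    Fitting this curve to measured temperature-time data is one simple
    way to estimate an HTC h. All arguments are illustrative:
    t [s], h [W/m^2/K], area [m^2], mass [kg], cp [J/kg/K], temps [K]."""
    tau = mass * cp / (h * area)            # thermal time constant
    return t_die + (t0 - t_die) * math.exp(-t / tau)
```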

  15. A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.

    PubMed

    Bartzsch, Stefan; Oelfke, Uwe

    2013-11-01

    The advent of widespread kV cone-beam computed tomography in image-guided radiation therapy and special therapeutic applications of keV photons, e.g., in microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations far more challenging than the ones established for corresponding MeV beams. That is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy that is almost comparable to that of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The comparison of the analytically derived dose kernels in water showed an excellent agreement with the Monte Carlo method. Calculated values deviate less than 5% from Monte Carlo derived dose values, for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution-based algorithms, dose calculation times can be reduced to a few minutes.

  16. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental error can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions, and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived from small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived from calculations with a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors, except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
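
    As a rough sketch of how a semiempirical factor combines the two data sources, the helper below divides a Monte Carlo dose-to-water ratio by a measured detector reading ratio (the usual definition of the correction factor); the numbers are invented for illustration and are not the paper's results.

```python
def semiempirical_k(dw_clin, dw_msr, m_clin, m_msr):
    """k_{Qclin,Qmsr}^{fclin,fmsr}: the Monte Carlo dose-to-water ratio
    (numerical field output factor) divided by the measured detector
    reading ratio (empirical output factor)."""
    omega = dw_clin / dw_msr    # numerical field output factor
    of_meas = m_clin / m_msr    # empirical (measured) output factor
    return omega / of_meas

# Illustrative numbers (not from the paper): a diode that over-responds
# by about 4% in a small field needs k < 1 to correct its reading.
k = semiempirical_k(dw_clin=0.48, dw_msr=1.00, m_clin=0.50, m_msr=1.00)
# k = 0.96
```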

  17. New positron emission tomography derived parameters as predictive factors for recurrence in resected stage I non-small cell lung cancer.

    PubMed

    Melloni, G; Gajate, A M S; Sestini, S; Gallivanone, F; Bandiera, A; Landoni, C; Muriana, P; Gianolli, L; Zannini, P

    2013-11-01

    The recurrence rate for stage I non-small cell lung cancer is high, with 20-40% of patients relapsing after surgery. The aim of this study was to evaluate new F-18 fluorodeoxyglucose (FDG) positron emission tomography (PET) derived parameters, such as the standardized uptake value index (SUVindex), metabolic tumor volume (MTV) and total lesion glycolysis (TLG), as predictive factors for recurrence in resected stage I non-small cell lung cancer. We retrospectively reviewed 99 resected stage I non-small cell lung cancer patients who were grouped by SUVindex, TLG and MTV above or below their median values. Disease free survival was evaluated as the primary end point. The 5-year overall survival and the 5-year disease free survival rates were 62% and 73%, respectively. The median SUVindex, MTV and TLG were 2.73, 2.95 and 9.61, respectively. Patients with low SUVindex, MTV and TLG were more likely to have smaller tumors (p ≤ 0.001). Univariate analysis demonstrated that SUVindex (p = 0.027), MTV (p = 0.014) and TLG (p = 0.006) were significantly related to recurrence, showing better predictive performance than SUVmax (p = 0.031). The 5-year disease free survival rates in patients with low and high SUVindex, MTV and TLG were 84% and 59%, 86% and 62%, and 88% and 60%, respectively. The multivariate analysis showed that only TLG was an independent prognostic factor (p = 0.014) with a hazard ratio of 4.782. Of the three PET-derived parameters evaluated, TLG seems to be the most accurate in stratifying surgically treated stage I non-small cell lung cancer patients according to their risk of recurrence. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. In Silico and in Vitro Modeling of Hepatocyte Drug Transport Processes: Importance of ABCC2 Expression Levels in the Disposition of Carboxydichlorofluorescein

    PubMed Central

    Howe, Katharine; Gibson, G. Gordon; Coleman, Tanya; Plant, Nick

    2009-01-01

    The impact of transport proteins in the disposition of chemicals is becoming increasingly evident. Alteration in disposition can cause altered pharmacokinetic and pharmacodynamic parameters, potentially leading to reduced efficacy or overt toxicity. We have developed a quantitative in silico model, based upon literature and experimentally derived data, to model the disposition of carboxydichlorofluorescein (CDF), a substrate for the SLCO1A/B and ABCC subfamilies of transporters. Kinetic parameters generated by the in silico model closely match both literature and experimentally derived kinetic values, allowing this model to be used for the examination of transporter action in primary rat hepatocytes. In particular, we show that the in silico model is suited to the rapid, accurate determination of Ki values, using 3-[[3-[2-(7-chloroquinolin-2-yl)vinyl]phenyl]-(2-dimethylcarbamoylethylsulfanyl)methylsulfanyl] propionic acid (MK571) as a prototypical pan-ABCC inhibitor. In vitro-derived data are often used to predict in vivo response, and we have examined how differences in protein expression levels between these systems may affect chemical disposition. We show that ABCC2 and ABCC3 are overexpressed in sandwich culture hepatocytes by 3.5- and 2.3-fold, respectively, at the protein level. Correction for this results in markedly different disposition of CDF, with the area under the concentration versus time curve and Cmax of intracellular CDF increasing by 365 and 160%, respectively. Finally, using kinetic simulations we show that ABCC2 represents a fragile node within this pathway, with alterations in ABCC2 having the most prominent effects on both the Km and Vmax through the pathway. This is the first demonstration of the utility of modeling approaches to estimate the impact of drug transport processes on chemical disposition. PMID:19022944

  19. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    PubMed

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of better predicting DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. The software prediction was optimized by linear regression analysis, and its optimal cut-off to obtain a DP was assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Mean software-derived values differed significantly from actual PLT yields, 4.72 × 10^11 vs. 6.12 × 10^11, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10^11 to obtain a DP, with a sensitivity of 82.2%, a specificity of 93.3%, and an area under the curve (AUC) of 0.909. The Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
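
    The reported regression correction and ROC cut-off are simple enough to apply directly; a minimal sketch (yields in units of 10^11 platelets, coefficients and cut-off taken from the abstract):

```python
def corrected_yield(predicted):
    """Regression correction reported in the study:
    actual PLT yield = 0.221 + 1.254 * software-predicted yield
    (all yields in units of 1e11 platelets)."""
    return 0.221 + 1.254 * predicted

DP_CUTOFF = 4.65  # ROC-derived software cut-off (x1e11) for a double product

pred = 4.72                            # mean software prediction in the study
actual = corrected_yield(pred)         # ~6.14, close to the observed 6.12 mean
is_double_product = pred >= DP_CUTOFF  # prediction exceeds the DP cut-off
```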

  20. The Business Value Web: Resourcing Business Processes and Solutions in Higher Education

    ERIC Educational Resources Information Center

    Norris, Donald M.; Olson, Mark A.

    2003-01-01

    Value is the benefit derived from an enterprise's assets by its stakeholders. For colleges and universities, value is derived by students, faculty, staff, other knowledge seekers, alumni, donors, suppliers, and stakeholders. They derive value through experiencing the institution's programs, services, knowledge assets, and other resources. This…

  1. Evaluating indices of lipid and protein content in lesser snow and Ross's geese during spring migration

    USGS Publications Warehouse

    Webb, Elisabeth B.; Fowler, Drew N.; Woodall, Brendan A.; Vrtiska, Mark P.

    2018-01-01

    Assessing nutrient stores in avian species is important for understanding the extent to which body condition influences success or failure in life-history events. We evaluated predictive models using morphometric characteristics to estimate total body lipids (TBL) and total body protein (TBP), based on traditional proximate analyses, in spring-migrating lesser snow geese (Anser caerulescens caerulescens) and Ross's geese (A. rossii). We also compared the performance of our lipid model with a previously derived predictive equation for TBL developed for nesting lesser snow geese. We used external and internal measurements on 612 lesser snow and 125 Ross's geese collected during spring migration in 2015 and 2016 within the Central and Mississippi flyways to derive and evaluate predictive models. Using a validation data set, our best performing lipid model for snow geese better predicted TBL (root mean square error [RMSE] of 23.56) compared with a model derived from nesting individuals (RMSE = 48.60), suggesting the importance of season-specific models for accurate lipid estimation. Models that included body mass and the abdominal fat deposit best predicted TBL determined by proximate analysis in both species (lesser snow geese, R2 = 0.87, RMSE = 23.56; Ross's geese, R2 = 0.89, RMSE = 13.75). Models incorporating a combination of external structural measurements in addition to internal muscle and body mass best predicted protein values (R2 = 0.85, RMSE = 19.39 and R2 = 0.85, RMSE = 7.65 for lesser snow and Ross's geese, respectively), but protein models including only body mass and body size were also competitive and extended the utility of our equations for field applications. Our models therefore indicate the importance of specimen dissection and measurement of the abdominal fat pad for the most accurate lipid estimates, while also providing alternative dissection-free methods for estimating protein.
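
    The model comparison rests on root mean square error over a validation set; a minimal sketch with invented lipid values (the RMSE figures quoted in the abstract come from the real data):

```python
import math

def rmse(pred, obs):
    """Root mean square error used to compare candidate predictive models."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical validation data (total body lipid, g): a season-specific
# model tracks the observations more closely than a model fit to nesting
# birds, mirroring the RMSE gap reported in the abstract (23.56 vs 48.60).
observed      = [100.0, 150.0, 200.0, 250.0]
spring_model  = [110.0, 145.0, 190.0, 260.0]
nesting_model = [140.0, 190.0, 150.0, 300.0]

err_spring  = rmse(spring_model, observed)   # ~9.0
err_nesting = rmse(nesting_model, observed)  # ~45.3
```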

  2. Derivation of a regional active-optical reflectance sensor corn algorithm

    USDA-ARS?s Scientific Manuscript database

    Active-optical reflectance sensor (AORS) algorithms developed for in-season corn (Zea mays L.) N management have traditionally been derived using sub-regional scale information. However, studies have shown these previously developed AORS algorithms are not consistently accurate when used on a region...

  3. Dynamic Magnetostriction of CoFe2 O4 and Its Role in Magnetoelectric Composites

    NASA Astrophysics Data System (ADS)

    Aubert, A.; Loyau, V.; Pascal, Y.; Mazaleyrat, F.; LoBue, M.

    2018-04-01

    Applications of magnetostrictive materials commonly involve the use of the dynamic deformation, i.e., the piezomagnetic effect. Usually, this effect is described by the strain derivative ∂λ/∂H, which is deduced from the quasistatic magnetostrictive curve. However, the strain derivative might not accurately describe dynamic deformation in semihard materials such as cobalt ferrite (CFO). To highlight this issue, dynamic magnetostriction measurements of cobalt ferrite are performed and compared with the strain derivative. The experiment shows that the measured piezomagnetic coefficients are much lower than the strain derivative. To point out the direct application of this effect, low-frequency magnetoelectric (ME) measurements are also conducted on CFO/Pb(Zr,Ti)O3 bilayers. The experimental data are compared with calculated magnetoelectric coefficients which include a measured dynamic coefficient and result in very low relative error (<5%), highlighting the relevance of using a piezomagnetic coefficient derived from dynamic magnetostriction instead of a strain derivative coefficient to model ME composites. The magnetoelectric effect is then measured for several amplitudes of the alternating field Hac, and a nonlinear response is revealed. Based on these results, a CFO/Pb(Zr,Ti)O3/CFO trilayer is made exhibiting a high magnetoelectric coefficient of 578 mV/A (approximately 460 mV/(cm·Oe)) in an ac field of 38.2 kA/m (about 48 mT) at low frequency, which is 3 times higher than the measured value at 0.8 kA/m (approximately 1 mT). We discuss the viability of using semihard materials like cobalt ferrite for dynamic magnetostrictive applications such as the magnetoelectric effect.

  4. Fractional viscoelasticity in fractal and non-fractal media: Theory, experimental validation, and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Somayeh; Miles, Paul; Hussaini, M. Yousuff; Oates, William S.

    2018-02-01

    In this paper, fractional and non-fractional viscoelastic models for elastomeric materials are derived and analyzed in comparison to experimental results. The viscoelastic models are derived by expanding thermodynamic balance equations for both fractal and non-fractal media. The order of the fractional time derivative is shown to strongly affect the accuracy of the viscoelastic constitutive predictions. Model validation uses experimental data describing viscoelasticity of the dielectric elastomer Very High Bond (VHB) 4910. Since these materials are known for their broad applications in smart structures, it is important to characterize and accurately predict their behavior across a large range of time scales. Whereas integer order viscoelastic models can yield reasonable agreement with data, the model parameters often lack robustness in prediction at different deformation rates. Fractional order models of viscoelasticity, by contrast, provide a framework to more accurately quantify complex rate-dependent behavior. Prior research that has considered fractional order viscoelasticity lacks experimental validation and contains limited links between viscoelastic theory and fractional order derivatives. To address these issues, we use fractional order operators to experimentally validate fractional and non-fractional viscoelastic models in elastomeric solids using Bayesian uncertainty quantification. The fractional order model is found to be advantageous, as its predictions are significantly more accurate than those of integer order viscoelastic models for deformation rates spanning four orders of magnitude.
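
    For readers unfamiliar with fractional order operators, a short numerical sketch: the Grünwald-Letnikov series approximates a fractional derivative and can be checked against the known half-derivative of f(t) = t. This is generic textbook machinery, not the constitutive model of the paper.

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative:
    D^a f(t) ~ h^(-a) * sum_k w_k f(t - k*h), with w_0 = 1 and
    w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    n = int(round(t / h))
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k
        acc += w * f(t - k * h)
    return acc / h ** alpha

# Check against the exact half-derivative of f(t) = t on t >= 0:
# D^(1/2) t = 2 * sqrt(t) / sqrt(pi)
num = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 2.0 / math.sqrt(math.pi)  # ~1.1284
```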

  5. The distribution of ticks (Acari: Ixodidae) of domestic livestock in Portugal.

    PubMed

    Estrada-Peña, Agustín; Santos-Silva, Maria Margarida

    2005-01-01

    This paper introduces the first countrywide faunistic study of the tick parasites of ruminants in Portugal. The aim of this study was to map accurately the distribution of the ticks Dermacentor marginatus, Rhipicephalus (Boophilus) annulatus, R. bursa, Hyalomma m. marginatum, H. lusitanicum and Ixodes ricinus in Portugal. Additional information about the abiotic preferences of these species was obtained from abiotic (temperature- and vegetation-derived) variables recorded from remotely sensed information at a nominal resolution of 1.1 km(2). A further aim was the development of predictive models of distribution using Classification and Regression Trees (CART) methodologies. Four species (R. annulatus, R. bursa, D. marginatus and H. m. marginatum) are mostly restricted to south-eastern parts of the country, under hot and dry climate conditions of Mediterranean type. H. lusitanicum has been collected almost exclusively in the southern half of Portugal. I. ricinus has a very patchy distribution and is mainly associated with vegetation of Quercus spp., found in southern zones of the country, but it is present also in the more humid western part. A variable number of abiotic variables, mainly temperature derived, are able to describe the preferences of the tick species. It is remarkable that variables derived from maximum values of the Normalized Difference Vegetation Index (yearly or summer-derived) serve only to discriminate areas where I. ricinus has been collected. CART models are able to map the distribution of these ticks with accuracy ranging between 75.3% and 96.4% of actual positive sites.

  6. Forest/non-forest stratification in Georgia with Landsat Thematic Mapper data

    Treesearch

    William H. Cooke

    2000-01-01

    Geographically accurate Forest Inventory and Analysis (FIA) data may be useful for training, classification, and accuracy assessment of Landsat Thematic Mapper (TM) data. Minimum expectation for maps derived from Landsat data is accurate discrimination of several land cover classes. Landsat TM costs have decreased dramatically, but acquiring cloud-free scenes at...

  7. Accurate electric multipole moment, static polarizability and hyperpolarizability derivatives for N2

    NASA Astrophysics Data System (ADS)

    Maroulis, George

    2003-02-01

    We report accurate values of the electric moments, static polarizabilities, hyperpolarizabilities and their respective derivatives for N2. Our values have been extracted from finite-field Møller-Plesset perturbation theory and coupled cluster calculations performed with carefully designed basis sets. A large [15s12p9d7f] basis set consisting of 290 CGTF is expected to provide reference self-consistent-field values of near-Hartree-Fock quality for all properties. The Hartree-Fock limit for the mean hyperpolarizability is estimated at γ̄ = 715 ± 4 e^4 a_0^4 E_h^-3 at the experimental bond length R_e = 2.07432 a_0. Accurate estimates of the electron correlation effects were obtained with a [10s7p6d4f] basis set. Our best values are Θ = -1.1258 e a_0^2 for the quadrupole and Φ = -6.75 e a_0^4 for the hexadecapole moment, ᾱ = 11.7709 and Δα = 4.6074 e^2 a_0^2 E_h^-1 for the mean and the anisotropy of the dipole polarizability, C̄ = 41.63 e^2 a_0^4 E_h^-1 for the mean quadrupole polarizability and γ̄ = 927 e^4 a_0^4 E_h^-3 for the dipole hyperpolarizability. The latter value is quite close to Shelton's experimental estimate of 917 ± 5 e^4 a_0^4 E_h^-3 [D. P. Shelton, Phys. Rev. A 42, 2578 (1990)]. The R dependence of all properties has been calculated with a [7s5p4d2f] basis set. At the CCSD(T) level of theory the dipole polarizability varies around R_e as ᾱ(R)/e^2 a_0^2 E_h^-1 = 11.8483 + 6.1758(R - R_e) + 0.9191(R - R_e)^2 - 0.8212(R - R_e)^3 - 0.0006(R - R_e)^4 and Δα(R)/e^2 a_0^2 E_h^-1 = 4.6032 + 7.0301(R - R_e) + 1.9340(R - R_e)^2 - 0.5708(R - R_e)^3 + 0.1949(R - R_e)^4. For the Cartesian components and the mean of γ_αβγδ: (dγ_zzzz/dR)_e = 1398, (dγ_xxxx/dR)_e = 867, (dγ_xxzz/dR)_e = 317, and (dγ̄/dR)_e = 994 e^4 a_0^3 E_h^-3. For the quadrupole polarizability C_αβ,γδ, we report (dC_zz,zz/dR)_e = 19.20, (dC_xz,xz/dR)_e = 16.55, (dC_xx,xx/dR)_e = 10.20, and (dC̄/dR)_e = 23.31 e^2 a_0^3 E_h^-1. At the MP2 level of theory, for the components of the dipole-octopole polarizability (E_α,βγδ) and the mean dipole-dipole-octopole hyperpolarizability B̄, we have obtained (dE_z,zzz/dR)_e = 36.71, (dE_x,xxx/dR)_e = -12.94 e^2 a_0^3 E_h^-1, and (dB̄/dR)_e = -108 e^3 a_0^3 E_h^-2. In comparison with some other 14-electron systems, N2 appears to be less (hyper)polarizable than most: near the Hartree-Fock limit we observe ᾱ(N2) < ᾱ(CO) < ᾱ(HCN) < ᾱ(BF) < ᾱ(HCCH) and γ̄(N2) < γ̄(CO) < γ̄(HCN) < γ̄(HCCH) < γ̄(BF).

  8. GPS receiver CODE bias estimation: A comparison of two methods

    NASA Astrophysics Data System (ADS)

    McCaffrey, Anthony M.; Jayachandran, P. T.; Themens, D. R.; Langley, R. B.

    2017-04-01

    The Global Positioning System (GPS) is a valuable tool in the measurement and monitoring of ionospheric total electron content (TEC). To obtain accurate GPS-derived TEC, satellite and receiver hardware biases, known as differential code biases (DCBs), must be estimated and removed. The Center for Orbit Determination in Europe (CODE) provides monthly averages of receiver DCBs for a significant number of stations in the International Global Navigation Satellite Systems Service (IGS) network. A comparison of the monthly receiver DCBs provided by CODE with DCBs estimated using the minimization of standard deviations (MSD) method, on both daily and monthly time intervals, is presented. Calibrated TEC obtained using CODE-derived DCBs is accurate to within 0.74 TEC units (TECU) in differenced slant TEC (sTEC), while calibrated sTEC using MSD-derived DCBs results in an accuracy of 1.48 TECU.
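
    The MSD idea can be sketched on synthetic data: scan candidate receiver biases and keep the one that minimizes the spread of bias-corrected vertical TEC across satellites, since a smooth ionosphere implies a small spread. All numbers below are invented for illustration.

```python
import math

def estimate_receiver_dcb(slant_tec, mapping, candidates):
    """Minimization-of-standard-deviations sketch: pick the receiver bias
    that makes the bias-corrected vertical TEC most consistent across
    simultaneously observed satellites."""
    def spread(bias):
        vtec = [(s - bias) / m for s, m in zip(slant_tec, mapping)]
        mean = sum(vtec) / len(vtec)
        return math.sqrt(sum((v - mean) ** 2 for v in vtec) / len(vtec))
    return min(candidates, key=spread)

# Synthetic epoch: a true vertical TEC of 20 TECU seen through different
# obliquity (mapping) factors, plus a 5 TECU receiver bias on each slant.
true_vtec, true_bias = 20.0, 5.0
mapping = [1.0, 1.5, 2.0, 2.5]
slant_tec = [true_vtec * m + true_bias for m in mapping]

candidates = [b / 10.0 for b in range(0, 101)]  # 0.0 .. 10.0 TECU grid
best = estimate_receiver_dcb(slant_tec, mapping, candidates)  # recovers 5.0
```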

  9. A comparison of electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry for flow measurements

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Stricker, J.

    1985-01-01

    Electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry are compared as methods for the accurate measurement of refractive index and density change distributions of phase objects. Experimental results are presented to show that the two methods have comparable accuracy for measuring the first derivative of the interferometric fringe shift. The phase object for the measurements is a large crystal of KD*P, whose refractive index distribution can be changed accurately and repeatably for the comparison. Although the refractive index change causes only about one interferometric fringe shift over the entire crystal, the derivative shows considerable detail for the comparison. As electronic phase measurement methods, both methods are very accurate and are intrinsically compatible with computer controlled readout and data processing. Heterodyne moire is relatively inexpensive and has high variable sensitivity. Heterodyne holographic interferometry is better developed, and can be used with poor quality optical access to the experiment.

  10. Urban change detection procedures using Landsat digital data

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.; Toll, D. L.

    1982-01-01

    Landsat multispectral scanner data was applied to an urban change detection problem in Denver, CO. A dichotomous key yielding ten stages of residential development at the urban fringe was developed. This heuristic model allowed one to identify certain stages of development which are difficult to detect when performing digital change detection using Landsat data. The stages of development were evaluated in terms of their spectral and derived textural characteristics. Landsat band 5 (0.6-0.7 micron) and texture data produced change detection maps which were approximately 81 percent accurate. Results indicated that the stage of development and the spectral/textural features affect the change in the spectral values used for change detection. These preliminary findings will hopefully prove valuable for improved change detection at the urban fringe.

  11. Hexagonal boron nitride and water interaction parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yanbin; Aluru, Narayana R., E-mail: aluru@illinois.edu; Wagner, Lucas K.

    2016-04-28

    The study of hexagonal boron nitride (hBN) in microfluidic and nanofluidic applications at the atomic level requires accurate force field parameters to describe the water-hBN interaction. In this work, we begin with benchmark quality first principles quantum Monte Carlo calculations on the interaction energy between water and hBN, which are used to validate random phase approximation (RPA) calculations. We then proceed with RPA to derive force field parameters, which are used to simulate water contact angle on bulk hBN, attaining a value within the experimental uncertainties. This paper demonstrates that end-to-end multiscale modeling, starting at detailed many-body quantum mechanics and ending with macroscopic properties, with the approximations controlled along the way, is feasible for these systems.
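
    Force field parameters of this kind typically enter a 12-6 Lennard-Jones pair potential; a generic sketch with hypothetical parameter values (the paper's actual RPA-derived values are not reproduced here):

```python
def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy, the usual functional form for
    water-surface force-field interaction parameters."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Hypothetical water-hBN pair parameters (illustration only).
eps, sig = 0.5, 3.2  # well depth (kJ/mol), size parameter (Angstrom)

r_min = 2.0 ** (1.0 / 6.0) * sig          # analytic minimum of the 12-6 form
well_depth = -lj_energy(r_min, eps, sig)  # equals eps at the minimum
```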

  12. SHERMAN, a shape-based thermophysical model. I. Model description and validation

    NASA Astrophysics Data System (ADS)

    Magri, Christopher; Howell, Ellen S.; Vervack, Ronald J.; Nolan, Michael C.; Fernández, Yanga R.; Marshall, Sean E.; Crowell, Jenna L.

    2018-03-01

    SHERMAN, a new thermophysical modeling package designed for analyzing near-infrared spectra of asteroids and other solid bodies, is presented. The model's features, the methods it uses to solve for surface and subsurface temperatures, and the synthetic data it outputs are described. A set of validation tests demonstrates that SHERMAN produces accurate output in a variety of special cases for which correct results can be derived from theory. These cases include a family of solutions to the heat equation for which thermal inertia can have any value and thermophysical properties can vary with depth and with temperature. An appendix describes a new approximation method for estimating surface temperatures within spherical-section craters, more suitable for modeling infrared beaming at short wavelengths than the standard method.

  13. Refractive index investigation of poly(vinyl alcohol) films with TiO2 nanoparticle inclusions.

    PubMed

    Yovcheva, Temenuzhka; Vlaeva, Ivanka; Bodurov, Ivan; Dragostinova, Violeta; Sainov, Simeon

    2012-11-10

    The refractive index (RI) of polymer nanocomposites of poly(vinyl alcohol) films with TiO2 nanoparticle inclusions at low concentrations, up to 1.2 wt.%, was investigated. Accurate refractometric measurements, by a specially designed laser microrefractometer, were performed at wavelengths of 532 and 632.8 nm. The influence of TiO2 concentration on the RI dispersion curves was predicted based on the well-known Sellmeier model. A theoretical analysis in the small-filling-factor approximation was performed, and a relation between the effective RI of the nanocomposite and the weight concentration of the TiO2 nanofiller was derived. The experimental values were approximated by two different functions (a linear and a quadratic polynomial). The quadratic approximation yields the better result, with R2 = 0.90.
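
    The linear-versus-quadratic comparison can be reproduced on any data set with an ordinary least-squares fit; a self-contained sketch with invented RI values (the abstract's R2 = 0.90 refers to the real measurements):

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via normal equations; returns
    coefficients [c0, c1, ...] of c0 + c1*x + c2*x^2 + ..."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / A[r][r]
    return coeffs

def r_squared(xs, ys, coeffs):
    """Coefficient of determination for a polynomial fit."""
    fit = [sum(c * x ** i for i, c in enumerate(coeffs)) for x in xs]
    mean = sum(ys) / len(ys)
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, fit))
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical RI-vs-concentration data with slight curvature: the
# quadratic fit explains more of the variance than the linear one.
conc = [0.0, 0.3, 0.6, 0.9, 1.2]            # wt.% TiO2
ri   = [1.500, 1.508, 1.519, 1.533, 1.550]  # effective refractive index
r2_lin  = r_squared(conc, ri, polyfit(conc, ri, 1))
r2_quad = r_squared(conc, ri, polyfit(conc, ri, 2))
```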

  14. Analysis and optimization of population annealing

    NASA Astrophysics Data System (ADS)

    Amey, Christopher; Machta, Jonathan

    2018-03-01

    Population annealing is an easily parallelizable sequential Monte Carlo algorithm that is well suited for simulating the equilibrium properties of systems with rough free-energy landscapes. In this work we seek to understand and improve the performance of population annealing. We derive several useful relations between quantities that describe the performance of population annealing and use these relations to suggest methods to optimize the algorithm. These optimization methods were tested by performing large-scale simulations of the three-dimensional (3D) Edwards-Anderson (Ising) spin glass and measuring several observables. The optimization methods were found to substantially decrease the amount of computational work necessary as compared to previously used, unoptimized versions of population annealing. We also obtain more accurate values of several important observables for the 3D Edwards-Anderson model.
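
    A minimal sketch of the algorithm on a small toy system (a 1-D Ising ring rather than the 3D Edwards-Anderson spin glass of the paper, with illustrative population and annealing parameters):

```python
import math, random

def energy(spins):
    """1-D periodic Ising ring with J = 1: E = -sum_i s_i * s_{i+1}."""
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def population_annealing(n_spins=8, pop_size=200, n_steps=10,
                         beta_max=2.0, sweeps=5, seed=1):
    """Toy population annealing: reweight, resample, then equilibrate."""
    rng = random.Random(seed)
    pop = [[rng.choice((-1, 1)) for _ in range(n_spins)]
           for _ in range(pop_size)]
    beta = 0.0
    for step in range(1, n_steps + 1):
        beta_new = beta_max * step / n_steps
        # Reweight each replica by exp(-(beta' - beta) E) and resample
        # back to a fixed population size.
        weights = [math.exp(-(beta_new - beta) * energy(s)) for s in pop]
        pop = [list(s) for s in rng.choices(pop, weights, k=pop_size)]
        beta = beta_new
        # Equilibrate with Metropolis sweeps at the new temperature.
        for s in pop:
            for _ in range(sweeps * n_spins):
                i = rng.randrange(n_spins)
                dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % n_spins])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i] = -s[i]
    return sum(energy(s) for s in pop) / pop_size

mean_E = population_annealing()  # close to the ground-state energy of -8
```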

  15. Joint cosmic microwave background and weak lensing analysis: constraints on cosmological parameters.

    PubMed

    Contaldi, Carlo R; Hoekstra, Henk; Lewis, Antony

    2003-06-06

    We use cosmic microwave background (CMB) observations together with the Red-Sequence Cluster Survey weak lensing results to derive constraints on a range of cosmological parameters. This particular choice of observations is motivated by their robust physical interpretation and complementarity. Our combined analysis, including a weak nucleosynthesis constraint, yields accurate determinations of a number of parameters, including the amplitude of fluctuations sigma_8 = 0.89 +/- 0.05 and the matter density Omega_m = 0.30 +/- 0.03. We also find a value for the Hubble parameter of H_0 = 70 +/- 3 km s^-1 Mpc^-1, in good agreement with the Hubble Space Telescope key-project result. We conclude that the combination of CMB and weak lensing data provides some of the most powerful constraints available in cosmology today.

  16. A modified Holly-Preissmann scheme for simulating sharp concentration fronts in streams with steep velocity gradients using RIV1Q

    NASA Astrophysics Data System (ADS)

    Liu, Zhao-wei; Zhu, De-jun; Chen, Yong-can; Wang, Zhi-gang

    2014-12-01

    RIV1Q is the stand-alone water quality program of CE-QUAL-RIV1, a hydraulic and water quality model developed by the U.S. Army Corps of Engineers Waterways Experiment Station. It utilizes an operator-splitting algorithm, and the advection term in the governing equation is treated using the explicit two-point, fourth-order accurate Holly-Preissmann scheme in order to preserve numerical accuracy for the advection of sharp concentration gradients. In this scheme, the spatial derivative of the transport equation, which includes the derivative of velocity, is introduced to update the first derivative of the dependent variable. In streams with large cross-sectional variation, steep velocity gradients are common and must be estimated correctly. In the original version of RIV1Q, however, the derivative of velocity is approximated by a finite difference that is only first-order accurate. Its leading truncation error produces a numerical error in concentration that is related to the velocity and concentration gradients and increases with decreasing Courant number. The simulation may also become unstable when a sharp velocity drop occurs. In the present paper, the derivative of velocity is estimated with a modified second-order accurate scheme, and the corresponding numerical error in concentration decreases. Additionally, the stability of the simulation is improved. The modified scheme is verified with a hypothetical channel case, and the results demonstrate that satisfactory accuracy and stability can be achieved even when the Courant number is very low. Finally, the applicability of the modified scheme is discussed.
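
    The difference between the two velocity-derivative estimates is the classic forward-versus-central finite difference; a small sketch on an invented steep velocity profile (not RIV1Q data):

```python
import math

def forward_diff(f, x, h):
    """First-order accurate one-sided difference, O(h) truncation error."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Second-order accurate central difference, O(h^2) truncation error."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Hypothetical velocity profile with a steep gradient near x = 1
u = lambda x: math.tanh(10.0 * (x - 1.0))
du_exact = 10.0 / math.cosh(10.0 * (0.9 - 1.0)) ** 2  # exact du/dx at x = 0.9

h = 0.01
err_fwd = abs(forward_diff(u, 0.9, h) - du_exact)
err_cen = abs(central_diff(u, 0.9, h) - du_exact)
# The central (second-order) estimate is markedly closer to the exact value.
```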

  17. Improved symbol rate identification method for on-off keying and advanced modulation format signals based on asynchronous delayed sampling

    NASA Astrophysics Data System (ADS)

    Cui, Sheng; Jin, Shang; Xia, Wenjuan; Ke, Changjian; Liu, Deming

    2015-11-01

    Symbol rate identification (SRI) based on asynchronous delayed sampling is accurate, cost-effective and robust to impairments. For on-off keying (OOK) signals, the symbol rate can be derived from the periodicity of the second-order autocorrelation function (ACF2) of the delay tap samples. But it is found that when this method is applied to advanced modulation format signals with auxiliary amplitude modulation (AAM), incorrect results may be produced, because AAM has a significant impact on ACF2 periodicity, making the symbol period harder or even impossible to identify correctly. In this paper it is demonstrated that for these signals the first-order autocorrelation function (ACF1) has stronger periodicity and can be used in place of ACF2 to produce more accurate and robust results. Utilizing the characteristics of the ACFs, an improved SRI method is proposed to accommodate both OOK and advanced modulation format signals in a transparent manner. Furthermore, it is proposed that by minimizing the peak-to-average power ratio (PAPR) of the delay tap samples with an additional tunable dispersion compensator (TDC), the limited dispersion tolerance can be expanded to desired values.

  18. Accuracy of the HST Standard Astrometric Catalogs w.r.t. Gaia

    NASA Astrophysics Data System (ADS)

    Kozhurina-Platais, V.; Grogin, N.; Sabbi, E.

    2018-02-01

    The goal of astrometric calibration of the HST ACS/WFC and WFC3/UVIS imaging instruments is to provide a coordinate system free of distortion at the precision level of 0.1 pixel (4-5 mas) or better. This calibration is based on two HST astrometric standard fields in the vicinity of the globular clusters 47 Tuc and omega Cen, respectively. The derived calibration of the geometric distortion is assumed to be accurate down to 2-3 mas, but is this accuracy consistent with the true value? Now, with access to the globally accurate positions of the first Gaia data release (DR1), we find measurable offsets, rotation, scale differences and other deviations of the distortion parameters in the two HST standard astrometric catalogs. These deviations from a distortion-free and properly aligned coordinate system must be accounted for and corrected so that high-precision HST positions are free of systematic errors. We also find that the precision of the HST pixel coordinates is substantially better than the accuracy listed in Gaia DR1. Therefore, the next release of Gaia data is needed in order to finalize the components of distortion in the HST standard catalogs.
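    Measuring "offsets, rotation, scale" between two catalogs is commonly done with a least-squares 4-parameter similarity transform. The sketch below shows that fit in general form; it is not the authors' pipeline, and the synthetic coordinates, names and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def fit_similarity(xy_ref, xy_cat):
        """Least-squares similarity transform mapping xy_cat onto xy_ref.

        Solves x' = a*x - b*y + tx, y' = b*x + a*y + ty, so that
        scale = hypot(a, b) and rotation = atan2(b, a).
        Returns (scale, rotation_rad, translation).
        """
        x, y = xy_cat[:, 0], xy_cat[:, 1]
        n = len(x)
        A = np.zeros((2 * n, 4))
        A[0::2] = np.column_stack([x, -y, np.ones(n), np.zeros(n)])
        A[1::2] = np.column_stack([y, x, np.zeros(n), np.ones(n)])
        rhs = xy_ref.reshape(-1)                 # interleaved x'0, y'0, x'1, ...
        (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return np.hypot(a, b), np.arctan2(b, a), np.array([tx, ty])

    # Synthetic test: recover a known offset, rotation and scale exactly.
    rng = np.random.default_rng(1)
    cat = rng.uniform(-1000.0, 1000.0, (50, 2))      # pixel-like coordinates
    theta, scale, shift = 1e-4, 1.0005, np.array([0.3, -0.2])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    ref = scale * cat @ R.T + shift
    s_fit, th_fit, t_fit = fit_similarity(ref, cat)
    ```

    With matched star positions from an HST catalog and Gaia in place of the synthetic arrays, the fitted residuals after removing this transform would expose the remaining distortion terms.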

  19. Interference effects in phased beam tracing using exact half-space solutions.

    PubMed

    Boucher, Matthew A; Pluymers, Bert; Desmet, Wim

    2016-12-01

    Geometrical acoustics provides a correct solution to the wave equation for rectangular rooms with rigid boundaries and is an accurate approximation at high frequencies with nearly hard walls. When interference effects are important, phased geometrical acoustics is employed in order to account for phase shifts due to propagation and reflection. Error increases, however, with more absorption, complex impedance values, grazing incidence, smaller volumes and lower frequencies. Replacing the plane wave reflection coefficient with a spherical one reduces the error but results in slower convergence. Frequency-dependent stopping criteria are then applied to avoid calculating higher order reflections for frequencies that have already converged. Exact half-space solutions are used to derive two additional spherical wave reflection coefficients: (i) the Sommerfeld integral, consisting of a plane wave decomposition of a point source and (ii) a line of image sources located at complex coordinates. Phased beam tracing using exact half-space solutions agrees well with the finite element method for rectangular rooms with absorbing boundaries, at low frequencies and for rooms with different aspect ratios. Results are accurate even for long source-to-receiver distances. Finally, the crossover frequency between the plane and spherical wave reflection coefficients is discussed.
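    For context on the quantity being replaced, the plane-wave reflection coefficient for a locally reacting boundary of normalized impedance zeta is R(theta) = (zeta*cos(theta) - 1) / (zeta*cos(theta) + 1). The sketch below evaluates it; the impedance value and names are illustrative assumptions, and the spherical-wave coefficients derived in the paper are not reproduced here.

    ```python
    import numpy as np

    def plane_wave_R(zeta, theta):
        """Plane-wave reflection coefficient of a locally reacting surface.

        zeta: surface impedance normalized by rho*c (may be complex);
        theta: angle of incidence from the surface normal, in radians.
        """
        c = zeta * np.cos(theta)
        return (c - 1.0) / (c + 1.0)

    zeta = 10.0 + 5.0j                     # fairly hard, complex impedance
    R_normal = plane_wave_R(zeta, 0.0)
    R_grazing = plane_wave_R(zeta, np.radians(89.0))
    # As theta approaches 90 degrees, R tends toward -1 regardless of zeta:
    # the grazing-incidence regime where the plane-wave model is least accurate
    # and the spherical-wave corrections discussed above matter most.
    ```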

  20. Quantitative analysis of polypeptide pharmaceuticals by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Amini, Ahmad; Nilsson, Elin

    2008-02-13

    An accurate method based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) has been developed for the quantitative analysis of calcitonin and insulin in different commercially available pharmaceutical products. Tryptic peptides derived from these polypeptides were chemically modified at their C-terminal lysine residues with 2-methoxy-4,5-dihydro-imidazole (light tagging) as standard and deuterated 2-methoxy-4,5-dihydro-imidazole (heavy tagging) as internal standard (IS). The heavy-modified tryptic peptides (4D-Lys tag) differed by four atomic mass units from their light-labelled counterparts (4H-Lys tag). The normalized peak areas (the ratios between the light- and heavy-tagged peptides) were used to construct a standard curve from which the concentrations of the analytes were determined. The calcitonin and insulin contents of the analyzed pharmaceutical products were accurately determined, with less than 5% deviation between the present method and the manufacturer-specified values. It was also found that the cysteine residues in CSNLSTCVLGK from tryptic calcitonin were converted to lanthionine, through the loss of one sulfhydryl group, during the labelling procedure.
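    The quantification step described — a standard curve of normalized light/heavy peak-area ratios, inverted to read off the sample concentration — can be sketched as follows. The numbers, units and names are illustrative assumptions, not data from the paper.

    ```python
    import numpy as np

    # Calibration standards: known concentrations spiked with a fixed amount of
    # the heavy-tagged internal standard, and the measured light/heavy ratios.
    std_conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # e.g. mg/mL
    std_ratio = np.array([0.26, 0.49, 1.02, 1.98, 4.05])  # light/heavy areas

    # Linear standard curve: ratio = slope * conc + intercept.
    slope, intercept = np.polyfit(std_conc, std_ratio, 1)

    def quantify(sample_ratio):
        """Invert the calibration line to get the analyte concentration."""
        return (sample_ratio - intercept) / slope

    sample_conc = quantify(1.5)   # sample measured at a light/heavy ratio of 1.5
    ```

    Normalizing to the co-analyzed heavy peptide cancels the shot-to-shot intensity variation that otherwise makes MALDI poorly quantitative, which is the point of the isotope-tagging scheme.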
